[jira] [Commented] (HBASE-22138) [hbase-thirdparty] Undo our direct dependence on protos in google.protobuf.Any in Procedure.proto

2019-04-03 Thread Balazs Meszaros (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809569#comment-16809569
 ] 

Balazs Meszaros commented on HBASE-22138:
-

[~stack] yes, it is for the shell and the UI. Procedures are deserialized by 
{{ProcedureStateSerializer}} implementations. If the serialized procedure does 
not have a {{state_message}} field, then we deserialize its {{state_data}}, if 
that was the question.
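
For illustration only, a minimal sketch of that fallback (the class and field names below are hypothetical stand-ins, not the actual {{ProcedureUtil}}/{{ProcedureStateSerializer}} code):

{code:java}
// Hedged sketch: mirrors the proto fields mentioned above, state_message (new)
// and state_data (legacy). Hypothetical names, for illustration only.
final class SerializedProcedureSketch {
  byte[][] stateMessages; // new-style state, may be null or empty
  byte[] stateData;       // legacy serialized state

  interface StateSerializerSketch {
    Object fromStateMessages(byte[][] messages);
    Object fromStateData(byte[] data);
  }

  // Prefer the new state_message entries; fall back to the legacy state_data.
  Object deserializeState(StateSerializerSketch serializer) {
    if (stateMessages != null && stateMessages.length > 0) {
      return serializer.fromStateMessages(stateMessages);
    }
    return serializer.fromStateData(stateData);
  }
}
{code}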

> [hbase-thirdparty] Undo our direct dependence on protos in 
> google.protobuf.Any in Procedure.proto
> -
>
> Key: HBASE-22138
> URL: https://issues.apache.org/jira/browse/HBASE-22138
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Priority: Major
> Fix For: thirdparty-2.3.0
>
>
> In our shaded jar, we've bundled a few unshaded google protos. We make use of 
> these protos in some of our core classes. What is needed is a bit of careful 
> work undoing our dependence on these types, being careful to ensure we don't 
> break compatibility (it should be fine but needs some careful operation).
> I've targeted this at the next version of hbase-thirdparty.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20911) correct Swtich/case indentation in formatter template for eclipse

2019-04-03 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809555#comment-16809555
 ] 

Hudson commented on HBASE-20911:


Results for branch branch-1
[build #751 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/751/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/751//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/751//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/751//JDK8_Nightly_Build_Report_(Hadoop2)/]




(x) {color:red}-1 source release artifact{color}
-- See build output for details.


> correct Swtich/case indentation in formatter template for eclipse
> -
>
> Key: HBASE-20911
> URL: https://issues.apache.org/jira/browse/HBASE-20911
> Project: HBase
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 3.0.0, 1.4.10, 1.3.4, 2.3.0, 1.5.1, 1.2.12, 2.2.1
>
> Attachments: HBASE-20911.patch, HBASE-20911_v1.patch
>
>
> Making it consistent with our checkstyle requirements.
> {code}
>  
>   
>   **
>   
>   
>   
> 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22163) May send a warmup rpc for a splited parent region

2019-04-03 Thread Guanghao Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809553#comment-16809553
 ] 

Guanghao Zhang commented on HBASE-22163:


Now TRSP doesn't guarantee that the region will finally end up on the target 
server. Is it useful to warm up the region? And the normal TRSP will check the 
region state under the region-node lock. Maybe we should move the warmup inside TRSP?
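
For illustration, a minimal sketch of that idea (hypothetical stand-in types, not the actual TRSP/AssignmentManager code): re-check the region state under the region-node lock and skip the warmup RPC once the parent has been split.

{code:java}
// Hedged sketch only: the types below are stand-ins, not HBase classes.
final class WarmupGuardSketch {
  interface RegionNode {
    Object lock();            // stand-in for the region-node lock
    boolean isSplitParent();  // parent already split/offlined
  }

  interface WarmupRpc {
    void warmup(String encodedRegionName, String targetServer);
  }

  // Only send the warmup RPC if the region is still a live, non-split region.
  void maybeWarmup(RegionNode node, String encodedRegionName, String targetServer,
      WarmupRpc rpc) {
    synchronized (node.lock()) {
      if (node.isSplitParent()) {
        return; // split parent: its store files may already be archived, skip warmup
      }
      rpc.warmup(encodedRegionName, targetServer);
    }
  }
}
{code}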

> May send a warmup rpc for a splited parent region
> -
>
> Key: HBASE-22163
> URL: https://issues.apache.org/jira/browse/HBASE-22163
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.2.0, 2.3.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
>
> Found this problem when running ITBLL for branch-2.2.
> The master log shows that it tried to move a split parent region.
> {code:java}
> 2019-04-03,12:56:13,635 INFO 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Finished pid=2823, 
> state=SUCCESS; SplitTableRegionProcedure table=IntegrationTestBigLinkedList, 
> parent=a777f8fab6d17d37aaec842ba6035ad5, 
> daughterA=634feb79a583480597e1843647d11228, 
> daughterB=e370e6013ad35b83471671f2374278ab in 2.5930sec
> 2019-04-03,12:56:23,344 INFO org.apache.hadoop.hbase.master.HMaster: 
> Client=hdfs_tst_admin//10.142.58.8 move hri=a777f8fab6d17d37aaec842ba6035ad5, 
> source=, destination=c4-hadoop-tst-st30.bj,29100,1554265874772, running 
> balancer
> {code}
> And the regionserver will try to open the stores of this region. After 
> HBASE-20724, the compacted store files will be archived when opening the stores. 
> But there are still references to these files in the daughter regions' store 
> dirs. Then the daughter regions can't open because the files are not found.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22060) postOpenDeployTasks could send OPENED region transition state to the wrong master

2019-04-03 Thread Bahram Chehrazy (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bahram Chehrazy updated HBASE-22060:

Attachment: 
Reset-cached-rssStub-on-region-servers-as-soon-as-the-master-changes-v2.patch

> postOpenDeployTasks could send OPENED region transition state to the wrong 
> master
> -
>
> Key: HBASE-22060
> URL: https://issues.apache.org/jira/browse/HBASE-22060
> Project: HBase
>  Issue Type: Bug
>  Components: amv2, proc-v2
>Affects Versions: 3.0.0
>Reporter: Bahram Chehrazy
>Assignee: Duo Zhang
>Priority: Critical
> Attachments: 
> Reset-cached-rssStub-on-region-servers-as-soon-as-the-master-changes-v2.patch
>
>
> As was reported in HBASE-21788, we have repeatedly seen regions getting stuck 
> in OPENING after master restarts. Here is one scenario that I've observed 
> recently:
>  
> 1) There is a region in transit (RIT).
> 2) The active master aborts and begins shutting down.
> 3) The backup master becomes active quickly, finds the RIT, creates an 
> OpenRegionProcedure, and sends a request to some server.
> 4) The server quickly opens the region and posts the OPENED state transition, but 
> it uses its cached master instead of the new one. 
> 5) The old active master, which has not completely shut down its assignment 
> manager yet, receives the OPENED state report and ignores it, because no 
> corresponding procedure can be found.
> 6) The new master waits forever for a response to its OPEN region request.
>  
> This happens more often with the meta region because it's small and takes a 
> few seconds to open. Below are some related logs:
> *Previous HMaster:*
> 2019-03-14 13:19:16,310 ERROR [PEWorker-1] master.HMaster: * ABORTING 
> master ,17000,1552438242232: Shutting down HBase cluster: file 
> system not available *
> 2019-03-14 13:19:16,310 INFO [PEWorker-1] regionserver.HRegionServer: * 
> STOPPING region server ',17000,1552438242232' *
> 2019-03-14 13:20:54,358 WARN 
> [RpcServer.priority.FPBQ.Fifo.handler=11,queue=1,port=17000] 
> assignment.AssignmentManager: No matching procedure found for rit=OPEN, 
> location=*,17020,1552561955412, table=hbase:meta, 
> region=1588230740 transition to OPENED
> 2019-03-14 13:20:55,707 INFO [master/:17000] 
> assignment.AssignmentManager: Stopping assignment manager
> *New HMaster logs:*
> 2019-03-14 13:19:16,907 INFO [master/:17000:becomeActiveMaster] 
> master.ActiveMasterManager: Deleting ZNode for 
> /HBaseServerZnodeCommonDir/**/backup-masters/,17000,1552438259871
>  from backup master directory
> 2019-03-14 13:19:17,031 INFO [master/:17000:becomeActiveMaster] 
> master.ActiveMasterManager: Registered as active 
> master=,17000,1552438259871
> 2019-03-14 13:20:52,017 INFO [PEWorker-12] zookeeper.MetaTableLocator: 
> Setting hbase:meta (replicaId=0) location in ZooKeeper as 
> ,17020,1552536956826
> 2019-03-14 13:20:52,105 INFO [PEWorker-12] procedure2.ProcedureExecutor: 
> Initialized subprocedures=[\{pid=178230, ppid=178229, state=RUNNABLE, 
> hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
>  
> *HServer logs:*
> 2019-03-14 13:20:52,708 INFO [RS_CLOSE_META-regionserver/:17020-0] 
> handler.AssignRegionHandler: Open hbase:meta,,1.1588230740
> 2019-03-14 13:20:54,353 INFO [RS_CLOSE_META-regionserver/:17020-0] 
> regionserver.HRegion: Opened 1588230740; next sequenceid=229166
> 2019-03-14 13:20:54,356 INFO [RS_CLOSE_META-regionserver/:17020-0] 
> regionserver.HRegionServer: Post open deploy tasks for 
> hbase:meta,,1.1588230740
> 2019-03-14 13:20:54,358 INFO [RS_CLOSE_META-regionserver/:17020-0] 
> handler.AssignRegionHandler: Opened hbase:meta,,1.1588230740
>  
>  
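> For illustration only, a minimal sketch of the idea behind the attached patch 
> (hypothetical names, not the actual HRegionServer code): drop the cached master 
> stub as soon as a new active master registers, so the OPENED report goes to the 
> current master rather than the old one.
> {code:java}
> // Hedged sketch: stand-in types, not HBase's RegionServerStatusService stub.
> final class MasterStubCacheSketch {
>   private volatile String activeMaster;
>   private volatile Object cachedRssStub; // stub bound to activeMaster
>
>   // Called when ZooKeeper reports a change of active master.
>   synchronized void onActiveMasterChanged(String newMaster) {
>     if (!newMaster.equals(activeMaster)) {
>       activeMaster = newMaster;
>       cachedRssStub = null; // next report must reconnect to the new master
>     }
>   }
> }
> {code}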



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22060) postOpenDeployTasks could send OPENED region transition state to the wrong master

2019-04-03 Thread Bahram Chehrazy (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bahram Chehrazy updated HBASE-22060:

Attachment: (was: 
Reset-cached-rssStub-on-region-servers-as-soon-as-the-master-changes.patch)

> postOpenDeployTasks could send OPENED region transition state to the wrong 
> master
> -
>
> Key: HBASE-22060
> URL: https://issues.apache.org/jira/browse/HBASE-22060
> Project: HBase
>  Issue Type: Bug
>  Components: amv2, proc-v2
>Affects Versions: 3.0.0
>Reporter: Bahram Chehrazy
>Assignee: Duo Zhang
>Priority: Critical
> Attachments: 
> Reset-cached-rssStub-on-region-servers-as-soon-as-the-master-changes-v2.patch
>
>
> As was reported in HBASE-21788, we have repeatedly seen regions getting stuck 
> in OPENING after master restarts. Here is one scenario that I've observed 
> recently:
>  
> 1) There is a region in transit (RIT).
> 2) The active master aborts and begins shutting down.
> 3) The backup master becomes active quickly, finds the RIT, creates an 
> OpenRegionProcedure, and sends a request to some server.
> 4) The server quickly opens the region and posts the OPENED state transition, but 
> it uses its cached master instead of the new one. 
> 5) The old active master, which has not completely shut down its assignment 
> manager yet, receives the OPENED state report and ignores it, because no 
> corresponding procedure can be found.
> 6) The new master waits forever for a response to its OPEN region request.
>  
> This happens more often with the meta region because it's small and takes a 
> few seconds to open. Below are some related logs:
> *Previous HMaster:*
> 2019-03-14 13:19:16,310 ERROR [PEWorker-1] master.HMaster: * ABORTING 
> master ,17000,1552438242232: Shutting down HBase cluster: file 
> system not available *
> 2019-03-14 13:19:16,310 INFO [PEWorker-1] regionserver.HRegionServer: * 
> STOPPING region server ',17000,1552438242232' *
> 2019-03-14 13:20:54,358 WARN 
> [RpcServer.priority.FPBQ.Fifo.handler=11,queue=1,port=17000] 
> assignment.AssignmentManager: No matching procedure found for rit=OPEN, 
> location=*,17020,1552561955412, table=hbase:meta, 
> region=1588230740 transition to OPENED
> 2019-03-14 13:20:55,707 INFO [master/:17000] 
> assignment.AssignmentManager: Stopping assignment manager
> *New HMaster logs:*
> 2019-03-14 13:19:16,907 INFO [master/:17000:becomeActiveMaster] 
> master.ActiveMasterManager: Deleting ZNode for 
> /HBaseServerZnodeCommonDir/**/backup-masters/,17000,1552438259871
>  from backup master directory
> 2019-03-14 13:19:17,031 INFO [master/:17000:becomeActiveMaster] 
> master.ActiveMasterManager: Registered as active 
> master=,17000,1552438259871
> 2019-03-14 13:20:52,017 INFO [PEWorker-12] zookeeper.MetaTableLocator: 
> Setting hbase:meta (replicaId=0) location in ZooKeeper as 
> ,17020,1552536956826
> 2019-03-14 13:20:52,105 INFO [PEWorker-12] procedure2.ProcedureExecutor: 
> Initialized subprocedures=[\{pid=178230, ppid=178229, state=RUNNABLE, 
> hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
>  
> *HServer logs:*
> 2019-03-14 13:20:52,708 INFO [RS_CLOSE_META-regionserver/:17020-0] 
> handler.AssignRegionHandler: Open hbase:meta,,1.1588230740
> 2019-03-14 13:20:54,353 INFO [RS_CLOSE_META-regionserver/:17020-0] 
> regionserver.HRegion: Opened 1588230740; next sequenceid=229166
> 2019-03-14 13:20:54,356 INFO [RS_CLOSE_META-regionserver/:17020-0] 
> regionserver.HRegionServer: Post open deploy tasks for 
> hbase:meta,,1.1588230740
> 2019-03-14 13:20:54,358 INFO [RS_CLOSE_META-regionserver/:17020-0] 
> handler.AssignRegionHandler: Opened hbase:meta,,1.1588230740
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22060) postOpenDeployTasks could send OPENED region transition state to the wrong master

2019-04-03 Thread Bahram Chehrazy (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bahram Chehrazy updated HBASE-22060:

Attachment: (was: 
Reset-cached-rssStub-on-region-servers-as-soon-as-the-master-changes-v2.patch)

> postOpenDeployTasks could send OPENED region transition state to the wrong 
> master
> -
>
> Key: HBASE-22060
> URL: https://issues.apache.org/jira/browse/HBASE-22060
> Project: HBase
>  Issue Type: Bug
>  Components: amv2, proc-v2
>Affects Versions: 3.0.0
>Reporter: Bahram Chehrazy
>Assignee: Duo Zhang
>Priority: Critical
> Attachments: 
> Reset-cached-rssStub-on-region-servers-as-soon-as-the-master-changes-v2.patch
>
>
> As was reported in HBASE-21788, we have repeatedly seen regions getting stuck 
> in OPENING after master restarts. Here is one scenario that I've observed 
> recently:
>  
> 1) There is a region in transit (RIT).
> 2) The active master aborts and begins shutting down.
> 3) The backup master becomes active quickly, finds the RIT, creates an 
> OpenRegionProcedure, and sends a request to some server.
> 4) The server quickly opens the region and posts the OPENED state transition, but 
> it uses its cached master instead of the new one. 
> 5) The old active master, which has not completely shut down its assignment 
> manager yet, receives the OPENED state report and ignores it, because no 
> corresponding procedure can be found.
> 6) The new master waits forever for a response to its OPEN region request.
>  
> This happens more often with the meta region because it's small and takes a 
> few seconds to open. Below are some related logs:
> *Previous HMaster:*
> 2019-03-14 13:19:16,310 ERROR [PEWorker-1] master.HMaster: * ABORTING 
> master ,17000,1552438242232: Shutting down HBase cluster: file 
> system not available *
> 2019-03-14 13:19:16,310 INFO [PEWorker-1] regionserver.HRegionServer: * 
> STOPPING region server ',17000,1552438242232' *
> 2019-03-14 13:20:54,358 WARN 
> [RpcServer.priority.FPBQ.Fifo.handler=11,queue=1,port=17000] 
> assignment.AssignmentManager: No matching procedure found for rit=OPEN, 
> location=*,17020,1552561955412, table=hbase:meta, 
> region=1588230740 transition to OPENED
> 2019-03-14 13:20:55,707 INFO [master/:17000] 
> assignment.AssignmentManager: Stopping assignment manager
> *New HMaster logs:*
> 2019-03-14 13:19:16,907 INFO [master/:17000:becomeActiveMaster] 
> master.ActiveMasterManager: Deleting ZNode for 
> /HBaseServerZnodeCommonDir/**/backup-masters/,17000,1552438259871
>  from backup master directory
> 2019-03-14 13:19:17,031 INFO [master/:17000:becomeActiveMaster] 
> master.ActiveMasterManager: Registered as active 
> master=,17000,1552438259871
> 2019-03-14 13:20:52,017 INFO [PEWorker-12] zookeeper.MetaTableLocator: 
> Setting hbase:meta (replicaId=0) location in ZooKeeper as 
> ,17020,1552536956826
> 2019-03-14 13:20:52,105 INFO [PEWorker-12] procedure2.ProcedureExecutor: 
> Initialized subprocedures=[\{pid=178230, ppid=178229, state=RUNNABLE, 
> hasLock=false; org.apache.hadoop.hbase.master.assignment.OpenRegionProcedure}]
>  
> *HServer logs:*
> 2019-03-14 13:20:52,708 INFO [RS_CLOSE_META-regionserver/:17020-0] 
> handler.AssignRegionHandler: Open hbase:meta,,1.1588230740
> 2019-03-14 13:20:54,353 INFO [RS_CLOSE_META-regionserver/:17020-0] 
> regionserver.HRegion: Opened 1588230740; next sequenceid=229166
> 2019-03-14 13:20:54,356 INFO [RS_CLOSE_META-regionserver/:17020-0] 
> regionserver.HRegionServer: Post open deploy tasks for 
> hbase:meta,,1.1588230740
> 2019-03-14 13:20:54,358 INFO [RS_CLOSE_META-regionserver/:17020-0] 
> handler.AssignRegionHandler: Opened hbase:meta,,1.1588230740
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20911) correct Swtich/case indentation in formatter template for eclipse

2019-04-03 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809547#comment-16809547
 ] 

Hudson commented on HBASE-20911:


Results for branch branch-1.2
[build #719 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/719/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/719//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/719//JDK7_Nightly_Build_Report/]


(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.2/719//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> correct Swtich/case indentation in formatter template for eclipse
> -
>
> Key: HBASE-20911
> URL: https://issues.apache.org/jira/browse/HBASE-20911
> Project: HBase
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 3.0.0, 1.4.10, 1.3.4, 2.3.0, 1.5.1, 1.2.12, 2.2.1
>
> Attachments: HBASE-20911.patch, HBASE-20911_v1.patch
>
>
> Making it consistent with our checkstyle requirements.
> {code}
>  
>   
>   **
>   
>   
>   
> 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22163) May send a warmup rpc for a splited parent region

2019-04-03 Thread Guanghao Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-22163:
---
Description: 
Found this problem when running ITBLL for branch-2.2.

The master log shows that it tried to move a split parent region.
{code:java}
2019-04-03,12:56:13,635 INFO 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Finished pid=2823, 
state=SUCCESS; SplitTableRegionProcedure table=IntegrationTestBigLinkedList, 
parent=a777f8fab6d17d37aaec842ba6035ad5, 
daughterA=634feb79a583480597e1843647d11228, 
daughterB=e370e6013ad35b83471671f2374278ab in 2.5930sec

2019-04-03,12:56:23,344 INFO org.apache.hadoop.hbase.master.HMaster: 
Client=hdfs_tst_admin//10.142.58.8 move hri=a777f8fab6d17d37aaec842ba6035ad5, 
source=, destination=c4-hadoop-tst-st30.bj,29100,1554265874772, running balancer
{code}
And the regionserver will try to open the stores of this region. After 
HBASE-20724, the compacted store files will be archived when opening the stores. But 
there are still references to these files in the daughter regions' store dirs. 
Then the daughter regions can't open because the files are not found.

 

  was:
Found this problem when run ITBLL for branch-2.2.

The master log shows that it try to move a splited parent region.
{code:java}
2019-04-03,12:56:13,635 INFO 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Finished pid=2823, 
state=SUCCESS; SplitTableRegionProcedure table=IntegrationTestBigLinkedList, 
parent=a777f8fab6d17d37aaec842ba6035ad5, 
daughterA=634feb79a583480597e1843647d11228, 
daughterB=e370e6013ad35b83471671f2374278ab in 2.5930sec

2019-04-03,12:56:23,344 INFO org.apache.hadoop.hbase.master.HMaster: 
Client=hdfs_tst_admin//10.142.58.8 move hri=a777f8fab6d17d37aaec842ba6035ad5, 
source=, destination=c4-hadoop-tst-st30.bj,29100,1554265874772, running balancer
{code}
And the regionserver will try to open the stores of this region. After 
HBASE-20724, the compated store file will be archived when open the stroes. But 
there are still have reference for these file in daughter regions' store dir. 
Then the daughter will can't open region because file not found.

 


> May send a warmup rpc for a splited parent region
> -
>
> Key: HBASE-22163
> URL: https://issues.apache.org/jira/browse/HBASE-22163
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.2.0, 2.3.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
>
> Found this problem when running ITBLL for branch-2.2.
> The master log shows that it tried to move a split parent region.
> {code:java}
> 2019-04-03,12:56:13,635 INFO 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Finished pid=2823, 
> state=SUCCESS; SplitTableRegionProcedure table=IntegrationTestBigLinkedList, 
> parent=a777f8fab6d17d37aaec842ba6035ad5, 
> daughterA=634feb79a583480597e1843647d11228, 
> daughterB=e370e6013ad35b83471671f2374278ab in 2.5930sec
> 2019-04-03,12:56:23,344 INFO org.apache.hadoop.hbase.master.HMaster: 
> Client=hdfs_tst_admin//10.142.58.8 move hri=a777f8fab6d17d37aaec842ba6035ad5, 
> source=, destination=c4-hadoop-tst-st30.bj,29100,1554265874772, running 
> balancer
> {code}
> And the regionserver will try to open the stores of this region. After 
> HBASE-20724, the compacted store files will be archived when opening the stores. 
> But there are still references to these files in the daughter regions' store 
> dirs. Then the daughter regions can't open because the files are not found.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22163) May send a warmup rpc for a splited parent region

2019-04-03 Thread Guanghao Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-22163:
---
Description: 
Found this problem when running ITBLL for branch-2.2.

The master log shows that it tried to move a split parent region.
{code:java}
2019-04-03,12:56:13,635 INFO 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Finished pid=2823, 
state=SUCCESS; SplitTableRegionProcedure table=IntegrationTestBigLinkedList, 
parent=a777f8fab6d17d37aaec842ba6035ad5, 
daughterA=634feb79a583480597e1843647d11228, 
daughterB=e370e6013ad35b83471671f2374278ab in 2.5930sec

2019-04-03,12:56:23,344 INFO org.apache.hadoop.hbase.master.HMaster: 
Client=hdfs_tst_admin//10.142.58.8 move hri=a777f8fab6d17d37aaec842ba6035ad5, 
source=, destination=c4-hadoop-tst-st30.bj,29100,1554265874772, running balancer
{code}
And the regionserver will try to open the stores of this region. After 
HBASE-20724, the compacted store files will be archived when opening the stores. But 
there are still references to these files in the daughter regions' store dirs. 
Then the daughter regions can't open because the files are not found.

 

  was:
Found this problem when run ITBLL for branch-2.2.

The master log shows that it try to move a splited parent region.
{code:java}
2019-04-03,12:56:13,635 INFO 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Finished pid=2823, 
state=SUCCESS; SplitTableRegionProcedure table=IntegrationTestBigLinkedList, 
parent=a777f8fab6d17d37aaec842ba6035ad5, 
daughterA=634feb79a583480597e1843647d11228, 
daughterB=e370e6013ad35b83471671f2374278ab in 2.5930sec

2019-04-03,12:56:23,344 INFO org.apache.hadoop.hbase.master.HMaster: 
Client=hdfs_tst_admin//10.142.58.8 move hri=a777f8fab6d17d37aaec842ba6035ad5, 
source=, destination=c4-hadoop-tst-st30.bj,29100,1554265874772, running balancer
{code}
And the regionserver will try to open the stores of this region. After 
HBASE-20724, the compated store file will be archived when open the stroes. But 
there are still have reference for these file in daughter regions' store dir. 
Then the daughter will can't open because file not found.

 


> May send a warmup rpc for a splited parent region
> -
>
> Key: HBASE-22163
> URL: https://issues.apache.org/jira/browse/HBASE-22163
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.2.0, 2.3.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
>
> Found this problem when running ITBLL for branch-2.2.
> The master log shows that it tried to move a split parent region.
> {code:java}
> 2019-04-03,12:56:13,635 INFO 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Finished pid=2823, 
> state=SUCCESS; SplitTableRegionProcedure table=IntegrationTestBigLinkedList, 
> parent=a777f8fab6d17d37aaec842ba6035ad5, 
> daughterA=634feb79a583480597e1843647d11228, 
> daughterB=e370e6013ad35b83471671f2374278ab in 2.5930sec
> 2019-04-03,12:56:23,344 INFO org.apache.hadoop.hbase.master.HMaster: 
> Client=hdfs_tst_admin//10.142.58.8 move hri=a777f8fab6d17d37aaec842ba6035ad5, 
> source=, destination=c4-hadoop-tst-st30.bj,29100,1554265874772, running 
> balancer
> {code}
> And the regionserver will try to open the stores of this region. After 
> HBASE-20724, the compacted store files will be archived when opening the stores. 
> But there are still references to these files in the daughter regions' store 
> dirs. Then the daughter regions can't open because the files are not found.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-22163) May send a warmup rpc for a splited parent region

2019-04-03 Thread Guanghao Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang reassigned HBASE-22163:
--

Assignee: Guanghao Zhang

> May send a warmup rpc for a splited parent region
> -
>
> Key: HBASE-22163
> URL: https://issues.apache.org/jira/browse/HBASE-22163
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.2.0, 2.3.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
>
> Found this problem when running ITBLL for branch-2.2.
> The master log shows that it tried to move a split parent region.
> {code:java}
> 2019-04-03,12:56:13,635 INFO 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Finished pid=2823, 
> state=SUCCESS; SplitTableRegionProcedure table=IntegrationTestBigLinkedList, 
> parent=a777f8fab6d17d37aaec842ba6035ad5, 
> daughterA=634feb79a583480597e1843647d11228, 
> daughterB=e370e6013ad35b83471671f2374278ab in 2.5930sec
> 2019-04-03,12:56:23,344 INFO org.apache.hadoop.hbase.master.HMaster: 
> Client=hdfs_tst_admin//10.142.58.8 move hri=a777f8fab6d17d37aaec842ba6035ad5, 
> source=, destination=c4-hadoop-tst-st30.bj,29100,1554265874772, running 
> balancer
> {code}
> And the regionserver will try to open the stores of this region. After 
> HBASE-20724, the compacted store files will be archived when opening the stores. 
> But there are still references to these files in the daughter regions' store 
> dirs. Then the daughter regions can't open because the files are not found.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-22164) Add a warn log when rs report failed open region

2019-04-03 Thread Guanghao Zhang (JIRA)
Guanghao Zhang created HBASE-22164:
--

 Summary: Add a warn log when rs report failed open region
 Key: HBASE-22164
 URL: https://issues.apache.org/jira/browse/HBASE-22164
 Project: HBase
  Issue Type: Improvement
Reporter: Guanghao Zhang


Will make debugging easier...
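
As a rough illustration (hypothetical class and method names, not a concrete patch for this issue):

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hedged sketch: log at WARN when a regionserver reports a failed region open,
// so the failure shows up in the master log without enabling DEBUG logging.
final class FailedOpenWarnSketch {
  private static final Logger LOG = LoggerFactory.getLogger(FailedOpenWarnSketch.class);

  void onFailedOpenReport(String regionName, String serverName, String error) {
    LOG.warn("Failed to open region {} on server {}: {}", regionName, serverName, error);
  }
}
{code}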



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22163) May send a warmup rpc for a splited parent region

2019-04-03 Thread Guanghao Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-22163:
---
Affects Version/s: 2.3.0
   2.2.0
   3.0.0

> May send a warmup rpc for a splited parent region
> -
>
> Key: HBASE-22163
> URL: https://issues.apache.org/jira/browse/HBASE-22163
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.2.0, 2.3.0
>Reporter: Guanghao Zhang
>Priority: Major
>
> Found this problem when running ITBLL for branch-2.2.
> The master log shows that it tried to move a split parent region.
> {code:java}
> 2019-04-03,12:56:13,635 INFO 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Finished pid=2823, 
> state=SUCCESS; SplitTableRegionProcedure table=IntegrationTestBigLinkedList, 
> parent=a777f8fab6d17d37aaec842ba6035ad5, 
> daughterA=634feb79a583480597e1843647d11228, 
> daughterB=e370e6013ad35b83471671f2374278ab in 2.5930sec
> 2019-04-03,12:56:23,344 INFO org.apache.hadoop.hbase.master.HMaster: 
> Client=hdfs_tst_admin//10.142.58.8 move hri=a777f8fab6d17d37aaec842ba6035ad5, 
> source=, destination=c4-hadoop-tst-st30.bj,29100,1554265874772, running 
> balancer
> {code}
> And the regionserver will try to open the stores of this region. After 
> HBASE-20724, the compacted store files will be archived when opening the stores. 
> But there are still references to these files in the daughter regions' store 
> dirs. Then the daughter regions can't open because the files are not found.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22163) May send a warmup rpc for a splited parent region

2019-04-03 Thread Guanghao Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-22163:
---
Summary: May send a warmup rpc for a splited parent region  (was: May send 
a WARMUP rpc for a splited parent region)

> May send a warmup rpc for a splited parent region
> -
>
> Key: HBASE-22163
> URL: https://issues.apache.org/jira/browse/HBASE-22163
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Priority: Major
>
> Found this problem when running ITBLL for branch-2.2.
> The master log shows that it tried to move a split parent region.
> {code:java}
> 2019-04-03,12:56:13,635 INFO 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Finished pid=2823, 
> state=SUCCESS; SplitTableRegionProcedure table=IntegrationTestBigLinkedList, 
> parent=a777f8fab6d17d37aaec842ba6035ad5, 
> daughterA=634feb79a583480597e1843647d11228, 
> daughterB=e370e6013ad35b83471671f2374278ab in 2.5930sec
> 2019-04-03,12:56:23,344 INFO org.apache.hadoop.hbase.master.HMaster: 
> Client=hdfs_tst_admin//10.142.58.8 move hri=a777f8fab6d17d37aaec842ba6035ad5, 
> source=, destination=c4-hadoop-tst-st30.bj,29100,1554265874772, running 
> balancer
> {code}
> And the regionserver will try to open the stores of this region. After 
> HBASE-20724, the compacted store files will be archived when opening the stores. 
> But there are still references to these files in the daughter regions' store 
> dirs. Then the daughter regions can't open because the files are not found.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-22163) May send a WARMUP rpc for a splited parent region

2019-04-03 Thread Guanghao Zhang (JIRA)
Guanghao Zhang created HBASE-22163:
--

 Summary: May send a WARMUP rpc for a splited parent region
 Key: HBASE-22163
 URL: https://issues.apache.org/jira/browse/HBASE-22163
 Project: HBase
  Issue Type: Bug
Reporter: Guanghao Zhang


Found this problem when running ITBLL for branch-2.2.

The master log shows that it tried to move a split parent region.
{code:java}
2019-04-03,12:56:13,635 INFO 
org.apache.hadoop.hbase.procedure2.ProcedureExecutor: Finished pid=2823, 
state=SUCCESS; SplitTableRegionProcedure table=IntegrationTestBigLinkedList, 
parent=a777f8fab6d17d37aaec842ba6035ad5, 
daughterA=634feb79a583480597e1843647d11228, 
daughterB=e370e6013ad35b83471671f2374278ab in 2.5930sec

2019-04-03,12:56:23,344 INFO org.apache.hadoop.hbase.master.HMaster: 
Client=hdfs_tst_admin//10.142.58.8 move hri=a777f8fab6d17d37aaec842ba6035ad5, 
source=, destination=c4-hadoop-tst-st30.bj,29100,1554265874772, running balancer
{code}
And the regionserver will try to open the stores of this region. After 
HBASE-20724, the compacted store files will be archived when opening the stores. But 
there are still references to these files in the daughter regions' store dirs. 
Then the daughter regions can't open because the files are not found.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21688) Address WAL filesystem issues

2019-04-03 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809532#comment-16809532
 ] 

Hudson commented on HBASE-21688:


Results for branch branch-2.1
[build #1019 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1019/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1019//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1019//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1019//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Address WAL filesystem issues
> -
>
> Key: HBASE-21688
> URL: https://issues.apache.org/jira/browse/HBASE-21688
> Project: HBase
>  Issue Type: Bug
>  Components: Filesystem Integration, wal
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
>  Labels: s3
> Fix For: 3.0.0, 2.2.0, 2.0.6, 2.1.5
>
> Attachments: HBASE-21688-branch-2.0-v1.patch, 
> HBASE-21688-branch-2.1-v2.patch, HBASE-21688-branch-2.2-v1.patch, 
> HBASE-21688-master-addendum.patch, HBASE-21688-v1.patch
>
>
> Scan and fix code base to use new way of instantiating WAL File System. 
> https://issues.apache.org/jira/browse/HBASE-21457?focusedCommentId=16734688&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16734688
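> For illustration, a sketch of the pattern this refers to (assuming the 
> {{CommonFSUtils}} WAL helpers; verify the exact method names against the branch):
> {code:java}
> import java.io.IOException;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.hbase.util.CommonFSUtils;
>
> // Sketch: resolve the WAL filesystem from the WAL root dir, which may live on
> // a different filesystem than hbase.rootdir (e.g. WALs on HDFS, data on S3),
> // instead of reusing the root-dir filesystem.
> final class WalFsSketch {
>   static FileSystem walFileSystem(Configuration conf) throws IOException {
>     Path walRootDir = CommonFSUtils.getWALRootDir(conf);
>     return walRootDir.getFileSystem(conf);
>   }
> }
> {code}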



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20911) correct Swtich/case indentation in formatter template for eclipse

2019-04-03 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809533#comment-16809533
 ] 

Hudson commented on HBASE-20911:


Results for branch branch-2.1
[build #1019 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1019/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1019//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1019//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1019//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> correct Swtich/case indentation in formatter template for eclipse
> -
>
> Key: HBASE-20911
> URL: https://issues.apache.org/jira/browse/HBASE-20911
> Project: HBase
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 3.0.0, 1.4.10, 1.3.4, 2.3.0, 1.5.1, 1.2.12, 2.2.1
>
> Attachments: HBASE-20911.patch, HBASE-20911_v1.patch
>
>
> Making it consistent with our checkstyle requirements.
> {code}
>  
>   
>   **
>   
>   
>   
> 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22159) ByteBufferIOEngine should support write off-heap ByteBuff to the bufferArray

2019-04-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809531#comment-16809531
 ] 

Hadoop QA commented on HBASE-22159:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
34s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} HBASE-21879 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
14s{color} | {color:green} HBASE-21879 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
18s{color} | {color:green} HBASE-21879 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
29s{color} | {color:green} HBASE-21879 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
14s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
7s{color} | {color:green} HBASE-21879 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} HBASE-21879 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} hbase-common generated 0 new + 131 unchanged - 1 
fixed = 131 total (was 132) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
42s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
20s{color} | {color:red} hbase-common: The patch generated 3 new + 0 unchanged 
- 6 fixed = 3 total (was 6) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
14s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 19s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
42s{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 28m 48s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 76m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hbase.io.hfile.bucket.TestExclusiveMemoryMmapEngine |
|   | hadoop.hbase.io.hfile.bucket.TestByteBufferIOEngine |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-22159 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12964809/HBASE-22159.HBASE-21879.v2.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoo

[jira] [Commented] (HBASE-21688) Address WAL filesystem issues

2019-04-03 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809523#comment-16809523
 ] 

Hudson commented on HBASE-21688:


Results for branch branch-2.0
[build #1488 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1488/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1488//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1488//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1488//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> Address WAL filesystem issues
> -
>
> Key: HBASE-21688
> URL: https://issues.apache.org/jira/browse/HBASE-21688
> Project: HBase
>  Issue Type: Bug
>  Components: Filesystem Integration, wal
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
>  Labels: s3
> Fix For: 3.0.0, 2.2.0, 2.0.6, 2.1.5
>
> Attachments: HBASE-21688-branch-2.0-v1.patch, 
> HBASE-21688-branch-2.1-v2.patch, HBASE-21688-branch-2.2-v1.patch, 
> HBASE-21688-master-addendum.patch, HBASE-21688-v1.patch
>
>
> Scan and fix code base to use new way of instantiating WAL File System. 
> https://issues.apache.org/jira/browse/HBASE-21457?focusedCommentId=16734688&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16734688



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22155) Move 2.2.0 on to hbase-thirdparty-2.2.0

2019-04-03 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809520#comment-16809520
 ] 

stack commented on HBASE-22155:
---

hmm... just  
TEST-org.apache.hadoop.hbase.mapreduce.TestMultithreadedTableMapper.xml.[failed-to-read]
 failed?


> Move 2.2.0 on to hbase-thirdparty-2.2.0
> ---
>
> Key: HBASE-22155
> URL: https://issues.apache.org/jira/browse/HBASE-22155
> Project: HBase
>  Issue Type: Sub-task
>  Components: thirdparty
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-22155-branch-2.2-001.patch, 
> HBASE-22155-branch-2.2-001.patch, HBASE-22155-branch-2.2-001.patch, 
> HBASE-22155-branch-2.2-001.patch, HBASE-22155.branch-2.2.001.patch
>
>
> hbase-thirdparty-2.2.0 was just released. The 2.2.0 RM ([~zghaobac]) gave his 
> blessing in the parent issue that 2.2.0 should use thirdparty 2.2.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] hadoop-yetus commented on issue #110: For testing github PR

2019-04-03 Thread GitBox
hadoop-yetus commented on issue #110: For testing github PR
URL: https://github.com/apache/hbase/pull/110#issuecomment-479754495
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 236 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | hbaseanti | 0 |  Patch does not have any anti-patterns. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ HBASE-22147 Compile Tests _ |
   | +1 | mvninstall | 269 | HBASE-22147 passed |
   | +1 | compile | 55 | HBASE-22147 passed |
   | +1 | checkstyle | 33 | HBASE-22147 passed |
   | +1 | shadedjars | 252 | branch has no errors when building our shaded 
downstream artifacts. |
   | +1 | findbugs | 63 | HBASE-22147 passed |
   | +1 | javadoc | 22 | HBASE-22147 passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 235 | the patch passed |
   | +1 | compile | 52 | the patch passed |
   | +1 | javac | 52 | the patch passed |
   | +1 | checkstyle | 30 | the patch passed |
   | +1 | whitespace | 1 | The patch has no whitespace issues. |
   | +1 | shadedjars | 256 | patch has no errors when building our shaded 
downstream artifacts. |
   | +1 | hadoopcheck | 1298 | Patch does not cause any errors with Hadoop 
2.7.1 2.7.2 2.7.3 2.7.4 or 3.0.0. |
   | +1 | findbugs | 67 | the patch passed |
   | +1 | javadoc | 22 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 196 | hbase-client in the patch passed. |
   | +1 | asflicense | 11 | The patch does not generate ASF License warnings. |
   | | | 3169 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-110/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/110 |
   | Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
   | uname | Linux 9bd9ebfb6720 4.4.0-137-generic #163-Ubuntu SMP Mon Sep 24 
13:14:43 UTC 2018 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | HBASE-22147 / 604e7e2346 |
   | maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | Default Java | 1.8.0_181 |
   | findbugs | v3.1.11 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-110/1/testReport/
 |
   | Max. process+thread count | 305 (vs. ulimit of 1) |
   | modules | C: hbase-client U: hbase-client |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-110/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (HBASE-22155) Move 2.2.0 on to hbase-thirdparty-2.2.0

2019-04-03 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-22155:
--
Attachment: HBASE-22155-branch-2.2-001.patch

> Move 2.2.0 on to hbase-thirdparty-2.2.0
> ---
>
> Key: HBASE-22155
> URL: https://issues.apache.org/jira/browse/HBASE-22155
> Project: HBase
>  Issue Type: Sub-task
>  Components: thirdparty
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-22155-branch-2.2-001.patch, 
> HBASE-22155-branch-2.2-001.patch, HBASE-22155-branch-2.2-001.patch, 
> HBASE-22155-branch-2.2-001.patch, HBASE-22155.branch-2.2.001.patch
>
>
> hbase-thirdparty-2.2.0 was just released. The 2.2.0 RM ([~zghaobac]) gave his 
> blessing in the parent issue that 2.2.0 should use thirdparty 2.2.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20911) correct Swtich/case indentation in formatter template for eclipse

2019-04-03 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809517#comment-16809517
 ] 

Hudson commented on HBASE-20911:


Results for branch branch-1.3
[build #708 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/708/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/708//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/708//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.3/708//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> correct Swtich/case indentation in formatter template for eclipse
> -
>
> Key: HBASE-20911
> URL: https://issues.apache.org/jira/browse/HBASE-20911
> Project: HBase
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 3.0.0, 1.4.10, 1.3.4, 2.3.0, 1.5.1, 1.2.12, 2.2.1
>
> Attachments: HBASE-20911.patch, HBASE-20911_v1.patch
>
>
> Making it consistent with our checkstyle requirements.
> {code}
>  
>   
>   **
>   
>   
>   
> 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22162) HBase connection refused after random time delays

2019-04-03 Thread Melanka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Melanka updated HBASE-22162:

Description: 
I have installed a single-node Hadoop 
[http://intellitech.pro/tutorial-hadoop-first-lab/] and HBase 
[http://intellitech.pro/hbase-installation-on-ubuntu/] successfully. I am 
using a Java agent to connect to HBase. After a random time period HBase 
stops working and the Java agent gives the following error message.
{code:java}
Call exception, tries=7, retries=7, started=8321 ms ago, cancelled=false, 
msg=Call to db-2.c.xxx-dev.internal/xx.xx.0.21:16201 failed on connection 
exception: 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException:
 Connection refused: db-2.c.xxx-dev.internal/xx.xx.0.21:16201, details=row 
'xxx,001:155390400,99' on table 'hbase:meta' at 
region=hbase:meta,,1.1588230740, 
hostname=db-2.c.xxx-dev.internal,16201,1553683263844, seqNum=-1{code}
Here are the Hbase and zookeeper logs

hbase-hduser-regionserver-db-2.log
{code:java}
[main] zookeeper.ZooKeeperMain: Processing delete 2019-03-30 02:11:44,089 DEBUG 
[main-SendThread(localhost:2181)] zookeeper.ClientCnxn: Reading reply 
sessionid:0x169bd98c099006e, packet:: clientPath:null serverPath:null 
finished:false header:: 1,2 replyHeader:: 1,300964,0 request:: 
'/hbase/rs/db-2.c.stl-cardio-dev.internal%2C16201%2C1553683263844,-1 response:: 
null{code}
hbase-hduser-zookeeper-db-2.log
{code:java}
server.FinalRequestProcessor: sessionid:0x169bd98c099004a type:getChildren 
cxid:0x28e3ad zxid:0xfffe txntype:unknown 
reqpath:/hbase/splitWAL{code}
my hbase-site.xml file is as follows
{code:java}
<configuration>
 //Here you have to set the path where you want HBase to store its files.
 <property>
 <name>hbase.rootdir</name>
 <value>hdfs://localhost:9000/hbase</value>
 </property>
 <property>
 <name>hbase.zookeeper.quorum</name>
 <value>localhost</value>
 </property>
 //Here you have to set the path where you want HBase to store its built in 
zookeeper files.
 <property>
 <name>hbase.zookeeper.property.dataDir</name>
 <value>${hbase.tmp.dir}/zookeeper</value>
 </property>
 <property>
 <name>hbase.cluster.distributed</name>
 <value>true</value>
 </property>
 <property>
 <name>hbase.zookeeper.property.clientPort</name>
 <value>2181</value>
 </property>
</configuration>
{code}
When I restart HBase it starts working again, and then stops working after a few 
days. I am wondering what the fix for this would be.

Thanks.

  was:
I have installed Hadoop single node 
[http://intellitech.pro/tutorial-hadoop-first-lab/] [] 
|http://intellitech.pro/tutorial-hadoop-first-lab/]and 
[[Hbase||http://intellitech.pro/hbase-installation-on-ubuntu/] 
[http://intellitech.pro/hbase-installation-on-ubuntu/] 
[]|http://intellitech.pro/hbase-installation-on-ubuntu/] successfully. I am 
using a Java agent to connect to the Hbase. After a random time period Hbase 
stop working and the java agent gives following error message.
{code:java}
Call exception, tries=7, retries=7, started=8321 ms ago, cancelled=false, 
msg=Call to db-2.c.xxx-dev.internal/xx.xx.0.21:16201 failed on connection 
exception: 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException:
 Connection refused: db-2.c.xxx-dev.internal/xx.xx.0.21:16201, details=row 
'xxx,001:155390400,99' on table 'hbase:meta' at 
region=hbase:meta,,1.1588230740, 
hostname=db-2.c.xxx-dev.internal,16201,1553683263844, seqNum=-1{code}
Here are the Hbase and zookeeper logs

hbase-hduser-regionserver-db-2.log
{code:java}
[main] zookeeper.ZooKeeperMain: Processing delete 2019-03-30 02:11:44,089 DEBUG 
[main-SendThread(localhost:2181)] zookeeper.ClientCnxn: Reading reply 
sessionid:0x169bd98c099006e, packet:: clientPath:null serverPath:null 
finished:false header:: 1,2 replyHeader:: 1,300964,0 request:: 
'/hbase/rs/db-2.c.stl-cardio-dev.internal%2C16201%2C1553683263844,-1 response:: 
null{code}
hbase-hduser-zookeeper-db-2.log
{code:java}
server.FinalRequestProcessor: sessionid:0x169bd98c099004a type:getChildren 
cxid:0x28e3ad zxid:0xfffe txntype:unknown 
reqpath:/hbase/splitWAL{code}
my hbase-site.xml file is as follows
{code:java}

 //Here you have to set the path where you want HBase to store its files.
 
 hbase.rootdir
 hdfs://localhost:9000/hbase
 
 
 hbase.zookeeper.quorum
 localhost
 
 //Here you have to set the path where you want HBase to store its built in 
zookeeper files.
 
 hbase.zookeeper.property.dataDir
 ${hbase.tmp.dir}/zookeeper
 
 
 hbase.cluster.distributed
 true
 
 
 hbase.zookeeper.property.clientPort
 2181
 
 {code}
when I restart the Hbase it will start working again and stop working after few 
days. I am wondering what would be the fix for this.

Thanks.


> HBase connection refused after random time delays
> -
>
> Key: HBASE-22162
> URL: https://issues.apache.org/jira/browse/HBASE-22162
> Project: HBase
>  Issue Type: Bug
>  Components: hadoop2, hbase-connectors, java
>Affects Versions

[jira] [Updated] (HBASE-22162) HBase connection refused after random time delays

2019-04-03 Thread Melanka (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Melanka updated HBASE-22162:

Description: 
I have installed Hadoop single node 
[http://intellitech.pro/tutorial-hadoop-first-lab/] [] 
|http://intellitech.pro/tutorial-hadoop-first-lab/]and 
[[Hbase||http://intellitech.pro/hbase-installation-on-ubuntu/] 
[http://intellitech.pro/hbase-installation-on-ubuntu/] 
[]|http://intellitech.pro/hbase-installation-on-ubuntu/] successfully. I am 
using a Java agent to connect to HBase. After a random time period HBase 
stops working and the Java agent gives the following error message.
{code:java}
Call exception, tries=7, retries=7, started=8321 ms ago, cancelled=false, 
msg=Call to db-2.c.xxx-dev.internal/xx.xx.0.21:16201 failed on connection 
exception: 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException:
 Connection refused: db-2.c.xxx-dev.internal/xx.xx.0.21:16201, details=row 
'xxx,001:155390400,99' on table 'hbase:meta' at 
region=hbase:meta,,1.1588230740, 
hostname=db-2.c.xxx-dev.internal,16201,1553683263844, seqNum=-1{code}
Here are the Hbase and zookeeper logs

hbase-hduser-regionserver-db-2.log
{code:java}
[main] zookeeper.ZooKeeperMain: Processing delete 2019-03-30 02:11:44,089 DEBUG 
[main-SendThread(localhost:2181)] zookeeper.ClientCnxn: Reading reply 
sessionid:0x169bd98c099006e, packet:: clientPath:null serverPath:null 
finished:false header:: 1,2 replyHeader:: 1,300964,0 request:: 
'/hbase/rs/db-2.c.stl-cardio-dev.internal%2C16201%2C1553683263844,-1 response:: 
null{code}
hbase-hduser-zookeeper-db-2.log
{code:java}
server.FinalRequestProcessor: sessionid:0x169bd98c099004a type:getChildren 
cxid:0x28e3ad zxid:0xfffe txntype:unknown 
reqpath:/hbase/splitWAL{code}
my hbase-site.xml file is as follows
{code:java}

<configuration>
  <!-- Here you have to set the path where you want HBase to store its files. -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
  <!-- Here you have to set the path where you want HBase to store its built in
       zookeeper files. -->
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>${hbase.tmp.dir}/zookeeper</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
 {code}
When I restart HBase it starts working again, then stops working again after a 
few days. I am wondering what the fix for this would be.

Thanks.

  was:
I have installed Hadoop single node 
[http://intellitech.pro/tutorial-hadoop-first-lab/] and HBase 
[http://intellitech.pro/hbase-installation-on-ubuntu/] successfully. I am 
using a Java agent to connect to HBase. After a random time period HBase 
stops working and the Java agent gives the following error message.
{code:java}
Call exception, tries=7, retries=7, started=8321 ms ago, cancelled=false, 
msg=Call to db-2.c.xxx-dev.internal/xx.xx.0.21:16201 failed on connection 
exception: 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException:
 Connection refused: db-2.c.xxx-dev.internal/xx.xx.0.21:16201, details=row 
'xxx,001:155390400,99' on table 'hbase:meta' at 
region=hbase:meta,,1.1588230740, 
hostname=db-2.c.xxx-dev.internal,16201,1553683263844, seqNum=-1{code}
Here are the Hbase and zookeeper logs

hbase-hduser-regionserver-db-2.log
{code:java}
[main] zookeeper.ZooKeeperMain: Processing delete 2019-03-30 02:11:44,089 DEBUG 
[main-SendThread(localhost:2181)] zookeeper.ClientCnxn: Reading reply 
sessionid:0x169bd98c099006e, packet:: clientPath:null serverPath:null 
finished:false header:: 1,2 replyHeader:: 1,300964,0 request:: 
'/hbase/rs/db-2.c.stl-cardio-dev.internal%2C16201%2C1553683263844,-1 response:: 
null{code}
hbase-hduser-zookeeper-db-2.log
{code:java}
server.FinalRequestProcessor: sessionid:0x169bd98c099004a type:getChildren 
cxid:0x28e3ad zxid:0xfffe txntype:unknown 
reqpath:/hbase/splitWAL{code}
my hbase-site.xml file is as follows
{code:java}

<configuration>
  <!-- Here you have to set the path where you want HBase to store its files. -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
  <!-- Here you have to set the path where you want HBase to store its built in
       zookeeper files. -->
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>${hbase.tmp.dir}/zookeeper</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
 {code}

 When I restart HBase it starts working again, then stops working again after 
a few days. I am wondering what the fix for this would be.

Thanks.


> HBase connection refused after random time delays
> -
>
> Key: HBASE-22162
> URL: https:

[jira] [Created] (HBASE-22162) HBase connection refused after random time delays

2019-04-03 Thread Melanka (JIRA)
Melanka created HBASE-22162:
---

 Summary: HBase connection refused after random time delays
 Key: HBASE-22162
 URL: https://issues.apache.org/jira/browse/HBASE-22162
 Project: HBase
  Issue Type: Bug
  Components: hadoop2, hbase-connectors, java
Affects Versions: 1.2.5
 Environment: Google Cloud VM
Reporter: Melanka


I have installed Hadoop single node 
[http://intellitech.pro/tutorial-hadoop-first-lab/] and HBase 
[http://intellitech.pro/hbase-installation-on-ubuntu/] successfully. I am 
using a Java agent to connect to HBase. After a random time period HBase 
stops working and the Java agent gives the following error message.
{code:java}
Call exception, tries=7, retries=7, started=8321 ms ago, cancelled=false, 
msg=Call to db-2.c.xxx-dev.internal/xx.xx.0.21:16201 failed on connection 
exception: 
org.apache.hbase.thirdparty.io.netty.channel.AbstractChannel$AnnotatedConnectException:
 Connection refused: db-2.c.xxx-dev.internal/xx.xx.0.21:16201, details=row 
'xxx,001:155390400,99' on table 'hbase:meta' at 
region=hbase:meta,,1.1588230740, 
hostname=db-2.c.xxx-dev.internal,16201,1553683263844, seqNum=-1{code}
Here are the Hbase and zookeeper logs

hbase-hduser-regionserver-db-2.log
{code:java}
[main] zookeeper.ZooKeeperMain: Processing delete 2019-03-30 02:11:44,089 DEBUG 
[main-SendThread(localhost:2181)] zookeeper.ClientCnxn: Reading reply 
sessionid:0x169bd98c099006e, packet:: clientPath:null serverPath:null 
finished:false header:: 1,2 replyHeader:: 1,300964,0 request:: 
'/hbase/rs/db-2.c.stl-cardio-dev.internal%2C16201%2C1553683263844,-1 response:: 
null{code}
hbase-hduser-zookeeper-db-2.log
{code:java}
server.FinalRequestProcessor: sessionid:0x169bd98c099004a type:getChildren 
cxid:0x28e3ad zxid:0xfffe txntype:unknown 
reqpath:/hbase/splitWAL{code}
my hbase-site.xml file is as follows
{code:java}

<configuration>
  <!-- Here you have to set the path where you want HBase to store its files. -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
  <!-- Here you have to set the path where you want HBase to store its built in
       zookeeper files. -->
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>${hbase.tmp.dir}/zookeeper</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
 {code}

 When I restart HBase it starts working again, then stops working again after 
a few days. I am wondering what the fix for this would be.

Thanks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22144) MultiRowRangeFilter does not work with reversed scans

2019-04-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809506#comment-16809506
 ] 

Hadoop QA commented on HBASE-22144:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
36s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
11s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
17s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  3m 15s{color} 
| {color:red} hbase-server generated 1 new + 193 unchanged - 1 fixed = 194 
total (was 194) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
13s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m  5s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
22s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}322m 31s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}373m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestSnapshotTemporaryDirectory |
|   | hadoop.hbase.client.TestAsyncTableAdminApi |
|   | hadoop.hbase.tool.TestSecureLoadIncrementalHFiles |
|   | hadoop.hbase.namespace.TestNamespaceAuditor |
|   | hadoop.hbase.client.TestFromClientSide3 |
|   | hadoop.hbase.client.TestSnapshotTemporaryDirectoryWithRegionReplicas |
|   | hadoop.hbase.replication.TestReplicationKillSlaveRSWithSeparateOldWALs |
|   | hadoop.hbase.replication.TestReplicationSmallTestsSync |
|   | hadoop.hbase.master.procedure.TestTruncateTableProcedure |
|   | hadoop.hbase.client.TestSnapshotDFSTemporaryDirectory |
|   | hadoop.hbase.client.replication.TestReplicationAdminWithClusters |
|   | hadoop.hbase.tool.TestLoa

[jira] [Commented] (HBASE-22086) space quota issue: deleting snapshot doesn't update the usage of table

2019-04-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809496#comment-16809496
 ] 

Hadoop QA commented on HBASE-22086:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 8s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m  
2s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  3m  7s{color} 
| {color:red} hbase-server generated 1 new + 193 unchanged - 1 fixed = 194 
total (was 194) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
33s{color} | {color:red} hbase-client: The patch generated 2 new + 2 unchanged 
- 0 fixed = 4 total (was 2) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m  
9s{color} | {color:red} hbase-server: The patch generated 8 new + 1 unchanged - 
0 fixed = 9 total (was 1) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
50s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 41s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
17s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}320m 10s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}371m  8s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestSnapshotTemporaryDirectory |
|   | hadoop.hbase.client.TestAsyncTableAdminApi |
|   | hadoop.hbase.master.TestAssignmentManagerMetrics |
|   | hadoop.hbase.tool.TestSecureLoadIncrementalHFiles |
|   | hadoop.hbase.client.TestSnapshotTemporaryDirectoryWithRegionReplicas |
|   | hadoop.hbase.master.procedure.TestTruncateTableProcedure |
|   | hadoop.hbase.client.TestSnapshotDFSTemporaryDirectory |
|   | hadoop.hbase.client.replication.

[jira] [Comment Edited] (HBASE-22147) Integrate the github pull request with hadoop QA.

2019-04-03 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809491#comment-16809491
 ] 

Zheng Hu edited comment on HBASE-22147 at 4/4/19 4:24 AM:
--

We're still working on this. It seems [~Apache9]'s testing PR [1] can trigger 
Hadoop QA [2] now; we are still waiting for the Hadoop QA feedback to be posted as 
a comment under the GitHub PR.  

1. https://github.com/apache/hbase/pull/110
2. 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/view/change-requests/job/PR-110/


was (Author: openinx):
We're still working on this, and seems [~Apache9]'s testing PR[1] can trigger 
hadoop QA now, still wait the hadoop QA's feedback to comment under github PR.  

1. https://github.com/apache/hbase/pull/110

> Integrate the github pull request with hadoop QA.
> -
>
> Key: HBASE-22147
> URL: https://issues.apache.org/jira/browse/HBASE-22147
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-22147) Integrate the github pull request with hadoop QA.

2019-04-03 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809491#comment-16809491
 ] 

Zheng Hu edited comment on HBASE-22147 at 4/4/19 4:20 AM:
--

We're still working on this, and seems [~Apache9]'s testing PR[1] can trigger 
hadoop QA now, still wait the hadoop QA's feedback to comment under github PR.  

1. https://github.com/apache/hbase/pull/110


was (Author: openinx):
We're still working on this, and seems [~Apache9]'s testing PR can trigger 
hadoop QA now, still wait the hadoop QA's feedback to comment under github PR.  

> Integrate the github pull request with hadoop QA.
> -
>
> Key: HBASE-22147
> URL: https://issues.apache.org/jira/browse/HBASE-22147
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22147) Integrate the github pull request with hadoop QA.

2019-04-03 Thread Zheng Hu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809491#comment-16809491
 ] 

Zheng Hu commented on HBASE-22147:
--

We're still working on this, and seems [~Apache9]'s testing PR can trigger 
hadoop QA now, still wait the hadoop QA's feedback to comment under github PR.  

> Integrate the github pull request with hadoop QA.
> -
>
> Key: HBASE-22147
> URL: https://issues.apache.org/jira/browse/HBASE-22147
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] [hbase] Apache9 opened a new pull request #110: For testing github PR

2019-04-03 Thread GitBox
Apache9 opened a new pull request #110: For testing github PR
URL: https://github.com/apache/hbase/pull/110
 
 
   Check whether we could trigger a yetus build


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-20911) correct Swtich/case indentation in formatter template for eclipse

2019-04-03 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809483#comment-16809483
 ] 

Hudson commented on HBASE-20911:


Results for branch branch-1.4
[build #728 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/728/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/728//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/728//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/728//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> correct Swtich/case indentation in formatter template for eclipse
> -
>
> Key: HBASE-20911
> URL: https://issues.apache.org/jira/browse/HBASE-20911
> Project: HBase
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 3.0.0, 1.4.10, 1.3.4, 2.3.0, 1.5.1, 1.2.12, 2.2.1
>
> Attachments: HBASE-20911.patch, HBASE-20911_v1.patch
>
>
> Making it consistent with our checkstyle requirments.
> {code}
>  
>   
>   **
>   
>   
>   
> 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22155) Move 2.2.0 on to hbase-thirdparty-2.2.0

2019-04-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809482#comment-16809482
 ] 

Hadoop QA commented on HBASE-22155:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
29s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2.2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
39s{color} | {color:green} branch-2.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
57s{color} | {color:green} branch-2.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
51s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
38s{color} | {color:green} branch-2.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
47s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 53s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}202m 16s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}263m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:42ca976 |
| JIRA Issue | HBASE-22155 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12964787/HBASE-22155-branch-2.2-001.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  shadedjars  
hadoopcheck  xml  compile  |
| uname | Linux 89e6b3c08ab5 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2.2 / 51d801d7fb |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16639/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16639/testReport/ |
| Max. process+thread count | 5454 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16639/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Move 2.2.0 on to hbase-thirdparty-2.2.0
> ---
>
> Key: HBASE-22155
> URL: https://issues.apache.org/jira/browse/HBASE-22155
> Project: HBase
>  Issue Type: Sub-task
>  Components: thirdparty
>Reporter: stack
>Assignee: stack
>Pr

[jira] [Updated] (HBASE-22159) ByteBufferIOEngine should support write off-heap ByteBuff to the bufferArray

2019-04-03 Thread Zheng Hu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zheng Hu updated HBASE-22159:
-
Attachment: HBASE-22159.HBASE-21879.v2.patch

> ByteBufferIOEngine should support write off-heap ByteBuff to the bufferArray
> 
>
> Key: HBASE-22159
> URL: https://issues.apache.org/jira/browse/HBASE-22159
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Attachments: HBASE-22159.HBASE-21879.v1.patch, 
> HBASE-22159.HBASE-21879.v2.patch
>
>
> In ByteBufferIOEngine , we have the assert: 
> {code}
>   @Override
>   public void write(ByteBuffer srcBuffer, long offset) throws IOException {
> assert srcBuffer.hasArray();
> bufferArray.putMultiple(offset, srcBuffer.remaining(), srcBuffer.array(),
> srcBuffer.arrayOffset());
>   }
>   @Override
>   public void write(ByteBuff srcBuffer, long offset) throws IOException {
> // When caching block into BucketCache there will be single buffer 
> backing for this HFileBlock.
> // This will work for now. But from the DFS itself if we get DBB then 
> this may not hold true.
> assert srcBuffer.hasArray();
> bufferArray.putMultiple(offset, srcBuffer.remaining(), srcBuffer.array(),
> srcBuffer.arrayOffset());
>   }
> {code}
> We should remove the assert and allow writing an off-heap ByteBuff to the bufferArray.
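
As a rough illustration only (this is a sketch, not the attached patch; it reuses 
the putMultiple signature quoted above and plain java.nio for the copy), the 
ByteBuffer overload could keep the array fast path and fall back to a temporary 
heap copy for direct buffers:
{code:java}
@Override
public void write(ByteBuffer srcBuffer, long offset) throws IOException {
  int len = srcBuffer.remaining();
  if (srcBuffer.hasArray()) {
    // Heap buffer: keep the existing path that hands the backing array over directly.
    bufferArray.putMultiple(offset, len, srcBuffer.array(), srcBuffer.arrayOffset());
  } else {
    // Off-heap (direct) buffer: copy into a temporary heap array before writing.
    byte[] tmp = new byte[len];
    srcBuffer.duplicate().get(tmp); // duplicate() leaves the caller's position untouched
    bufferArray.putMultiple(offset, len, tmp, 0);
  }
}
{code}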



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22128) Move namespace region then master crashed make deadlock

2019-04-03 Thread Bing Xiao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bing Xiao updated HBASE-22128:
--
Attachment: (was: HBASE-22128-branch-2.1-v5.patch)

> Move namespace region then master crashed make deadlock
> ---
>
> Key: HBASE-22128
> URL: https://issues.apache.org/jira/browse/HBASE-22128
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.5, 2.1.4
>Reporter: Bing Xiao
>Assignee: Bing Xiao
>Priority: Critical
> Fix For: 2.0.6, 2.1.5
>
> Attachments: HBASE-22128-branch-2.1-v1.patch, 
> HBASE-22128-branch-2.1-v2.patch, HBASE-22128-branch-2.1-v3.patch, 
> HBASE-22128-branch-2.1-v4.patch, HBASE-22128-branch-2.1-v5.patch
>
>
> When the namespace region is moved, an unassign procedure starts; once the 
> unassign procedure finishes, the namespace region is offline. If the master 
> crashes at that moment, the reboot gets stuck: master init blocks waiting for 
> the namespace table to come online, while the move region procedure cannot 
> proceed until master init finishes, so they deadlock.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22128) Move namespace region then master crashed make deadlock

2019-04-03 Thread Bing Xiao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bing Xiao updated HBASE-22128:
--
Attachment: HBASE-22128-branch-2.1-v5.patch

> Move namespace region then master crashed make deadlock
> ---
>
> Key: HBASE-22128
> URL: https://issues.apache.org/jira/browse/HBASE-22128
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.5, 2.1.4
>Reporter: Bing Xiao
>Assignee: Bing Xiao
>Priority: Critical
> Fix For: 2.0.6, 2.1.5
>
> Attachments: HBASE-22128-branch-2.1-v1.patch, 
> HBASE-22128-branch-2.1-v2.patch, HBASE-22128-branch-2.1-v3.patch, 
> HBASE-22128-branch-2.1-v4.patch, HBASE-22128-branch-2.1-v5.patch
>
>
> When the namespace region is moved, an unassign procedure starts; once the 
> unassign procedure finishes, the namespace region is offline. If the master 
> crashes at that moment, the reboot gets stuck: master init blocks waiting for 
> the namespace table to come online, while the move region procedure cannot 
> proceed until master init finishes, so they deadlock.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22152) Create a jenkins file for yetus to processing GitHub PR

2019-04-03 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809467#comment-16809467
 ] 

Duo Zhang commented on HBASE-22152:
---

Pushed the image to branch HBASE-22147.

Let me try to set up a jenkins job, according to the guide here

https://effectivemachines.com/2019/01/24/using-apache-yetus-with-jenkins-and-github-part-1/

> Create a jenkins file for yetus to processing GitHub PR
> ---
>
> Key: HBASE-22152
> URL: https://issues.apache.org/jira/browse/HBASE-22152
> Project: HBase
>  Issue Type: Sub-task
>  Components: build
>Reporter: Duo Zhang
>Priority: Major
> Attachments: HBASE-22152.patch
>
>
> I think we can just copy the jenkinsfile from the hadoop project.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22158) RawAsyncHBaseAdmin.getTableSplits should filter out none default replicas

2019-04-03 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-22158:
--
Attachment: HBASE-22158.patch

> RawAsyncHBaseAdmin.getTableSplits should filter out none default replicas
> -
>
> Key: HBASE-22158
> URL: https://issues.apache.org/jira/browse/HBASE-22158
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0, 2.1.5
>
> Attachments: HBASE-22158.patch, HBASE-22158.patch
>
>
> The getRegions method will return all the replicas for a table, so if we want 
> to get the splits information, we need to filter out non-default replicas, 
> otherwise we will get duplicated startKeys.
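
A sketch of the kind of filtering meant here; the replica-id check is written from 
memory of the client API (RegionInfo.DEFAULT_REPLICA_ID) and is not taken from the 
actual patch:
{code:java}
import java.util.List;
import java.util.stream.Collectors;
import org.apache.hadoop.hbase.client.RegionInfo;

final class SplitsHelper {
  /** Keep only default replicas so each start key appears exactly once. */
  static List<RegionInfo> defaultReplicasOnly(List<RegionInfo> regions) {
    return regions.stream()
        .filter(r -> r.getReplicaId() == RegionInfo.DEFAULT_REPLICA_ID)
        .collect(Collectors.toList());
  }
}
{code}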



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22157) Include the cause when constructing RestoreSnapshotException in restoreSnapshot

2019-04-03 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-22157:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.3.0
   2.2.0
   3.0.0
   Status: Resolved  (was: Patch Available)

Pushed to branch-2.2+.

Thanks [~openinx] for reviewing.

> Include the cause when constructing RestoreSnapshotException in 
> restoreSnapshot
> ---
>
> Key: HBASE-22157
> URL: https://issues.apache.org/jira/browse/HBASE-22157
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-22157.patch
>
>
> When implementing HBASE-21718, a snapshot related UT fails because of the 
> incorrect cause of RestoreSnapshotException. Finally I found that we just 
> create a RestoreSnapshotException without providing the cause in 
> restoreSnapshot.
> We should fix this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22157) Include the cause when constructing RestoreSnapshotException in restoreSnapshot

2019-04-03 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-22157:
--
Component/s: Admin

> Include the cause when constructing RestoreSnapshotException in 
> restoreSnapshot
> ---
>
> Key: HBASE-22157
> URL: https://issues.apache.org/jira/browse/HBASE-22157
> Project: HBase
>  Issue Type: Sub-task
>  Components: Admin
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-22157.patch
>
>
> When implementing HBASE-21718, a snapshot related UT fails because of the 
> incorrect cause of RestoreSnapshotException. Finally I found that we just 
> create a RestoreSnapshotException without providing the cause in 
> restoreSnapshot.
> We should fix this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20912) Add import order config in dev support for eclipse

2019-04-03 Thread Tak Lon (Stephen) Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809457#comment-16809457
 ] 

Tak Lon (Stephen) Wu commented on HBASE-20912:
--

As far as I investigated in HBASE-20837, this {{eclipse.importorder}} file needs to be 
added independently of {{hbase_eclipse_formatter.xml}}.

> Add import order config in dev support for eclipse
> --
>
> Key: HBASE-20912
> URL: https://issues.apache.org/jira/browse/HBASE-20912
> Project: HBase
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: eclipse.importorder
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20911) correct Swtich/case indentation in formatter template for eclipse

2019-04-03 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809455#comment-16809455
 ] 

Hudson commented on HBASE-20911:


SUCCESS: Integrated in Jenkins build HBase-1.3-IT #535 (See 
[https://builds.apache.org/job/HBase-1.3-IT/535/])
HBASE-20911 correct Swtich/case indentation in formatter template for 
(apurtell: 
[https://github.com/apache/hbase/commit/6e39205a3f20a924277a930e26fa7bc22e4d9cd8])
* (edit) dev-support/hbase_eclipse_formatter.xml


> correct Swtich/case indentation in formatter template for eclipse
> -
>
> Key: HBASE-20911
> URL: https://issues.apache.org/jira/browse/HBASE-20911
> Project: HBase
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 3.0.0, 1.4.10, 1.3.4, 2.3.0, 1.5.1, 1.2.12, 2.2.1
>
> Attachments: HBASE-20911.patch, HBASE-20911_v1.patch
>
>
> Making it consistent with our checkstyle requirments.
> {code}
>  
>   
>   **
>   
>   
>   
> 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-22007) Add restoreSnapshot and cloneSnapshot with acl methods in AsyncAdmin

2019-04-03 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang resolved HBASE-22007.
---
   Resolution: Fixed
Fix Version/s: 2.2.0

> Add restoreSnapshot and cloneSnapshot with acl methods in AsyncAdmin
> 
>
> Key: HBASE-22007
> URL: https://issues.apache.org/jira/browse/HBASE-22007
> Project: HBase
>  Issue Type: Task
>  Components: Admin, asyncclient, Client
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-22007-v1.patch, HBASE-22007-v1.patch, 
> HBASE-22007.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (HBASE-22007) Add restoreSnapshot and cloneSnapshot with acl methods in AsyncAdmin

2019-04-03 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang reopened HBASE-22007:
---

Cherry pick to branch-2.2.

> Add restoreSnapshot and cloneSnapshot with acl methods in AsyncAdmin
> 
>
> Key: HBASE-22007
> URL: https://issues.apache.org/jira/browse/HBASE-22007
> Project: HBase
>  Issue Type: Task
>  Components: Admin, asyncclient, Client
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-22007-v1.patch, HBASE-22007-v1.patch, 
> HBASE-22007.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22133) Forward port HBASE-22073 "/rits.jsp throws an exception if no procedure" to branch-2.2+

2019-04-03 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-22133:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to branch-2.2+.

Thanks [~zghaobac] for reviewing.

> Forward port HBASE-22073 "/rits.jsp throws an exception if no procedure" to 
> branch-2.2+
> ---
>
> Key: HBASE-22133
> URL: https://issues.apache.org/jira/browse/HBASE-22133
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-22133-v1.patch, HBASE-22133.patch
>
>
> The RIT procedure has been changed for branch-2.2+ so we can not use the 
> patch for branch-2.1 directly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21428) Performance issue due to userRegionLock in the ConnectionManager.

2019-04-03 Thread koo (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809446#comment-16809446
 ] 

koo commented on HBASE-21428:
-

[~stack] 

I am still using the old version. 
Going back is difficult, because the performance would be too 
slow. 
Let me take a look at the possibility of creating a custom HTableMultiplexer 
version.

thanks. :)

> Performance issue due to userRegionLock in the ConnectionManager.
> -
>
> Key: HBASE-21428
> URL: https://issues.apache.org/jira/browse/HBASE-21428
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.7
>Reporter: koo
>Priority: Major
>
> My service executes a lot of puts using HTableMultiplexer.
> After the version change, most of the requests are rejected.
> It works fine in 1.2.6.1, but there is a problem in 1.2.7.
> This issue is related to HBASE-19260.
> Most of my threads are spending a lot of time as shown below.
>  
> |"Worker-972" #2479 daemon prio=5 os_prio=0 tid=0x7f8cea86b000 nid=0x4c8c 
> waiting on condition [0x7f8b78104000]
>  java.lang.Thread.State: WAITING (parking)
>  at sun.misc.Unsafe.park(Native Method)
>  - parking to wait for <0x0005dd703b78> (a 
> java.util.concurrent.locks.ReentrantLock$NonfairSync)
>  at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>  at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>  at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
>  at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
>  at 
> java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:209)
>  at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
>  at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1274)
>  at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1186)
>  at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1170)
>  at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1127)
>  at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getRegionLocation(ConnectionManager.java:962)
>  at 
> org.apache.hadoop.hbase.client.HTableMultiplexer.put(HTableMultiplexer.java:206)
>  at 
> org.apache.hadoop.hbase.client.HTableMultiplexer.put(HTableMultiplexer.java:150)|
>  
> When I looked at the issue (HBASE-19260), I recognized the danger of allowing 
> access from multiple threads.
> However, many threads are already created within those limits, and
> I think it is very inefficient to allow only one thread access.
>  
> | this.metaLookupPool = getThreadPool(
>  conf.getInt("hbase.hconnection.meta.lookup.threads.max", 128),
>  conf.getInt("hbase.hconnection.meta.lookup.threads.core", 10),
>  "-metaLookup-shared-", new LinkedBlockingQueue());|
>  
> I want to suggest changing it to allow multiple locks (but not one per 
> thread).
> The following is pseudocode.
>  
> |int lockSize = conf.getInt("hbase.hconnection.meta.lookup.threads.max", 128) 
> / 2;
> BlockingQueue<ReentrantLock> userRegionLockQueue = new 
> LinkedBlockingQueue<ReentrantLock>();
> for (int i = 0; i < lockSize; i++) {
>   userRegionLockQueue.put(new ReentrantLock());
> }|
>  
> thanks.
>  
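
The suggestion above is essentially lock striping. A self-contained sketch of the 
idea follows; the class and method names are made up for illustration and this is 
not the ConnectionManager code:
{code:java}
import java.util.concurrent.locks.ReentrantLock;

/** Spread meta-lookup contention over N locks instead of one global userRegionLock. */
final class StripedRegionLocks {
  private final ReentrantLock[] locks;

  StripedRegionLocks(int stripes) {
    locks = new ReentrantLock[stripes];
    for (int i = 0; i < stripes; i++) {
      locks[i] = new ReentrantLock();
    }
  }

  /** Different rows usually map to different locks, while the same row always maps
   *  to the same lock, so duplicate lookups for one region are still serialized. */
  ReentrantLock lockFor(byte[] row) {
    int h = java.util.Arrays.hashCode(row);
    return locks[(h & 0x7fffffff) % locks.length];
  }
}
{code}
The caller would take lockFor(row).lock()/unlock() around the meta lookup instead 
of the single userRegionLock.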



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22133) Forward port HBASE-22073 "/rits.jsp throws an exception if no procedure" to branch-2.2+

2019-04-03 Thread Guanghao Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809444#comment-16809444
 ] 

Guanghao Zhang commented on HBASE-22133:


+1

> Forward port HBASE-22073 "/rits.jsp throws an exception if no procedure" to 
> branch-2.2+
> ---
>
> Key: HBASE-22133
> URL: https://issues.apache.org/jira/browse/HBASE-22133
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-22133-v1.patch, HBASE-22133.patch
>
>
> The RIT procedure has been changed for branch-2.2+ so we can not use the 
> patch for branch-2.1 directly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-15560) TinyLFU-based BlockCache

2019-04-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809438#comment-16809438
 ] 

Hadoop QA commented on HBASE-15560:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
31s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
25s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue} 16m 
43s{color} | {color:blue} branch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. {color} 
|
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 8s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hbase-resource-bundle hbase-shaded . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
21s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
44s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 11m 42s{color} 
| {color:red} root generated 1 new + 1376 unchanged - 1 fixed = 1377 total (was 
1377) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  2m 
46s{color} | {color:red} root: The patch generated 2 new + 55 unchanged - 1 
fixed = 57 total (was 56) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
6s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue}  5m 
56s{color} | {color:blue} patch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. {color} 
|
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
45s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 16s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hbase-resource-bundle hbase-shaded . {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}309m 
48s{color} | {color:green} root in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  2m 
14s{color} | {color:green} The patch does not gene

[jira] [Commented] (HBASE-22150) rssStub in HRegionServer is not thread safe and should not directly be used

2019-04-03 Thread Sergey Shelukhin (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809430#comment-16809430
 ] 

Sergey Shelukhin commented on HBASE-22150:
--

checkstyle is for inline if, I can fix on commit
+1 will commit tomorrow

> rssStub in HRegionServer is not thread safe and should not directly be used
> ---
>
> Key: HBASE-22150
> URL: https://issues.apache.org/jira/browse/HBASE-22150
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Bahram Chehrazy
>Assignee: Bahram Chehrazy
>Priority: Major
> Attachments: 
> rssStub-is-not-thread-safe-hence-should-not-be-accessed-directly.patch
>
>
> While working on a patch for HBASE-22060, I noticed that a unit test started 
> failing because region server crashed with NPE during initialization and 
> after aborting the master. It turned out that access to the rssStub was not 
> synchronized.
> The existing code in HRegionServer already takes care of this fact by copying 
> and null checking in most places, but there are a couple of places that are not so 
> careful, namely the reportForDuty and abort methods. 
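
For readers unfamiliar with the pattern being referenced, this is the usual 
copy-then-null-check idiom for a volatile stub. The sketch below is illustrative 
only: MasterStub stands in for the generated RegionServerStatusService stub, and 
the real field and method names in HRegionServer differ.
{code:java}
/** Illustrative stand-in for the generated master RPC stub. */
interface MasterStub {
  void regionServerReport(String load);
}

class ReportingSketch {
  // May be reset to null by another thread when the master connection drops.
  private volatile MasterStub rssStub;

  void reportForDuty(String load) {
    MasterStub rss = this.rssStub;   // read the volatile field exactly once
    if (rss == null) {
      return;                        // no master connection yet; caller retries later
    }
    rss.regionServerReport(load);    // the local copy cannot become null underneath us
  }
}
{code}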



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-22150) rssStub in HRegionServer is not thread safe and should not directly be used

2019-04-03 Thread Sergey Shelukhin (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Shelukhin reassigned HBASE-22150:


Assignee: Bahram Chehrazy  (was: Sergey Shelukhin)

> rssStub in HRegionServer is not thread safe and should not directly be used
> ---
>
> Key: HBASE-22150
> URL: https://issues.apache.org/jira/browse/HBASE-22150
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Bahram Chehrazy
>Assignee: Bahram Chehrazy
>Priority: Major
> Attachments: 
> rssStub-is-not-thread-safe-hence-should-not-be-accessed-directly.patch
>
>
> While working on a patch for HBASE-22060, I noticed that a unit test started 
> failing because region server crashed with NPE during initialization and 
> after aborting the master. It turned out that access to the rssStub was not 
> synchronized.
> The existing code in HRegionServer already takes care of this fact by copying 
> and null checking in most places, but there are a couple of places that are not so 
> careful, namely the reportForDuty and abort methods. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21718) Implement Admin based on AsyncAdmin

2019-04-03 Thread Duo Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-21718:
--
Attachment: HBASE-21718-HBASE-21512-v4.patch

> Implement Admin based on AsyncAdmin
> ---
>
> Key: HBASE-21718
> URL: https://issues.apache.org/jira/browse/HBASE-21718
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Attachments: HBASE-21718-HBASE-21512-v1.patch, 
> HBASE-21718-HBASE-21512-v2.patch, HBASE-21718-HBASE-21512-v3.patch, 
> HBASE-21718-HBASE-21512-v4.patch, HBASE-21718-HBASE-21512.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22133) Forward port HBASE-22073 "/rits.jsp throws an exception if no procedure" to branch-2.2+

2019-04-03 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809424#comment-16809424
 ] 

Duo Zhang commented on HBASE-22133:
---

Oh, forget this one. [~zghaobac] [~stack] Any concerns? Thanks.

> Forward port HBASE-22073 "/rits.jsp throws an exception if no procedure" to 
> branch-2.2+
> ---
>
> Key: HBASE-22133
> URL: https://issues.apache.org/jira/browse/HBASE-22133
> Project: HBase
>  Issue Type: Bug
>  Components: UI
>Reporter: Duo Zhang
>Assignee: Duo Zhang
>Priority: Major
> Fix For: 3.0.0, 2.2.0, 2.3.0
>
> Attachments: HBASE-22133-v1.patch, HBASE-22133.patch
>
>
> The RIT procedure has been changed for branch-2.2+ so we can not use the 
> patch for branch-2.1 directly.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-22149) HBOSS: A FileSystem implementation to provide HBase's required semantics

2019-04-03 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809366#comment-16809366
 ] 

Sean Mackrory edited comment on HBASE-22149 at 4/3/19 11:39 PM:


HBASE-22149-hbase.patch is my proof-of-concept ported into the HBase code base. 
I've addressed all of Vladimir's feedback so far, but not Wellington's. I did get 
the tests running, although they still require you to add an S3 URI and S3 
credentials to src/test/resources/auth-keys.xml.

I tried several candidates for mocking S3 today. adobe/S3Mock requires 
overriding the actual S3 client used by s3a, which is not a publicly exposed 
interface right now. It could be exposed. I also hit what appears to be some 
conflicting HTTP library versions. findify/s3mock requires a Scala dependency 
(which is banned - not sure if we can work around that since it's only required 
in the test scope), but more seriously it doesn't support FS-style S3 keys. It 
documents that it won't work with the local filesystem backend, but I had 
problems with the in-memory backend as well any time directories were involved. 
S3Proxy is my current favorite and I'm going to work on it some more tomorrow. 
It fails when you use headers it doesn't support, but I want to see if we can 
work around that by disabling unnecessary features in S3A or by modifying 
S3Proxy to proceed in the presence of unknown headers and just ignore them.


was (Author: mackrorysd):
HBASE-22149-hbase.patch is my proof-of-concept ported into the HBase code base. 
I've addressed all of Vladimir's feedback so far, but not Wellington's. I did get 
the tests running, although they still require you to add an S3 URI and S3 
credentials to src/test/resources/auth-keys.xml.

I tried several candidates for mocking S3 today. adobe/S3Mock requires 
overriding the actual S3 client used by s3a, which is not a publicly exposed 
interface right now. It could be exposed. I also hit what appears to be some 
conflicting HTTP library versions. findify/s3mock requires a Scala dependency 
(which is banned - not sure if we can work around that since it's only required 
in the test scope), but more seriously it doesn't support FS-style S3 keys. It 
documents that it won't work with the local filesystem backend, but I had 
problems with the in-memory backend as well any time directories were involved. 
S3Proxy is my current favorite and I'm going to work on it some more tomorrow. 
It fails when you use headers it doesn't support, but I want to see if we can 
work around that by disabling unnecessary features in S3A or by modifying 
S3Proxy to proceed in the presence of unknown headers and just ignore them.
  - Need to work around this more?

> HBOSS: A FileSystem implementation to provide HBase's required semantics
> 
>
> Key: HBASE-22149
> URL: https://issues.apache.org/jira/browse/HBASE-22149
> Project: HBase
>  Issue Type: New Feature
>  Components: Filesystem Integration
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HBASE-22149-hadoop.patch, HBASE-22149-hbase.patch
>
>
> (Have been using the name HBOSS for HBase / Object Store Semantics)
> I've had some thoughts about how to solve the problem of running HBase on 
> object stores. There has been some thought in the past about adding the 
> required semantics to S3Guard, but I have some concerns about that. First, 
> it's mixing complicated solutions to different problems (bridging the gap 
> between a flat namespace and a hierarchical namespace vs. solving 
> inconsistency). Second, it's S3-specific, whereas other object stores could 
> use virtually identical solutions. And third, we can't do things like atomic 
> renames in a true sense. There would have to be some trade-offs specific to 
> HBase's needs and it's better if we can solve that in an HBase-specific 
> module without mixing all that logic in with the rest of S3A.
> Ideas to solve this above the FileSystem layer have been proposed and 
> considered (HBASE-20431, for one), and maybe that's the right way forward 
> long-term, but it certainly seems to be a hard problem and hasn't been done 
> yet. But I don't know enough of all the internal considerations to make much 
> of a judgment on that myself.
> I propose a FileSystem implementation that wraps another FileSystem instance 
> and provides locking of FileSystem operations to ensure correct semantics. 
> Locking could quite possibly be done on the same ZooKeeper ensemble as an 
> HBase cluster already uses (I'm sure there are some performance 
> considerations here that deserve more attention). I've put together a 
> proof-of-concept on which I've tested some aspects of atomic renames and 
> atomic file c

[jira] [Commented] (HBASE-22149) HBOSS: A FileSystem implementation to provide HBase's required semantics

2019-04-03 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809366#comment-16809366
 ] 

Sean Mackrory commented on HBASE-22149:
---

HBASE-22149-hbase.patch is my proof-of-concept ported into the HBase code base. 
I've addressed all of Vladimir's feedback so far, but not Wellington's. I did get 
the tests running, although they still require you to add an S3 URI and S3 
credentials to src/test/resources/auth-keys.xml.

I tried several candidates for mocking S3 today. adobe/S3Mock requires 
overriding the actual S3 client used by s3a, which is not a publicly exposed 
interface right now. It could be exposed. I also hit what appears to be some 
conflicting HTTP library versions. findify/s3mock requires a Scala dependency 
(which is banned - not sure if we can work around that since it's only required 
in the test scope), but more seriously it doesn't support FS-style S3 keys. It 
documents that it won't work with the local filesystem backend, but I had 
problems with the in-memory backend as well any time directories were involved. 
S3Proxy is my current favorite and I'm going to work on it some more tomorrow. 
It fails when you use headers it doesn't support, but I want to see if we can 
work around that by disabling unnecessary features in S3A or by modifying 
S3Proxy to proceed in the presence of unknown headers and just ignore them.
  - Need to work around this more?
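
For context, a minimal sketch of how a test could point s3a at a locally running S3-compatible mock (S3Proxy or similar) instead of real AWS; the endpoint, bucket name, and dummy credentials are placeholders, and the mock itself is assumed to be started by the test harness:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LocalS3MockWiringSketch {
  // Editor's sketch: wire s3a to a local mock endpoint for tests.
  static FileSystem s3aAgainstLocalMock() throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.s3a.endpoint", "http://127.0.0.1:8080");     // mock endpoint
    conf.setBoolean("fs.s3a.connection.ssl.enabled", false);  // plain http to the mock
    conf.setBoolean("fs.s3a.path.style.access", true);        // no virtual-host buckets
    conf.set("fs.s3a.access.key", "accesskey");               // dummy credentials
    conf.set("fs.s3a.secret.key", "secretkey");
    return FileSystem.get(new Path("s3a://test-bucket/").toUri(), conf);
  }
}
{code}

This only covers the wiring; whether the mock tolerates every header s3a sends is exactly the open question above.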

> HBOSS: A FileSystem implementation to provide HBase's required semantics
> 
>
> Key: HBASE-22149
> URL: https://issues.apache.org/jira/browse/HBASE-22149
> Project: HBase
>  Issue Type: New Feature
>  Components: Filesystem Integration
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HBASE-22149-hadoop.patch, HBASE-22149-hbase.patch
>
>
> (Have been using the name HBOSS for HBase / Object Store Semantics)
> I've had some thoughts about how to solve the problem of running HBase on 
> object stores. There has been some thought in the past about adding the 
> required semantics to S3Guard, but I have some concerns about that. First, 
> it's mixing complicated solutions to different problems (bridging the gap 
> between a flat namespace and a hierarchical namespace vs. solving 
> inconsistency). Second, it's S3-specific, whereas other object stores could 
> use virtually identical solutions. And third, we can't do things like atomic 
> renames in a true sense. There would have to be some trade-offs specific to 
> HBase's needs and it's better if we can solve that in an HBase-specific 
> module without mixing all that logic in with the rest of S3A.
> Ideas to solve this above the FileSystem layer have been proposed and 
> considered (HBASE-20431, for one), and maybe that's the right way forward 
> long-term, but it certainly seems to be a hard problem and hasn't been done 
> yet. But I don't know enough of all the internal considerations to make much 
> of a judgment on that myself.
> I propose a FileSystem implementation that wraps another FileSystem instance 
> and provides locking of FileSystem operations to ensure correct semantics. 
> Locking could quite possibly be done on the same ZooKeeper ensemble as an 
> HBase cluster already uses (I'm sure there are some performance 
> considerations here that deserve more attention). I've put together a 
> proof-of-concept on which I've tested some aspects of atomic renames and 
> atomic file creates. Both of these tests fail reliably on a naked s3a 
> instance. I've also done a small YCSB run against a small cluster to sanity 
> check other functionality and was successful. I will post the patch, and my 
> laundry list of things that still need work. The WAL is still placed on HDFS, 
> but the HBase root directory is otherwise on S3.
> Note that my prototype is built on Hadoop's source tree right now. That's 
> purely for my convenience in putting it together quickly, as that's where I 
> mostly work. I actually think long-term, if this is accepted as a good 
> solution, it makes sense to live in HBase (or its own repository). It only 
> depends on stable, public APIs in Hadoop and is targeted entirely at HBase's 
> needs, so it should be able to iterate on the HBase community's terms alone.
> Another idea [~ste...@apache.org] proposed to me is that of an inode-based 
> FileSystem that keeps hierarchical metadata in a more appropriate store that 
> would allow the required transactions (maybe a special table in HBase could 
> provide that store itself for other tables), and stores the underlying files 
> with unique identifiers on S3. This allows renames to actually become fast 
> instead of just large atomic operations. It does however place a strong 
> dependency 

[jira] [Updated] (HBASE-22149) HBOSS: A FileSystem implementation to provide HBase's required semantics

2019-04-03 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HBASE-22149:
--
Attachment: HBASE-22149-hbase.patch

> HBOSS: A FileSystem implementation to provide HBase's required semantics
> 
>
> Key: HBASE-22149
> URL: https://issues.apache.org/jira/browse/HBASE-22149
> Project: HBase
>  Issue Type: New Feature
>  Components: Filesystem Integration
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HBASE-22149-hadoop.patch, HBASE-22149-hbase.patch
>
>
> (Have been using the name HBOSS for HBase / Object Store Semantics)
> I've had some thoughts about how to solve the problem of running HBase on 
> object stores. There has been some thought in the past about adding the 
> required semantics to S3Guard, but I have some concerns about that. First, 
> it's mixing complicated solutions to different problems (bridging the gap 
> between a flat namespace and a hierarchical namespace vs. solving 
> inconsistency). Second, it's S3-specific, whereas other object stores could 
> use virtually identical solutions. And third, we can't do things like atomic 
> renames in a true sense. There would have to be some trade-offs specific to 
> HBase's needs and it's better if we can solve that in an HBase-specific 
> module without mixing all that logic in with the rest of S3A.
> Ideas to solve this above the FileSystem layer have been proposed and 
> considered (HBASE-20431, for one), and maybe that's the right way forward 
> long-term, but it certainly seems to be a hard problem and hasn't been done 
> yet. But I don't know enough of all the internal considerations to make much 
> of a judgment on that myself.
> I propose a FileSystem implementation that wraps another FileSystem instance 
> and provides locking of FileSystem operations to ensure correct semantics. 
> Locking could quite possibly be done on the same ZooKeeper ensemble as an 
> HBase cluster already uses (I'm sure there are some performance 
> considerations here that deserve more attention). I've put together a 
> proof-of-concept on which I've tested some aspects of atomic renames and 
> atomic file creates. Both of these tests fail reliably on a naked s3a 
> instance. I've also done a small YCSB run against a small cluster to sanity 
> check other functionality and was successful. I will post the patch, and my 
> laundry list of things that still need work. The WAL is still placed on HDFS, 
> but the HBase root directory is otherwise on S3.
> Note that my prototype is built on Hadoop's source tree right now. That's 
> purely for my convenience in putting it together quickly, as that's where I 
> mostly work. I actually think long-term, if this is accepted as a good 
> solution, it makes sense to live in HBase (or its own repository). It only 
> depends on stable, public APIs in Hadoop and is targeted entirely at HBase's 
> needs, so it should be able to iterate on the HBase community's terms alone.
> Another idea [~ste...@apache.org] proposed to me is that of an inode-based 
> FileSystem that keeps hierarchical metadata in a more appropriate store that 
> would allow the required transactions (maybe a special table in HBase could 
> provide that store itself for other tables), and stores the underlying files 
> with unique identifiers on S3. This allows renames to actually become fast 
> instead of just large atomic operations. It does however place a strong 
> dependency on the metadata store. I have not explored this idea much. My 
> current proof-of-concept has been pleasantly simple, so I think it's the 
> right solution unless it proves unable to provide the required performance 
> characteristics.
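
As a rough illustration of the wrapping idea above (an editor's sketch, not the attached patch), a FilterFileSystem subclass could guard mutating calls with a ZooKeeper lock via Curator; the class name, lock-path layout, and single-path locking are assumptions:

{code:java}
import java.io.IOException;

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FilterFileSystem;
import org.apache.hadoop.fs.Path;

public class LockingFileSystemSketch extends FilterFileSystem {
  private final CuratorFramework curator;

  public LockingFileSystemSketch(FileSystem wrapped, CuratorFramework curator) {
    super(wrapped);
    this.curator = curator;
  }

  @Override
  public boolean rename(Path src, Path dst) throws IOException {
    // One lock per source path keeps the sketch small; a real implementation
    // must lock the enclosing subtree and order locks for both src and dst.
    InterProcessMutex lock =
        new InterProcessMutex(curator, "/hboss/locks" + src.toUri().getPath());
    try {
      lock.acquire();
      try {
        return super.rename(src, dst);
      } finally {
        lock.release();
      }
    } catch (IOException e) {
      throw e;
    } catch (Exception e) {
      throw new IOException("ZooKeeper lock failure for " + src, e);
    }
  }
}
{code}

Locking subtrees rather than single paths is where most of the complexity, and the performance questions raised above, would live.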



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HBASE-20911) correct Swtich/case indentation in formatter template for eclipse

2019-04-03 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-20911.

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.2.1
   1.2.12
   1.5.1
   2.3.0
   1.3.4
   1.4.10
   3.0.0

> correct Swtich/case indentation in formatter template for eclipse
> -
>
> Key: HBASE-20911
> URL: https://issues.apache.org/jira/browse/HBASE-20911
> Project: HBase
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 3.0.0, 1.4.10, 1.3.4, 2.3.0, 1.5.1, 1.2.12, 2.2.1
>
> Attachments: HBASE-20911.patch, HBASE-20911_v1.patch
>
>
> Making it consistent with our checkstyle requirements.
> {code}
> (eclipse formatter <setting .../> entries for switch/case indentation; the XML
> tags were stripped when this message was archived)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22155) Move 2.2.0 on to hbase-thirdparty-2.2.0

2019-04-03 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-22155:
--
Attachment: HBASE-22155-branch-2.2-001.patch

> Move 2.2.0 on to hbase-thirdparty-2.2.0
> ---
>
> Key: HBASE-22155
> URL: https://issues.apache.org/jira/browse/HBASE-22155
> Project: HBase
>  Issue Type: Sub-task
>  Components: thirdparty
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-22155-branch-2.2-001.patch, 
> HBASE-22155-branch-2.2-001.patch, HBASE-22155-branch-2.2-001.patch, 
> HBASE-22155.branch-2.2.001.patch
>
>
> hbase-thirdparty-2.2.0 was just released. The 2.2.0 RM ([~zghaobac]) gave his 
> blessing in the parent issue that 2.2.0 should use thirdparty 2.2.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22155) Move 2.2.0 on to hbase-thirdparty-2.2.0

2019-04-03 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809364#comment-16809364
 ] 

stack commented on HBASE-22155:
---

Tests timing out on linux... Can see tests here 
https://builds.apache.org/job/PreCommit-HBASE-Build/16632/testReport/  Let me 
see if I can repro on a linux machine.

> Move 2.2.0 on to hbase-thirdparty-2.2.0
> ---
>
> Key: HBASE-22155
> URL: https://issues.apache.org/jira/browse/HBASE-22155
> Project: HBase
>  Issue Type: Sub-task
>  Components: thirdparty
>Reporter: stack
>Assignee: stack
>Priority: Major
> Fix For: 3.0.0, 2.2.0
>
> Attachments: HBASE-22155-branch-2.2-001.patch, 
> HBASE-22155-branch-2.2-001.patch, HBASE-22155-branch-2.2-001.patch, 
> HBASE-22155.branch-2.2.001.patch
>
>
> hbase-thirdparty-2.2.0 was just released. The 2.2.0 RM ([~zghaobac]) gave his 
> blessing in the parent issue that 2.2.0 should use thirdparty 2.2.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22114) Port HBASE-15560 (TinyLFU-based BlockCache) to branch-1

2019-04-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809363#comment-16809363
 ] 

Hadoop QA commented on HBASE-22114:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 20m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:red}-1{color} | {color:red} hbaseanti {color} | {color:red}  0m  
0s{color} | {color:red} The patch appears use Hadoop classification instead of 
HBase. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
51s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
45s{color} | {color:green} branch-1 passed with JDK v1.8.0_202 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
39s{color} | {color:green} branch-1 passed with JDK v1.7.0_211 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  7m 
22s{color} | {color:green} branch-1 passed {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue}  3m 
16s{color} | {color:blue} branch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. {color} 
|
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
37s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
44s{color} | {color:green} branch-1 passed with JDK v1.8.0_202 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
34s{color} | {color:green} branch-1 passed with JDK v1.7.0_211 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed with JDK v1.8.0_202 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed with JDK v1.7.0_211 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
 9s{color} | {color:green} The patch passed checkstyle in hbase-resource-bundle 
{color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} The patch passed checkstyle in hbase-common {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
27s{color} | {color:green} hbase-server: The patch generated 0 new + 84 
unchanged - 12 fixed = 84 total (was 96) {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} The patch passed checkstyle in hbase-it {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  5m 
12s{color} | {color:green} root: The patch generated 0 new + 93 unchanged - 12 
fixed = 93 total (was 105) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
14s{color} | {color:red} hbase-tinylfu-blockcache: The patch generated 1 new + 
0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} xml {color} | {color:red}  0m  1s{color} | 
{color:red} The patch has 5 ill-formed XML file(s). {color} |
| {color:blue}0{color

[jira] [Commented] (HBASE-20912) Add import order config in dev support for eclipse

2019-04-03 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809362#comment-16809362
 ] 

Andrew Purtell commented on HBASE-20912:


[~an...@apache.org] any idea how to incorporate this into the formatter 
(dev-support/hbase_eclipse_formatter.xml)?

> Add import order config in dev support for eclipse
> --
>
> Key: HBASE-20912
> URL: https://issues.apache.org/jira/browse/HBASE-20912
> Project: HBase
>  Issue Type: Bug
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: eclipse.importorder
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22144) MultiRowRangeFilter does not work with reversed scans

2019-04-03 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-22144:
---
Attachment: HBASE-22144.002.patch

> MultiRowRangeFilter does not work with reversed scans
> -
>
> Key: HBASE-22144
> URL: https://issues.apache.org/jira/browse/HBASE-22144
> Project: HBase
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Attachments: HBASE-22144.001.patch, HBASE-22144.002.patch
>
>
> It appears that MultiRowRangeFilter was never written to function with 
> reverse scans. There is too much logic that operates with the assumption that 
> we are always moving "forward" through increasing ranges. It needs to be 
> rewritten to "traverse" forward or backward, depending on the context of the 
> scan being used.
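
A minimal sketch of the client-side usage the fix has to support (editor's illustration; the row keys are arbitrary):

{code:java}
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.MultiRowRangeFilter;
import org.apache.hadoop.hbase.filter.MultiRowRangeFilter.RowRange;
import org.apache.hadoop.hbase.util.Bytes;

public class ReversedMultiRangeScanSketch {
  public static Scan newReversedScan() throws IOException {
    List<RowRange> ranges = Arrays.asList(
        new RowRange(Bytes.toBytes("a"), true, Bytes.toBytes("c"), false),
        new RowRange(Bytes.toBytes("m"), true, Bytes.toBytes("p"), false));
    Scan scan = new Scan();
    scan.setReversed(true);   // results should come back from "p" down toward "a"
    scan.setFilter(new MultiRowRangeFilter(ranges));
    return scan;
  }
}
{code}

With setReversed(true) the filter has to start from its highest range and walk downward, which is the traversal logic described above.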



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22144) MultiRowRangeFilter does not work with reversed scans

2019-04-03 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809334#comment-16809334
 ] 

Josh Elser commented on HBASE-22144:


hbase-rest was using one of the methods I had removed (because I thought no one 
was using it). v2 fixes that.

> MultiRowRangeFilter does not work with reversed scans
> -
>
> Key: HBASE-22144
> URL: https://issues.apache.org/jira/browse/HBASE-22144
> Project: HBase
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Attachments: HBASE-22144.001.patch, HBASE-22144.002.patch
>
>
> It appears that MultiRowRangeFilter was never written to function with 
> reverse scans. There is too much logic that operates with the assumption that 
> we are always moving "forward" through increasing ranges. It needs to be 
> rewritten to "traverse" forward or backward, depending on the context of the 
> scan being used.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-21048) Get LogLevel is not working from console in secure environment

2019-04-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809323#comment-16809323
 ] 

Hadoop QA commented on HBASE-21048:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
48s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
42s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} hbase-http generated 0 new + 17 unchanged - 2 fixed 
= 17 total (was 19) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} hbase-http: The patch generated 1 new + 4 unchanged - 
4 fixed = 5 total (was 8) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 3s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
7m 54s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
15s{color} | {color:green} hbase-http in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 8s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 21s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:b002b0b |
| JIRA Issue | HBASE-21048 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12954927/HBASE-21048.master.004.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  shadedjars  
hadoopcheck  xml  compile  findbugs  hbaseanti  checkstyle  |
| uname | Linux 36c4150e99d1 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / a3110afcda |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.11 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16636/artifact/patchprocess/diff-checkstyle-hbase-http.txt
 |
|  Test Re

[jira] [Commented] (HBASE-22086) space quota issue: deleting snapshot doesn't update the usage of table

2019-04-03 Thread Sakthi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809322#comment-16809322
 ] 

Sakthi commented on HBASE-22086:


For sure. That was going to be my next approach, given how many deletes we would 
keep issuing when there are not many changes (snapshot removals). I'll modify it 
accordingly to do selective deletes. Thanks for your quick review [~elserj].

> space quota issue: deleting snapshot doesn't update the usage of table
> --
>
> Key: HBASE-22086
> URL: https://issues.apache.org/jira/browse/HBASE-22086
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajeet Rai
>Assignee: Sakthi
>Priority: Minor
> Attachments: hbase-22086.master.001.patch
>
>
> space quota issue: deleting snapshot doesn't update the usage of table
> Steps: 1:
> set_quota TYPE => SPACE, TABLE => 'bugatti', LIMIT => '7M', POLICY => 
> NO_WRITES_COMPACTIONS
> 2: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 3: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 4: snapshot 'bugatti','bugatti_snapshot'
> 5: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 6: major_compact 'bugatti'
> 7: delete_snapshot 'bugatti_snapshot'
> now check the usage and observe that it is not getting updated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-22086) space quota issue: deleting snapshot doesn't update the usage of table

2019-04-03 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reassigned HBASE-22086:
--

Assignee: Sakthi  (was: Josh Elser)

> space quota issue: deleting snapshot doesn't update the usage of table
> --
>
> Key: HBASE-22086
> URL: https://issues.apache.org/jira/browse/HBASE-22086
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajeet Rai
>Assignee: Sakthi
>Priority: Minor
> Attachments: hbase-22086.master.001.patch
>
>
> space quota issue: deleting snapshot doesn't update the usage of table
> Steps: 1:
> set_quota TYPE => SPACE, TABLE => 'bugatti', LIMIT => '7M', POLICY => 
> NO_WRITES_COMPACTIONS
> 2: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 3: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 4: snapshot 'bugatti','bugatti_snapshot'
> 5: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 6: major_compact 'bugatti'
> 7: delete_snapshot 'bugatti_snapshot'
> now check the usage and observe that it is not getting updated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22086) space quota issue: deleting snapshot doesn't update the usage of table

2019-04-03 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-22086:
---
Status: Patch Available  (was: In Progress)

> space quota issue: deleting snapshot doesn't update the usage of table
> --
>
> Key: HBASE-22086
> URL: https://issues.apache.org/jira/browse/HBASE-22086
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajeet Rai
>Assignee: Josh Elser
>Priority: Minor
> Attachments: hbase-22086.master.001.patch
>
>
> space quota issue: deleting snapshot doesn't update the usage of table
> Steps: 1:
> set_quota TYPE => SPACE, TABLE => 'bugatti', LIMIT => '7M', POLICY => 
> NO_WRITES_COMPACTIONS
> 2: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 3: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 4: snapshot 'bugatti','bugatti_snapshot'
> 5: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 6: major_compact 'bugatti'
> 7: delete_snapshot 'bugatti_snapshot'
> now check the usage and observe that it is not getting updated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22086) space quota issue: deleting snapshot doesn't update the usage of table

2019-04-03 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809317#comment-16809317
 ] 

Josh Elser commented on HBASE-22086:


{code:java}
+// Remove old table snapshots data
+removeExistingTableSnapshotSizes();
+
 // For each table, compute the size of each snapshot
 Map namespaceSnapshotSizes = 
computeSnapshotSizes(snapshotsToComputeSize);
 
+// Remove old namespace snapshots data
+removeExistingNamespaceSnapshotSizes();
+
 // Write the size data by namespaces to the quota table.
 // We need to do this "globally" since each FileArchiverNotifier is 
limited to its own Table.
 persistSnapshotSizesForNamespaces(namespaceSnapshotSizes);
{code}
My only concern here is that, with an otherwise stable system, we'll be 
creating load on the system, deleting and then re-writing the same 
SpaceQuotaSnapshot.

Instead of deleting all SpaceQuotaSnapshots in the quota table, could you 
submit deletions only for the HBase snapshots which we didn't compute a new 
SpaceQuotaSnapshot for?
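
A sketch of that selective-delete idea (editor's illustration; the method and the rowKeyForSnapshotSize helper are hypothetical names, not the patch's API):

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class SelectiveSnapshotSizeCleanupSketch {
  // Only delete quota-table rows for HBase snapshots that were not recomputed
  // in this chore run, instead of wiping and re-writing everything.
  static void removeStaleSnapshotSizes(Table quotaTable, Set<String> previouslyPersisted,
      Map<String, Long> newlyComputed) throws IOException {
    Set<String> stale = new HashSet<>(previouslyPersisted);
    stale.removeAll(newlyComputed.keySet());      // keep entries we just recomputed
    List<Delete> deletes = new ArrayList<>();
    for (String snapshot : stale) {
      deletes.add(new Delete(rowKeyForSnapshotSize(snapshot)));
    }
    if (!deletes.isEmpty()) {
      quotaTable.delete(deletes);                 // one batched delete, not a full wipe
    }
  }

  private static byte[] rowKeyForSnapshotSize(String snapshotName) {
    return Bytes.toBytes("snapshot:" + snapshotName);  // hypothetical key layout
  }
}
{code}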

Sorry I missed your message last week on the approach. I think your 
explanations make sense. Looking back at the code makes me agree with you – I 
can't come up with something that wouldn't work.

> space quota issue: deleting snapshot doesn't update the usage of table
> --
>
> Key: HBASE-22086
> URL: https://issues.apache.org/jira/browse/HBASE-22086
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajeet Rai
>Assignee: Sakthi
>Priority: Minor
> Attachments: hbase-22086.master.001.patch
>
>
> space quota issue: deleting snapshot doesn't update the usage of table
> Steps: 1:
> set_quota TYPE => SPACE, TABLE => 'bugatti', LIMIT => '7M', POLICY => 
> NO_WRITES_COMPACTIONS
> 2: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 3: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 4: snapshot 'bugatti','bugatti_snapshot'
> 5: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 6: major_compact 'bugatti'
> 7: delete_snapshot 'bugatti_snapshot'
> now check the usage and observe that it is not getting updated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-22086) space quota issue: deleting snapshot doesn't update the usage of table

2019-04-03 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reassigned HBASE-22086:
--

Assignee: Josh Elser  (was: Sakthi)

> space quota issue: deleting snapshot doesn't update the usage of table
> --
>
> Key: HBASE-22086
> URL: https://issues.apache.org/jira/browse/HBASE-22086
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajeet Rai
>Assignee: Josh Elser
>Priority: Minor
> Attachments: hbase-22086.master.001.patch
>
>
> space quota issue: deleting snapshot doesn't update the usage of table
> Steps: 1:
> set_quota TYPE => SPACE, TABLE => 'bugatti', LIMIT => '7M', POLICY => 
> NO_WRITES_COMPACTIONS
> 2: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 3: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 4: snapshot 'bugatti','bugatti_snapshot'
> 5: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 6: major_compact 'bugatti'
> 7: delete_snapshot 'bugatti_snapshot'
> now check the usage and observe that it is not getting updated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22155) Move 2.2.0 on to hbase-thirdparty-2.2.0

2019-04-03 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809305#comment-16809305
 ] 

Hadoop QA commented on HBASE-22155:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:orange}-0{color} | {color:orange} test4tests {color} | {color:orange}  
0m  0s{color} | {color:orange} The patch doesn't appear to include any new or 
modified tests. Please justify why no new tests are needed for this patch. Also 
please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2.2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
59s{color} | {color:green} branch-2.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
27s{color} | {color:green} branch-2.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
46s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
35s{color} | {color:green} branch-2.2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
59s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 34s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}296m 20s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}348m 16s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestFromClientSide3 |
|   | hadoop.hbase.client.TestSnapshotTemporaryDirectoryWithRegionReplicas |
|   | hadoop.hbase.client.TestSnapshotDFSTemporaryDirectory |
|   | hadoop.hbase.master.procedure.TestSCPWithReplicas |
|   | hadoop.hbase.client.TestAdmin1 |
|   | hadoop.hbase.client.TestFromClientSide |
|   | hadoop.hbase.client.TestFromClientSideWithCoprocessor |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:42ca976 |
| JIRA Issue | HBASE-22155 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12964726/HBASE-22155-branch-2.2-001.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  shadedjars  
hadoopcheck  xml  compile  |
| uname | Linux 1eafbe1347fa 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-2.2 / d575982b42 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16632/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16632/testReport/ |
| Max. process+thread count | 5262 (vs. ulimit of 1) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/16632/console |
| Powered by | Apache Yetus 0.8.0

[jira] [Updated] (HBASE-21048) Get LogLevel is not working from console in secure environment

2019-04-03 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-21048:

Status: Open  (was: Patch Available)

> Get LogLevel is not working from console in secure environment
> --
>
> Key: HBASE-21048
> URL: https://issues.apache.org/jira/browse/HBASE-21048
> Project: HBase
>  Issue Type: Bug
>Reporter: Chandra Sekhar
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HBASE-21048.001.patch, HBASE-21048.master.001.patch, 
> HBASE-21048.master.002.patch, HBASE-21048.master.003.patch, 
> HBASE-21048.master.004.patch
>
>
> When we try to get the log level of a specific package in a secure environment, 
> we get a SocketException.
> {code:java}
> hbase/master/bin# ./hbase org.apache.hadoop.hbase.http.log.LogLevel -getlevel 
> host-:16010 org.apache.hadoop.hbase
> Connecting to http://host-:16010/logLevel?log=org.apache.hadoop.hbase
> java.net.SocketException: Unexpected end of file from server
> {code}
> It is trying to connect over http instead of https.
> Code snippet that handles only http in *LogLevel.java*:
> {code:java}
>  public static void main(String[] args) {
> if (args.length == 3 && "-getlevel".equals(args[0])) {
>   process("http://"; + args[1] + "/logLevel?log=" + args[2]);
>   return;
> }
> else if (args.length == 4 && "-setlevel".equals(args[0])) {
>   process("http://"; + args[1] + "/logLevel?log=" + args[2]
>   + "&level=" + args[3]);
>   return;
> }
> System.err.println(USAGES);
> System.exit(-1);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21048) Get LogLevel is not working from console in secure environment

2019-04-03 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-21048:

Status: Patch Available  (was: Open)

> Get LogLevel is not working from console in secure environment
> --
>
> Key: HBASE-21048
> URL: https://issues.apache.org/jira/browse/HBASE-21048
> Project: HBase
>  Issue Type: Bug
>Reporter: Chandra Sekhar
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HBASE-21048.001.patch, HBASE-21048.master.001.patch, 
> HBASE-21048.master.002.patch, HBASE-21048.master.003.patch, 
> HBASE-21048.master.004.patch
>
>
> When we try to get the log level of a specific package in a secure environment, 
> we get a SocketException.
> {code:java}
> hbase/master/bin# ./hbase org.apache.hadoop.hbase.http.log.LogLevel -getlevel 
> host-:16010 org.apache.hadoop.hbase
> Connecting to http://host-:16010/logLevel?log=org.apache.hadoop.hbase
> java.net.SocketException: Unexpected end of file from server
> {code}
> It is trying to connect over http instead of https.
> Code snippet that handles only http in *LogLevel.java*:
> {code:java}
>  public static void main(String[] args) {
> if (args.length == 3 && "-getlevel".equals(args[0])) {
>   process("http://"; + args[1] + "/logLevel?log=" + args[2]);
>   return;
> }
> else if (args.length == 4 && "-setlevel".equals(args[0])) {
>   process("http://"; + args[1] + "/logLevel?log=" + args[2]
>   + "&level=" + args[3]);
>   return;
> }
> System.err.println(USAGES);
> System.exit(-1);
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22086) space quota issue: deleting snapshot doesn't update the usage of table

2019-04-03 Thread Sakthi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809282#comment-16809282
 ] 

Sakthi commented on HBASE-22086:


I have put up my initial patch here [~elserj]. Do you mind taking a look, 
please? 

> space quota issue: deleting snapshot doesn't update the usage of table
> --
>
> Key: HBASE-22086
> URL: https://issues.apache.org/jira/browse/HBASE-22086
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajeet Rai
>Assignee: Sakthi
>Priority: Minor
> Attachments: hbase-22086.master.001.patch
>
>
> space quota issue: deleting snapshot doesn't update the usage of table
> Steps: 1:
> set_quota TYPE => SPACE, TABLE => 'bugatti', LIMIT => '7M', POLICY => 
> NO_WRITES_COMPACTIONS
> 2: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 3: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 4: snapshot 'bugatti','bugatti_snapshot'
> 5: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 6: major_compact 'bugatti'
> 7: delete_snapshot 'bugatti_snapshot'
> now check the usage and observe that it is not getting updated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22086) space quota issue: deleting snapshot doesn't update the usage of table

2019-04-03 Thread Sakthi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sakthi updated HBASE-22086:
---
Attachment: hbase-22086.master.001.patch

> space quota issue: deleting snapshot doesn't update the usage of table
> --
>
> Key: HBASE-22086
> URL: https://issues.apache.org/jira/browse/HBASE-22086
> Project: HBase
>  Issue Type: Bug
>Reporter: Ajeet Rai
>Assignee: Sakthi
>Priority: Minor
> Attachments: hbase-22086.master.001.patch
>
>
> space quota issue: deleting snapshot doesn't update the usage of table
> Steps: 1:
> set_quota TYPE => SPACE, TABLE => 'bugatti', LIMIT => '7M', POLICY => 
> NO_WRITES_COMPACTIONS
> 2: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 3: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 4: snapshot 'bugatti','bugatti_snapshot'
> 5: ./hbase pe --table="bugatti" --nomapred --rows=200 sequentialWrite 10
> 6: major_compact 'bugatti'
> 7: delete_snapshot 'bugatti_snapshot'
> now check the usage and observe that it is not getting updated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21688) Address WAL filesystem issues

2019-04-03 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-21688:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

I've gone ahead and committed the addendum to master, as well as the 2.x 
backports.

I took the liberty of fixing the checkstyle issues that were called out in the 
QA report. Please look out for those next time, Vlad. Thanks for these 
patches.

> Address WAL filesystem issues
> -
>
> Key: HBASE-21688
> URL: https://issues.apache.org/jira/browse/HBASE-21688
> Project: HBase
>  Issue Type: Bug
>  Components: Filesystem Integration, wal
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
>  Labels: s3
> Fix For: 3.0.0, 2.2.0, 2.0.6, 2.1.5
>
> Attachments: HBASE-21688-branch-2.0-v1.patch, 
> HBASE-21688-branch-2.1-v2.patch, HBASE-21688-branch-2.2-v1.patch, 
> HBASE-21688-master-addendum.patch, HBASE-21688-v1.patch
>
>
> Scan and fix code base to use new way of instantiating WAL File System. 
> https://issues.apache.org/jira/browse/HBASE-21457?focusedCommentId=16734688&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16734688



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21688) Address WAL filesystem issues

2019-04-03 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-21688:
---
Fix Version/s: 2.1.5
   2.0.6
   2.2.0

> Address WAL filesystem issues
> -
>
> Key: HBASE-21688
> URL: https://issues.apache.org/jira/browse/HBASE-21688
> Project: HBase
>  Issue Type: Bug
>  Components: Filesystem Integration, wal
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
>  Labels: s3
> Fix For: 3.0.0, 2.2.0, 2.0.6, 2.1.5
>
> Attachments: HBASE-21688-branch-2.0-v1.patch, 
> HBASE-21688-branch-2.1-v2.patch, HBASE-21688-branch-2.2-v1.patch, 
> HBASE-21688-master-addendum.patch, HBASE-21688-v1.patch
>
>
> Scan and fix code base to use new way of instantiating WAL File System. 
> https://issues.apache.org/jira/browse/HBASE-21457?focusedCommentId=16734688&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16734688



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-21456) Make WALFactory only used for creating WALProviders

2019-04-03 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-21456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-21456:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Looks good to me, Ankit. I've pushed this to the feature branch.

> Make WALFactory only used for creating WALProviders
> ---
>
> Key: HBASE-21456
> URL: https://issues.apache.org/jira/browse/HBASE-21456
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: HBASE-20952
>Reporter: Josh Elser
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: HBASE-20952
>
> Attachments: HBASE-21456.HBASE-20952.001.patch, 
> HBASE-21456.HBASE-20952.002.patch, HBASE-21456.HBASE-20952.003.patch, 
> HBASE-21456.HBASE-20952.wip.patch
>
>
> As a Factory, WALFactory should only have one job: creating instances of 
> WALProvider.
> However, over the years, it has been a landing place for lots of wal-related 
> methods (e.g. constructing readers, WALEntryStream, and more). We want all of 
> this to live in the WALProvider.
> We can do this in two steps: have the WALFactory methods invoke the method on 
> the WALProvider (handled elsewhere), then here we can replace usage of the 
> WALFactory "wrapper methods" with the WALProvider itself.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22156) Apply for translation of the Chinese version, I hope to get authorization!

2019-04-03 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809247#comment-16809247
 ] 

Josh Elser commented on HBASE-22156:


Hi Yifan,

Thanks for taking up this work. I am supportive of you offering the resources 
to help with Chinese localization of the project's documentation.

I am worried about doing this the correct way. It is important that we ensure 
there is no confusion with respect to branding – Apache, HBase, and Apache 
HBase are trademarks of the Apache Software Foundation, available free of 
charge to all individuals. I notice in your preview that you have added an 
advertisement, which you are presumably monetizing.

If you intend to go forward with this localization, I'd like to make sure we 
are doing it on hbase.apache.org.

> Apply for translation of the Chinese version, I hope to get authorization! 
> ---
>
> Key: HBASE-22156
> URL: https://issues.apache.org/jira/browse/HBASE-22156
> Project: HBase
>  Issue Type: Wish
>Reporter: Yuan Yifan
>Priority: Minor
>
> Hello everyone, we are [ApacheCN|https://www.apachecn.org/], an open-source 
> community in China, focusing on Big Data and AI.
> Recently, we have been making progress on translating HBase documents.
>  - [Source Of Document|https://github.com/apachecn/hbase-doc-zh]
>  - [Document Preview|http://hbase.apachecn.org/]
> There are several reasons:
>  *1. The English level of many Chinese users is not very good.*
>  *2. Network problems, you know (China's magic network)!*
>  *3. Online blogs are very messy.*
> We are very willing to do some Chinese localization for your project. If 
> possible, please give us some authorization.
> Yifan Yuan from Apache CN
> You may contact me by mail [tsingjyuj...@163.com|mailto:tsingjyuj...@163.com] 
> for more details



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22143) HBCK2 setRegionState command

2019-04-03 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809238#comment-16809238
 ] 

Josh Elser commented on HBASE-22143:


Why the choice to have the user pass in a region path in the filesystem instead 
of just the encoded name? Seems like you might have started out accepting an 
encoded region name given the comment:
{code:java}
+writer.println("   An example setting region 
'de00010733901a05f5a2a3a382e27dd4' to CLOSING:");
+writer.println(" $ HBCK2 setRegionState 
de00010733901a05f5a2a3a382e27dd4 CLOSING");
{code}
How does this fail if I give you a region name that is bogus? Could you add a 
test for that?

Finally, can you think of a situation where we'd want to ever move a region to 
a state that isn't "CLOSED"? Do we want to give the operator the ability to 
push to any state? I would be worried that any of the "in-transit" states (e.g. 
opening, closing) are just ways folks can shoot themselves in the foot.

Thanks!
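
For illustration only (not the attached patch), a sketch of the kind of guard being asked about: reject unknown state names and warn on transient ones. The enum here is a stand-in, not HBase's RegionState.State, and the warning policy is purely illustrative.

{code:java}
import java.util.EnumSet;
import java.util.Locale;

public class SetRegionStateGuardSketch {
  enum State { OFFLINE, OPENING, OPEN, CLOSING, CLOSED, SPLITTING, SPLIT, MERGING }

  static State parseTargetState(String value) {
    final State state;
    try {
      state = State.valueOf(value.toUpperCase(Locale.ROOT));
    } catch (IllegalArgumentException e) {
      // Bogus input should fail fast and loudly.
      throw new IllegalArgumentException("Unrecognized region state: " + value, e);
    }
    if (EnumSet.of(State.OPENING, State.CLOSING).contains(state)) {
      System.err.println("WARNING: " + state
          + " is an in-transit state; setting it manually can leave the region stuck.");
    }
    return state;
  }
}
{code}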

> HBCK2 setRegionState command
> 
>
> Key: HBASE-22143
> URL: https://issues.apache.org/jira/browse/HBASE-22143
> Project: HBase
>  Issue Type: New Feature
>  Components: hbase-operator-tools, hbck2
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Minor
> Attachments: HBASE-22143.master.0001.patch
>
>
> Among some of the current AMv2 issues, we faced a situation where some regions 
> had state OPENING in meta, with an RS startcode that was no longer valid. There 
> was no AP running and the region was permanently logged as in-transition in the 
> master logs, yet no procedure was actually trying to bring it online. The 
> current hbck2 unassigns/assigns commands didn't work either; as the exception 
> shows, they expect regions to be in state SPLITTING, SPLIT, 
> MERGING, OPEN, or CLOSING:
> {noformat}
> WARN org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure: 
> Failed transition, suspend 1secs pid=7093, 
> state=RUNNABLE:REGION_TRANSITION_DISPATCH, locked=true; UnassignProcedure 
> table=rc_accounts, region=db85127b77fa56f7ad44e2c988e53925, 
> server=server1.example.com,16020,1552682193324; rit=OPENING, 
> location=server1.example.com,16020,1552682193324; waiting on rectified 
> condition fixed by other Procedure or operator intervention
> org.apache.hadoop.hbase.exceptions.UnexpectedStateException: Expected 
> [SPLITTING, SPLIT, MERGING, OPEN, CLOSING] so could move to CLOSING but 
> current state=OPENING
> at 
> org.apache.hadoop.hbase.master.assignment.RegionStates$RegionStateNode.transitionState(RegionStates.java:166)
> at 
> org.apache.hadoop.hbase.master.assignment.AssignmentManager.markRegionAsClosing(AssignmentManager.java:1479)
> at 
> org.apache.hadoop.hbase.master.assignment.UnassignProcedure.updateTransition(UnassignProcedure.java:212)
> at 
> org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:369)
> at 
> org.apache.hadoop.hbase.master.assignment.RegionTransitionProcedure.execute(RegionTransitionProcedure.java:97)
> at org.apache.hadoop.hbase.procedure2.Procedure.doExecute(Procedure.java:957)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.execProcedure(ProcedureExecutor.java:1835)
> at 
> org.apache.hadoop.hbase.procedure2.ProcedureExecutor.executeProcedure(ProcedureExecutor.java:1595){noformat}
> In this specific case, since we know the region is not actually being 
> operated by any proc and is not really open anywhere, it's ok to manually set 
> its state to one of the states assigns/unassigns can operate on, so this jira 
> proposes a new hbck2 command that allows arbitrarily setting a region to a 
> given state.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-15560) TinyLFU-based BlockCache

2019-04-03 Thread Ben Manes (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809144#comment-16809144
 ] 

Ben Manes commented on HBASE-15560:
---

That's wonderful, thank you.

> TinyLFU-based BlockCache
> 
>
> Key: HBASE-15560
> URL: https://issues.apache.org/jira/browse/HBASE-15560
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache
>Affects Versions: 2.0.0
>Reporter: Ben Manes
>Assignee: Ben Manes
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, bc.hit.count, bc.miss.count, branch-1.tinylfu.txt, gets, 
> run_ycsb_c.sh, run_ycsb_loading.sh, tinylfu.patch
>
>
> LruBlockCache uses the Segmented LRU (SLRU) policy to capture frequency and 
> recency of the working set. It achieves concurrency by using an O( n ) 
> background thread to prioritize the entries and evict. Accessing an entry is 
> O(1) by a hash table lookup, recording its logical access time, and setting a 
> frequency flag. A write is performed in O(1) time by updating the hash table 
> and triggering an async eviction thread. This provides ideal concurrency and 
> minimizes the latencies by penalizing the thread instead of the caller. 
> However the policy does not age the frequencies and may not be resilient to 
> various workload patterns.
> W-TinyLFU ([research paper|http://arxiv.org/pdf/1512.00727.pdf]) records the 
> frequency in a counting sketch, ages periodically by halving the counters, 
> and orders entries by SLRU. An entry is discarded by comparing the frequency 
> of the new arrival (candidate) to the SLRU's victim, and keeping the one with 
> the highest frequency. This allows the operations to be performed in O(1) 
> time and, through the use of a compact sketch, a much larger history is 
> retained beyond the current working set. In a variety of real world traces 
> the policy had [near optimal hit 
> rates|https://github.com/ben-manes/caffeine/wiki/Efficiency].
> Concurrency is achieved by buffering and replaying the operations, similar to 
> a write-ahead log. A read is recorded into a striped ring buffer and writes 
> to a queue. The operations are applied in batches under a try-lock by an 
> asynchronous thread, thereby tracking the usage pattern without incurring high 
> latencies 
> ([benchmarks|https://github.com/ben-manes/caffeine/wiki/Benchmarks#server-class]).
> In YCSB benchmarks the results were inconclusive. For a large cache (99% hit 
> rates) the two caches have near identical throughput and latencies with 
> LruBlockCache narrowly winning. At medium and small caches, TinyLFU had a 
> 1-4% hit rate improvement and therefore lower latencies. The lackluster 
> result is because a synthetic Zipfian distribution is used, which SLRU 
> performs optimally. In a more varied, real-world workload we'd expect to see 
> improvements by being able to make smarter predictions.
> The provided patch implements BlockCache using the 
> [Caffeine|https://github.com/ben-manes/caffeine] caching library (see 
> HighScalability 
> [article|http://highscalability.com/blog/2016/1/25/design-of-a-modern-cache.html]).
> Edward Bortnikov and Eshcar Hillel have graciously provided guidance for 
> evaluating this patch ([github 
> branch|https://github.com/ben-manes/hbase/tree/tinylfu]).
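
For readers new to the library, a minimal, self-contained sketch of Caffeine usage follows; the key/value types, weigher and sizes are placeholders for illustration only, not the patch's actual BlockCache wiring:
{code:java}
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

public class CaffeineSketch {
  public static void main(String[] args) {
    // Bounded by total weight; entries are weighed by their size in bytes.
    Cache<String, byte[]> cache = Caffeine.newBuilder()
        .maximumWeight(1L << 30)                              // cap at ~1 GB of cached blocks
        .weigher((String key, byte[] block) -> block.length)  // weight = block size
        .recordStats()                                        // expose hit/miss counters
        .build();

    cache.put("block-1", new byte[64 * 1024]);
    byte[] block = cache.getIfPresent("block-1");             // O(1) lookup, access recorded for W-TinyLFU
    System.out.println(block.length + " bytes, hit rate " + cache.stats().hitRate());
  }
}
{code}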



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-15560) TinyLFU-based BlockCache

2019-04-03 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809136#comment-16809136
 ] 

Andrew Purtell commented on HBASE-15560:


Good news is I've been testing TinyLFU under load in a small cluster, with and 
without chaos, and it's stable.

> TinyLFU-based BlockCache
> 
>
> Key: HBASE-15560
> URL: https://issues.apache.org/jira/browse/HBASE-15560
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache
>Affects Versions: 2.0.0
>Reporter: Ben Manes
>Assignee: Ben Manes
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, bc.hit.count, bc.miss.count, branch-1.tinylfu.txt, gets, 
> run_ycsb_c.sh, run_ycsb_loading.sh, tinylfu.patch
>
>
> LruBlockCache uses the Segmented LRU (SLRU) policy to capture frequency and 
> recency of the working set. It achieves concurrency by using an O( n ) 
> background thread to prioritize the entries and evict. Accessing an entry is 
> O(1) by a hash table lookup, recording its logical access time, and setting a 
> frequency flag. A write is performed in O(1) time by updating the hash table 
> and triggering an async eviction thread. This provides ideal concurrency and 
> minimizes the latencies by penalizing the thread instead of the caller. 
> However the policy does not age the frequencies and may not be resilient to 
> various workload patterns.
> W-TinyLFU ([research paper|http://arxiv.org/pdf/1512.00727.pdf]) records the 
> frequency in a counting sketch, ages periodically by halving the counters, 
> and orders entries by SLRU. An entry is discarded by comparing the frequency 
> of the new arrival (candidate) to the SLRU's victim, and keeping the one with 
> the highest frequency. This allows the operations to be performed in O(1) 
> time and, through the use of a compact sketch, a much larger history is 
> retained beyond the current working set. In a variety of real world traces 
> the policy had [near optimal hit 
> rates|https://github.com/ben-manes/caffeine/wiki/Efficiency].
> Concurrency is achieved by buffering and replaying the operations, similar to 
> a write-ahead log. A read is recorded into a striped ring buffer and writes 
> to a queue. The operations are applied in batches under a try-lock by an 
> asynchronous thread, thereby tracking the usage pattern without incurring high 
> latencies 
> ([benchmarks|https://github.com/ben-manes/caffeine/wiki/Benchmarks#server-class]).
> In YCSB benchmarks the results were inconclusive. For a large cache (99% hit 
> rates) the two caches have near identical throughput and latencies with 
> LruBlockCache narrowly winning. At medium and small caches, TinyLFU had a 
> 1-4% hit rate improvement and therefore lower latencies. The lackluster 
> result is because a synthetic Zipfian distribution is used, which SLRU 
> performs optimally. In a more varied, real-world workload we'd expect to see 
> improvements by being able to make smarter predictions.
> The provided patch implements BlockCache using the 
> [Caffeine|https://github.com/ben-manes/caffeine] caching library (see 
> HighScalability 
> [article|http://highscalability.com/blog/2016/1/25/design-of-a-modern-cache.html]).
> Edward Bortnikov and Eshcar Hillel have graciously provided guidance for 
> evaluating this patch ([github 
> branch|https://github.com/ben-manes/hbase/tree/tinylfu]).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22114) Port HBASE-15560 (TinyLFU-based BlockCache) to branch-1

2019-04-03 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809135#comment-16809135
 ] 

Andrew Purtell commented on HBASE-22114:


Good news is I've been testing TinyLFU under load in a small cluster, with and 
without chaos, and it's stable.

> Port HBASE-15560 (TinyLFU-based BlockCache) to branch-1
> ---
>
> Key: HBASE-22114
> URL: https://issues.apache.org/jira/browse/HBASE-22114
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 1.6.0
>
> Attachments: HBASE-22114-branch-1.patch, HBASE-22114-branch-1.patch, 
> HBASE-22114-branch-1.patch
>
>
> HBASE-15560 introduces the TinyLFU cache policy for the blockcache.
> W-TinyLFU ([research paper|http://arxiv.org/pdf/1512.00727.pdf]) records the 
> frequency in a counting sketch, ages periodically by halving the counters, 
> and orders entries by SLRU. An entry is discarded by comparing the frequency 
> of the new arrival (candidate) to the SLRU's victim, and keeping the one with 
> the highest frequency. This allows the operations to be performed in O(1) 
> time and, through the use of a compact sketch, a much larger history is 
> retained beyond the current working set. In a variety of real world traces 
> the policy had [near optimal hit 
> rates|https://github.com/ben-manes/caffeine/wiki/Efficiency].
> The implementation of HBASE-15560 uses several Java 8 idioms, depends on JRE 
> 8+ type Optional, and has dependencies on libraries compiled with Java 8+ 
> bytecode. It could be backported to branch-1 but must be made optional both 
> at compile time and runtime, enabled by the 'build-with-jdk8' build profile.
> The TinyLFU policy must go into its own build module.
> The blockcache must be modified to load L1 implementation/policy dynamically 
> at startup by reflection if the policy is "TinyLFU"
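
A rough, generic sketch of what loading a policy by reflection can look like; the interface and constructor below are placeholders, not the actual HBase classes or the patch's code:
{code:java}
import java.lang.reflect.Constructor;

public final class CachePolicyLoader {
  /** Placeholder for whatever cache interface the loaded policy implements. */
  public interface CachePolicy {}

  /** Instantiate the policy by class name so the JDK8-only module stays optional at runtime. */
  public static CachePolicy load(String className, long capacityBytes) throws ReflectiveOperationException {
    Class<?> clazz = Class.forName(className);
    Constructor<?> ctor = clazz.getConstructor(long.class);
    return (CachePolicy) ctor.newInstance(capacityBytes);
  }
}
{code}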



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22149) HBOSS: A FileSystem implementation to provide HBase's required semantics

2019-04-03 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809133#comment-16809133
 ] 

Sean Mackrory commented on HBASE-22149:
---

{quote}Also shouldn't be the case for hbase, given every path under the rename 
would already be owned by hbase user.{quote}
Yeah - I think for v1 at least we can just document a need to have consistent 
access permissions. Of course since the ACLs are external it's a little less 
trivial for us to validate and fix, so I'll keep it on my wish-list to have 
more robust renames...

{quote}maybe just mock S3 FS implementation{quote}
Yeah, I'm actually playing around with switching the tests over to that. When 
porting this module into the HBase codebase I had some other issues with the tests. 
I'll post another patch with all the feedback so far when I've gotten further 
with them.

Your other 2 points are good ones and I'll incorporate them.

> HBOSS: A FileSystem implementation to provide HBase's required semantics
> 
>
> Key: HBASE-22149
> URL: https://issues.apache.org/jira/browse/HBASE-22149
> Project: HBase
>  Issue Type: New Feature
>  Components: Filesystem Integration
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HBASE-22149-hadoop.patch
>
>
> (Have been using the name HBOSS for HBase / Object Store Semantics)
> I've had some thoughts about how to solve the problem of running HBase on 
> object stores. There has been some thought in the past about adding the 
> required semantics to S3Guard, but I have some concerns about that. First, 
> it's mixing complicated solutions to different problems (bridging the gap 
> between a flat namespace and a hierarchical namespace vs. solving 
> inconsistency). Second, it's S3-specific, whereas other object stores could 
> use virtually identical solutions. And third, we can't do things like atomic 
> renames in a true sense. There would have to be some trade-offs specific to 
> HBase's needs and it's better if we can solve that in an HBase-specific 
> module without mixing all that logic in with the rest of S3A.
> Ideas to solve this above the FileSystem layer have been proposed and 
> considered (HBASE-20431, for one), and maybe that's the right way forward 
> long-term, but it certainly seems to be a hard problem and hasn't been done 
> yet. But I don't know enough of all the internal considerations to make much 
> of a judgment on that myself.
> I propose a FileSystem implementation that wraps another FileSystem instance 
> and provides locking of FileSystem operations to ensure correct semantics. 
> Locking could quite possibly be done on the same ZooKeeper ensemble as an 
> HBase cluster already uses (I'm sure there are some performance 
> considerations here that deserve more attention). I've put together a 
> proof-of-concept on which I've tested some aspects of atomic renames and 
> atomic file creates. Both of these tests fail reliably on a naked s3a 
> instance. I've also done a small YCSB run against a small cluster to sanity 
> check other functionality and was successful. I will post the patch, and my 
> laundry list of things that still need work. The WAL is still placed on HDFS, 
> but the HBase root directory is otherwise on S3.
> Note that my prototype is built on Hadoop's source tree right now. That's 
> purely for my convenience in putting it together quickly, as that's where I 
> mostly work. I actually think long-term, if this is accepted as a good 
> solution, it makes sense to live in HBase (or its own repository). It only 
> depends on stable, public APIs in Hadoop and is targeted entirely at HBase's 
> needs, so it should be able to iterate on the HBase community's terms alone.
> Another idea [~ste...@apache.org] proposed to me is that of an inode-based 
> FileSystem that keeps hierarchical metadata in a more appropriate store that 
> would allow the required transactions (maybe a special table in HBase could 
> provide that store itself for other tables), and stores the underlying files 
> with unique identifiers on S3. This allows renames to actually become fast 
> instead of just large atomic operations. It does however place a strong 
> dependency on the metadata store. I have not explored this idea much. My 
> current proof-of-concept has been pleasantly simple, so I think it's the 
> right solution unless it proves unable to provide the required performance 
> characteristics.
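
To make the FileSystem-wrapper idea described above concrete, here is a deliberately oversimplified sketch: a FilterFileSystem that serializes renames behind a single in-process lock. HBOSS itself locks per subtree via ZooKeeper so that multiple clients are covered; this stand-in only shows the wrapper shape, not the real locking:
{code:java}
import java.io.IOException;
import java.util.concurrent.locks.ReentrantLock;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FilterFileSystem;
import org.apache.hadoop.fs.Path;

public class LockingFileSystem extends FilterFileSystem {
  private final ReentrantLock renameLock = new ReentrantLock();

  public LockingFileSystem(FileSystem wrapped) {
    super(wrapped);
  }

  @Override
  public boolean rename(Path src, Path dst) throws IOException {
    renameLock.lock();
    try {
      // The underlying store's (non-atomic on s3a) rename runs while the lock is held.
      return super.rename(src, dst);
    } finally {
      renameLock.unlock();
    }
  }
}
{code}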



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22149) HBOSS: A FileSystem implementation to provide HBase's required semantics

2019-04-03 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809131#comment-16809131
 ] 

Sean Mackrory commented on HBASE-22149:
---

{quote}S3Guard must still be enabled{quote}
{quote}Why not mirror instead to a hierarchy of znodes?{quote}

Indeed, S3Guard is required as well. S3Guard is entirely pluggable (like this, 
we have Null and Local implementations in addition to the Dynamo one), so a 
ZooKeeper implementation is quite feasible as well. I actually suggested as 
much in the early days of S3Guard since a ZooKeeper ensemble is a de-facto 
requirement for Hadoop already, but nothing happened for performance reasons. 
If you think the slower metadata lookups wouldn't be a problem for HBase, 
that's worth looking into.

> HBOSS: A FileSystem implementation to provide HBase's required semantics
> 
>
> Key: HBASE-22149
> URL: https://issues.apache.org/jira/browse/HBASE-22149
> Project: HBase
>  Issue Type: New Feature
>  Components: Filesystem Integration
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HBASE-22149-hadoop.patch
>
>
> (Have been using the name HBOSS for HBase / Object Store Semantics)
> I've had some thoughts about how to solve the problem of running HBase on 
> object stores. There has been some thought in the past about adding the 
> required semantics to S3Guard, but I have some concerns about that. First, 
> it's mixing complicated solutions to different problems (bridging the gap 
> between a flat namespace and a hierarchical namespace vs. solving 
> inconsistency). Second, it's S3-specific, whereas other object stores could 
> use virtually identical solutions. And third, we can't do things like atomic 
> renames in a true sense. There would have to be some trade-offs specific to 
> HBase's needs and it's better if we can solve that in an HBase-specific 
> module without mixing all that logic in with the rest of S3A.
> Ideas to solve this above the FileSystem layer have been proposed and 
> considered (HBASE-20431, for one), and maybe that's the right way forward 
> long-term, but it certainly seems to be a hard problem and hasn't been done 
> yet. But I don't know enough of all the internal considerations to make much 
> of a judgment on that myself.
> I propose a FileSystem implementation that wraps another FileSystem instance 
> and provides locking of FileSystem operations to ensure correct semantics. 
> Locking could quite possibly be done on the same ZooKeeper ensemble as an 
> HBase cluster already uses (I'm sure there are some performance 
> considerations here that deserve more attention). I've put together a 
> proof-of-concept on which I've tested some aspects of atomic renames and 
> atomic file creates. Both of these tests fail reliably on a naked s3a 
> instance. I've also done a small YCSB run against a small cluster to sanity 
> check other functionality and was successful. I will post the patch, and my 
> laundry list of things that still need work. The WAL is still placed on HDFS, 
> but the HBase root directory is otherwise on S3.
> Note that my prototype is built on Hadoop's source tree right now. That's 
> purely for my convenience in putting it together quickly, as that's where I 
> mostly work. I actually think long-term, if this is accepted as a good 
> solution, it makes sense to live in HBase (or its own repository). It only 
> depends on stable, public APIs in Hadoop and is targeted entirely at HBase's 
> needs, so it should be able to iterate on the HBase community's terms alone.
> Another idea [~ste...@apache.org] proposed to me is that of an inode-based 
> FileSystem that keeps hierarchical metadata in a more appropriate store that 
> would allow the required transactions (maybe a special table in HBase could 
> provide that store itself for other tables), and stores the underlying files 
> with unique identifiers on S3. This allows renames to actually become fast 
> instead of just large atomic operations. It does however place a strong 
> dependency on the metadata store. I have not explored this idea much. My 
> current proof-of-concept has been pleasantly simple, so I think it's the 
> right solution unless it proves unable to provide the required performance 
> characteristics.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22114) Port HBASE-15560 (TinyLFU-based BlockCache) to branch-1

2019-04-03 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-22114:
---
Attachment: HBASE-22114-branch-1.patch

> Port HBASE-15560 (TinyLFU-based BlockCache) to branch-1
> ---
>
> Key: HBASE-22114
> URL: https://issues.apache.org/jira/browse/HBASE-22114
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 1.6.0
>
> Attachments: HBASE-22114-branch-1.patch, HBASE-22114-branch-1.patch, 
> HBASE-22114-branch-1.patch
>
>
> HBASE-15560 introduces the TinyLFU cache policy for the blockcache.
> W-TinyLFU ([research paper|http://arxiv.org/pdf/1512.00727.pdf]) records the 
> frequency in a counting sketch, ages periodically by halving the counters, 
> and orders entries by SLRU. An entry is discarded by comparing the frequency 
> of the new arrival (candidate) to the SLRU's victim, and keeping the one with 
> the highest frequency. This allows the operations to be performed in O(1) 
> time and, through the use of a compact sketch, a much larger history is 
> retained beyond the current working set. In a variety of real world traces 
> the policy had [near optimal hit 
> rates|https://github.com/ben-manes/caffeine/wiki/Efficiency].
> The implementation of HBASE-15560 uses several Java 8 idioms, depends on JRE 
> 8+ type Optional, and has dependencies on libraries compiled with Java 8+ 
> bytecode. It could be backported to branch-1 but must be made optional both 
> at compile time and runtime, enabled by the 'build-with-jdk8' build profile.
> The TinyLFU policy must go into its own build module.
> The blockcache must be modified to load L1 implementation/policy dynamically 
> at startup by reflection if the policy is "TinyLFU"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-15560) TinyLFU-based BlockCache

2019-04-03 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-15560:
---
Attachment: HBASE-15560.patch

> TinyLFU-based BlockCache
> 
>
> Key: HBASE-15560
> URL: https://issues.apache.org/jira/browse/HBASE-15560
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache
>Affects Versions: 2.0.0
>Reporter: Ben Manes
>Assignee: Ben Manes
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, bc.hit.count, bc.miss.count, branch-1.tinylfu.txt, gets, 
> run_ycsb_c.sh, run_ycsb_loading.sh, tinylfu.patch
>
>
> LruBlockCache uses the Segmented LRU (SLRU) policy to capture frequency and 
> recency of the working set. It achieves concurrency by using an O( n ) 
> background thread to prioritize the entries and evict. Accessing an entry is 
> O(1) by a hash table lookup, recording its logical access time, and setting a 
> frequency flag. A write is performed in O(1) time by updating the hash table 
> and triggering an async eviction thread. This provides ideal concurrency and 
> minimizes the latencies by penalizing the thread instead of the caller. 
> However the policy does not age the frequencies and may not be resilient to 
> various workload patterns.
> W-TinyLFU ([research paper|http://arxiv.org/pdf/1512.00727.pdf]) records the 
> frequency in a counting sketch, ages periodically by halving the counters, 
> and orders entries by SLRU. An entry is discarded by comparing the frequency 
> of the new arrival (candidate) to the SLRU's victim, and keeping the one with 
> the highest frequency. This allows the operations to be performed in O(1) 
> time and, through the use of a compact sketch, a much larger history is 
> retained beyond the current working set. In a variety of real world traces 
> the policy had [near optimal hit 
> rates|https://github.com/ben-manes/caffeine/wiki/Efficiency].
> Concurrency is achieved by buffering and replaying the operations, similar to 
> a write-ahead log. A read is recorded into a striped ring buffer and writes 
> to a queue. The operations are applied in batches under a try-lock by an 
> asynchronous thread, thereby tracking the usage pattern without incurring high 
> latencies 
> ([benchmarks|https://github.com/ben-manes/caffeine/wiki/Benchmarks#server-class]).
> In YCSB benchmarks the results were inconclusive. For a large cache (99% hit 
> rates) the two caches have near identical throughput and latencies with 
> LruBlockCache narrowly winning. At medium and small caches, TinyLFU had a 
> 1-4% hit rate improvement and therefore lower latencies. The lackluster 
> result is because a synthetic Zipfian distribution is used, which SLRU 
> performs optimally. In a more varied, real-world workload we'd expect to see 
> improvements by being able to make smarter predictions.
> The provided patch implements BlockCache using the 
> [Caffeine|https://github.com/ben-manes/caffeine] caching library (see 
> HighScalability 
> [article|http://highscalability.com/blog/2016/1/25/design-of-a-modern-cache.html]).
> Edward Bortnikov and Eshcar Hillel have graciously provided guidance for 
> evaluating this patch ([github 
> branch|https://github.com/ben-manes/hbase/tree/tinylfu]).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22114) Port HBASE-15560 (TinyLFU-based BlockCache) to branch-1

2019-04-03 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809117#comment-16809117
 ] 

Andrew Purtell commented on HBASE-22114:


Need to downgrade to caffeine 2.6.2 per parent. Let me do that and post an 
update shortly.

> Port HBASE-15560 (TinyLFU-based BlockCache) to branch-1
> ---
>
> Key: HBASE-22114
> URL: https://issues.apache.org/jira/browse/HBASE-22114
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 1.6.0
>
> Attachments: HBASE-22114-branch-1.patch, HBASE-22114-branch-1.patch
>
>
> HBASE-15560 introduces the TinyLFU cache policy for the blockcache.
> W-TinyLFU ([research paper|http://arxiv.org/pdf/1512.00727.pdf]) records the 
> frequency in a counting sketch, ages periodically by halving the counters, 
> and orders entries by SLRU. An entry is discarded by comparing the frequency 
> of the new arrival (candidate) to the SLRU's victim, and keeping the one with 
> the highest frequency. This allows the operations to be performed in O(1) 
> time and, through the use of a compact sketch, a much larger history is 
> retained beyond the current working set. In a variety of real world traces 
> the policy had [near optimal hit 
> rates|https://github.com/ben-manes/caffeine/wiki/Efficiency].
> The implementation of HBASE-15560 uses several Java 8 idioms, depends on JRE 
> 8+ type Optional, and has dependencies on libraries compiled with Java 8+ 
> bytecode. It could be backported to branch-1 but must be made optional both 
> at compile time and runtime, enabled by the 'build-with-jdk8' build profile.
> The TinyLFU policy must go into its own build module.
> The blockcache must be modified to load L1 implementation/policy dynamically 
> at startup by reflection if the policy is "TinyLFU"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-15560) TinyLFU-based BlockCache

2019-04-03 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809100#comment-16809100
 ] 

Andrew Purtell edited comment on HBASE-15560 at 4/3/19 6:58 PM:


org.checkerframework is not a dependency of this patch, not sure where the 
shaded jars or hadoopcheck issue is coming in from.

The reported javac issue is not from this patch. Some other changes to master 
have broken precommit in this regard. I think the recent error-prone work. 

The checkstyle report only flags ImportOrder. As stated on other issues, no 
matter where I move them this happens. I tried to move them here and got an 
ImportOrder warning upon both attempts. Can we disable ImportOrder warnings, 
please?

I've been contributing to this project for more than ten years and am about 
ready to give up, it is so difficult now.


was (Author: apurtell):
org.checkerframework is not a dependency of this patch, not sure where the 
shaded jars or hadoopcheck issue is coming in from.

The reported javac issue is not from this patch.

Basically, some other changes to master have broken precommit.

The checkstyle report only flags ImportOrder. As stated on other issues, no 
matter where I move them this happens. I tried to move them here and got an 
ImportOrder warning upon both attempts. Can we disable ImportOrder warnings, 
please?

I've been contributing to this project for more than ten years and am about 
ready to give up it is so difficult now.

> TinyLFU-based BlockCache
> 
>
> Key: HBASE-15560
> URL: https://issues.apache.org/jira/browse/HBASE-15560
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache
>Affects Versions: 2.0.0
>Reporter: Ben Manes
>Assignee: Ben Manes
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> bc.hit.count, bc.miss.count, branch-1.tinylfu.txt, gets, run_ycsb_c.sh, 
> run_ycsb_loading.sh, tinylfu.patch
>
>
> LruBlockCache uses the Segmented LRU (SLRU) policy to capture frequency and 
> recency of the working set. It achieves concurrency by using an O( n ) 
> background thread to prioritize the entries and evict. Accessing an entry is 
> O(1) by a hash table lookup, recording its logical access time, and setting a 
> frequency flag. A write is performed in O(1) time by updating the hash table 
> and triggering an async eviction thread. This provides ideal concurrency and 
> minimizes the latencies by penalizing the thread instead of the caller. 
> However the policy does not age the frequencies and may not be resilient to 
> various workload patterns.
> W-TinyLFU ([research paper|http://arxiv.org/pdf/1512.00727.pdf]) records the 
> frequency in a counting sketch, ages periodically by halving the counters, 
> and orders entries by SLRU. An entry is discarded by comparing the frequency 
> of the new arrival (candidate) to the SLRU's victim, and keeping the one with 
> the highest frequency. This allows the operations to be performed in O(1) 
> time and, through the use of a compact sketch, a much larger history is 
> retained beyond the current working set. In a variety of real world traces 
> the policy had [near optimal hit 
> rates|https://github.com/ben-manes/caffeine/wiki/Efficiency].
> Concurrency is achieved by buffering and replaying the operations, similar to 
> a write-ahead log. A read is recorded into a striped ring buffer and writes 
> to a queue. The operations are applied in batches under a try-lock by an 
> asynchronous thread, thereby tracking the usage pattern without incurring high 
> latencies 
> ([benchmarks|https://github.com/ben-manes/caffeine/wiki/Benchmarks#server-class]).
> In YCSB benchmarks the results were inconclusive. For a large cache (99% hit 
> rates) the two caches have near identical throughput and latencies with 
> LruBlockCache narrowly winning. At medium and small caches, TinyLFU had a 
> 1-4% hit rate improvement and therefore lower latencies. The lackluster 
> result is because a synthetic Zipfian distribution is used, which SLRU 
> performs optimally. In a more varied, real-world workload we'd expect to see 
> improvements by being able to make smarter predictions.
> The provided patch implements BlockCache using the 
> [Caffeine|https://github.com/ben-manes/caffeine] caching library (see 
> HighScalability 
> [article|http://highscalability.com/blog/2016/1/25/design-of-a-modern-cache.html]).
> Edward Bortnikov and Eshcar Hillel have graciously provided guidance for 
> evaluating this patch ([github 
> branch|https://github.com/ben-manes/hbase/tree/tinylfu]).

[jira] [Commented] (HBASE-15560) TinyLFU-based BlockCache

2019-04-03 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809112#comment-16809112
 ] 

Andrew Purtell commented on HBASE-15560:


Oh thanks [~ben.manes]. Sorry, I came back to this after an extended context 
switch and forgot that the caffeine version was recently changed. Let me 
downgrade to 2.6.2 and try again.

> TinyLFU-based BlockCache
> 
>
> Key: HBASE-15560
> URL: https://issues.apache.org/jira/browse/HBASE-15560
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache
>Affects Versions: 2.0.0
>Reporter: Ben Manes
>Assignee: Ben Manes
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> bc.hit.count, bc.miss.count, branch-1.tinylfu.txt, gets, run_ycsb_c.sh, 
> run_ycsb_loading.sh, tinylfu.patch
>
>
> LruBlockCache uses the Segmented LRU (SLRU) policy to capture frequency and 
> recency of the working set. It achieves concurrency by using an O( n ) 
> background thread to prioritize the entries and evict. Accessing an entry is 
> O(1) by a hash table lookup, recording its logical access time, and setting a 
> frequency flag. A write is performed in O(1) time by updating the hash table 
> and triggering an async eviction thread. This provides ideal concurrency and 
> minimizes the latencies by penalizing the thread instead of the caller. 
> However the policy does not age the frequencies and may not be resilient to 
> various workload patterns.
> W-TinyLFU ([research paper|http://arxiv.org/pdf/1512.00727.pdf]) records the 
> frequency in a counting sketch, ages periodically by halving the counters, 
> and orders entries by SLRU. An entry is discarded by comparing the frequency 
> of the new arrival (candidate) to the SLRU's victim, and keeping the one with 
> the highest frequency. This allows the operations to be performed in O(1) 
> time and, through the use of a compact sketch, a much larger history is 
> retained beyond the current working set. In a variety of real world traces 
> the policy had [near optimal hit 
> rates|https://github.com/ben-manes/caffeine/wiki/Efficiency].
> Concurrency is achieved by buffering and replaying the operations, similar to 
> a write-ahead log. A read is recorded into a striped ring buffer and writes 
> to a queue. The operations are applied in batches under a try-lock by an 
> asynchronous thread, thereby tracking the usage pattern without incurring high 
> latencies 
> ([benchmarks|https://github.com/ben-manes/caffeine/wiki/Benchmarks#server-class]).
> In YCSB benchmarks the results were inconclusive. For a large cache (99% hit 
> rates) the two caches have near identical throughput and latencies with 
> LruBlockCache narrowly winning. At medium and small caches, TinyLFU had a 
> 1-4% hit rate improvement and therefore lower latencies. The lackluster 
> result is because a synthetic Zipfian distribution is used, which SLRU 
> performs optimally. In a more varied, real-world workload we'd expect to see 
> improvements by being able to make smarter predictions.
> The provided patch implements BlockCache using the 
> [Caffeine|https://github.com/ben-manes/caffeine] caching library (see 
> HighScalability 
> [article|http://highscalability.com/blog/2016/1/25/design-of-a-modern-cache.html]).
> Edward Bortnikov and Eshcar Hillel have graciously provided guidance for 
> evaluating this patch ([github 
> branch|https://github.com/ben-manes/hbase/tree/tinylfu]).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-15560) TinyLFU-based BlockCache

2019-04-03 Thread Ben Manes (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809108#comment-16809108
 ] 

Ben Manes edited comment on HBASE-15560 at 4/3/19 6:56 PM:
---

[~apurtell], I'm sorry for the hassle. In 
[2.7.0|https://github.com/ben-manes/caffeine/releases/tag/v2.7.0] we did 
migrate from JSR-305 annotations to ErrorProne's and CheckerFramework's. This 
was to be compatible with Java 9's modules, which don't support split packages. I 
had forgotten HBase's need to handle all transitive dependencies explicitly. 
You could downgrade to {{2.6.2}} if this is too much trouble.


was (Author: ben.manes):
[~apurtell], I'm sorry for the hassle. In 
[2.7.0|[https://github.com/ben-manes/caffeine/releases/tag/v2.7.0]] we did 
migrate from JSR-305 annotations to ErrorProne's and CheckerFramework's. This 
was to be compatible Java 9's modules, which doesn't support split packages. I 
had forgotten HBase's need to handle all transitive dependencies explicitly. 
You could downgrade to `2.6.2` if this is too much trouble.

> TinyLFU-based BlockCache
> 
>
> Key: HBASE-15560
> URL: https://issues.apache.org/jira/browse/HBASE-15560
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache
>Affects Versions: 2.0.0
>Reporter: Ben Manes
>Assignee: Ben Manes
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> bc.hit.count, bc.miss.count, branch-1.tinylfu.txt, gets, run_ycsb_c.sh, 
> run_ycsb_loading.sh, tinylfu.patch
>
>
> LruBlockCache uses the Segmented LRU (SLRU) policy to capture frequency and 
> recency of the working set. It achieves concurrency by using an O( n ) 
> background thread to prioritize the entries and evict. Accessing an entry is 
> O(1) by a hash table lookup, recording its logical access time, and setting a 
> frequency flag. A write is performed in O(1) time by updating the hash table 
> and triggering an async eviction thread. This provides ideal concurrency and 
> minimizes the latencies by penalizing the thread instead of the caller. 
> However the policy does not age the frequencies and may not be resilient to 
> various workload patterns.
> W-TinyLFU ([research paper|http://arxiv.org/pdf/1512.00727.pdf]) records the 
> frequency in a counting sketch, ages periodically by halving the counters, 
> and orders entries by SLRU. An entry is discarded by comparing the frequency 
> of the new arrival (candidate) to the SLRU's victim, and keeping the one with 
> the highest frequency. This allows the operations to be performed in O(1) 
> time and, through the use of a compact sketch, a much larger history is 
> retained beyond the current working set. In a variety of real world traces 
> the policy had [near optimal hit 
> rates|https://github.com/ben-manes/caffeine/wiki/Efficiency].
> Concurrency is achieved by buffering and replaying the operations, similar to 
> a write-ahead log. A read is recorded into a striped ring buffer and writes 
> to a queue. The operations are applied in batches under a try-lock by an 
> asynchronous thread, thereby tracking the usage pattern without incurring high 
> latencies 
> ([benchmarks|https://github.com/ben-manes/caffeine/wiki/Benchmarks#server-class]).
> In YCSB benchmarks the results were inconclusive. For a large cache (99% hit 
> rates) the two caches have near identical throughput and latencies with 
> LruBlockCache narrowly winning. At medium and small caches, TinyLFU had a 
> 1-4% hit rate improvement and therefore lower latencies. The lackluster 
> result is because a synthetic Zipfian distribution is used, which SLRU 
> performs optimally. In a more varied, real-world workload we'd expect to see 
> improvements by being able to make smarter predictions.
> The provided patch implements BlockCache using the 
> [Caffeine|https://github.com/ben-manes/caffeine] caching library (see 
> HighScalability 
> [article|http://highscalability.com/blog/2016/1/25/design-of-a-modern-cache.html]).
> Edward Bortnikov and Eshcar Hillel have graciously provided guidance for 
> evaluating this patch ([github 
> branch|https://github.com/ben-manes/hbase/tree/tinylfu]).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-15560) TinyLFU-based BlockCache

2019-04-03 Thread Ben Manes (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809108#comment-16809108
 ] 

Ben Manes commented on HBASE-15560:
---

[~apurtell], I'm sorry for the hassle. In 
[2.7.0|[https://github.com/ben-manes/caffeine/releases/tag/v2.7.0]] we did 
migrate from JSR-305 annotations to ErrorProne's and CheckerFramework's. This 
was to be compatible with Java 9's modules, which don't support split packages. I 
had forgotten HBase's need to handle all transitive dependencies explicitly. 
You could downgrade to `2.6.2` if this is too much trouble.

> TinyLFU-based BlockCache
> 
>
> Key: HBASE-15560
> URL: https://issues.apache.org/jira/browse/HBASE-15560
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache
>Affects Versions: 2.0.0
>Reporter: Ben Manes
>Assignee: Ben Manes
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> bc.hit.count, bc.miss.count, branch-1.tinylfu.txt, gets, run_ycsb_c.sh, 
> run_ycsb_loading.sh, tinylfu.patch
>
>
> LruBlockCache uses the Segmented LRU (SLRU) policy to capture frequency and 
> recency of the working set. It achieves concurrency by using an O( n ) 
> background thread to prioritize the entries and evict. Accessing an entry is 
> O(1) by a hash table lookup, recording its logical access time, and setting a 
> frequency flag. A write is performed in O(1) time by updating the hash table 
> and triggering an async eviction thread. This provides ideal concurrency and 
> minimizes the latencies by penalizing the thread instead of the caller. 
> However the policy does not age the frequencies and may not be resilient to 
> various workload patterns.
> W-TinyLFU ([research paper|http://arxiv.org/pdf/1512.00727.pdf]) records the 
> frequency in a counting sketch, ages periodically by halving the counters, 
> and orders entries by SLRU. An entry is discarded by comparing the frequency 
> of the new arrival (candidate) to the SLRU's victim, and keeping the one with 
> the highest frequency. This allows the operations to be performed in O(1) 
> time and, through the use of a compact sketch, a much larger history is 
> retained beyond the current working set. In a variety of real world traces 
> the policy had [near optimal hit 
> rates|https://github.com/ben-manes/caffeine/wiki/Efficiency].
> Concurrency is achieved by buffering and replaying the operations, similar to 
> a write-ahead log. A read is recorded into a striped ring buffer and writes 
> to a queue. The operations are applied in batches under a try-lock by an 
> asynchronous thread, thereby tracking the usage pattern without incurring high 
> latencies 
> ([benchmarks|https://github.com/ben-manes/caffeine/wiki/Benchmarks#server-class]).
> In YCSB benchmarks the results were inconclusive. For a large cache (99% hit 
> rates) the two caches have near identical throughput and latencies with 
> LruBlockCache narrowly winning. At medium and small caches, TinyLFU had a 
> 1-4% hit rate improvement and therefore lower latencies. The lackluster 
> result is because a synthetic Zipfian distribution is used, which SLRU 
> performs optimally. In a more varied, real-world workload we'd expect to see 
> improvements by being able to make smarter predictions.
> The provided patch implements BlockCache using the 
> [Caffeine|https://github.com/ben-manes/caffeine] caching library (see 
> HighScalability 
> [article|http://highscalability.com/blog/2016/1/25/design-of-a-modern-cache.html]).
> Edward Bortnikov and Eshcar Hillel have graciously provided guidance for 
> evaluating this patch ([github 
> branch|https://github.com/ben-manes/hbase/tree/tinylfu]).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-15560) TinyLFU-based BlockCache

2019-04-03 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809100#comment-16809100
 ] 

Andrew Purtell edited comment on HBASE-15560 at 4/3/19 6:52 PM:


org.checkerframework is not a dependency of this patch, not sure where the 
shaded jars or hadoopcheck issue is coming in from.

The reported javac issue is not from this patch.

Basically, some other changes to master have broken precommit.

The checkstyle report only flags ImportOrder. As stated on other issues, no 
matter where I move them this happens. I tried to move them here and got an 
ImportOrder warning upon both attempts. Can we disable ImportOrder warnings, 
please?

I've been contributing to this project for more than ten years and am about 
ready to give up it is so difficult now.


was (Author: apurtell):
org.checkerframework is not a dependency of this patch, not sure where the 
shaded jars issue is coming in from. 

The reported javac issue is not from this patch.

Basically, some other changes to master have broken precommit.

The checkstyle report only flags ImportOrder. As stated on other issues, no 
matter where I move them this happens. I tried to move them here and got an 
ImportOrder warning upon both attempts. Can we disable ImportOrder warnings, 
please?

I've been contributing to this project for more than ten years and am about 
ready to give up it is so difficult now.

> TinyLFU-based BlockCache
> 
>
> Key: HBASE-15560
> URL: https://issues.apache.org/jira/browse/HBASE-15560
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache
>Affects Versions: 2.0.0
>Reporter: Ben Manes
>Assignee: Ben Manes
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> bc.hit.count, bc.miss.count, branch-1.tinylfu.txt, gets, run_ycsb_c.sh, 
> run_ycsb_loading.sh, tinylfu.patch
>
>
> LruBlockCache uses the Segmented LRU (SLRU) policy to capture frequency and 
> recency of the working set. It achieves concurrency by using an O( n ) 
> background thread to prioritize the entries and evict. Accessing an entry is 
> O(1) by a hash table lookup, recording its logical access time, and setting a 
> frequency flag. A write is performed in O(1) time by updating the hash table 
> and triggering an async eviction thread. This provides ideal concurrency and 
> minimizes the latencies by penalizing the thread instead of the caller. 
> However the policy does not age the frequencies and may not be resilient to 
> various workload patterns.
> W-TinyLFU ([research paper|http://arxiv.org/pdf/1512.00727.pdf]) records the 
> frequency in a counting sketch, ages periodically by halving the counters, 
> and orders entries by SLRU. An entry is discarded by comparing the frequency 
> of the new arrival (candidate) to the SLRU's victim, and keeping the one with 
> the highest frequency. This allows the operations to be performed in O(1) 
> time and, through the use of a compact sketch, a much larger history is 
> retained beyond the current working set. In a variety of real world traces 
> the policy had [near optimal hit 
> rates|https://github.com/ben-manes/caffeine/wiki/Efficiency].
> Concurrency is achieved by buffering and replaying the operations, similar to 
> a write-ahead log. A read is recorded into a striped ring buffer and writes 
> to a queue. The operations are applied in batches under a try-lock by an 
> asynchronous thread, thereby tracking the usage pattern without incurring high 
> latencies 
> ([benchmarks|https://github.com/ben-manes/caffeine/wiki/Benchmarks#server-class]).
> In YCSB benchmarks the results were inconclusive. For a large cache (99% hit 
> rates) the two caches have near identical throughput and latencies with 
> LruBlockCache narrowly winning. At medium and small caches, TinyLFU had a 
> 1-4% hit rate improvement and therefore lower latencies. The lackluster 
> result is because a synthetic Zipfian distribution is used, which SLRU 
> performs optimally. In a more varied, real-world workload we'd expect to see 
> improvements by being able to make smarter predictions.
> The provided patch implements BlockCache using the 
> [Caffeine|https://github.com/ben-manes/caffeine] caching library (see 
> HighScalability 
> [article|http://highscalability.com/blog/2016/1/25/design-of-a-modern-cache.html]).
> Edward Bortnikov and Eshcar Hillel have graciously provided guidance for 
> evaluating this patch ([github 
> branch|https://github.com/ben-manes/hbase/tree/tinylfu]).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

[jira] [Commented] (HBASE-15560) TinyLFU-based BlockCache

2019-04-03 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-15560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809100#comment-16809100
 ] 

Andrew Purtell commented on HBASE-15560:


org.checkerframework is not a dependency of this patch, not sure where the 
shaded jars issue is coming in from. 

The reported javac issue is not from this patch.

Basically, some other changes to master have broken precommit.

The checkstyle report only flags ImportOrder. As stated on other issues, no 
matter where I move them this happens. I tried to move them here and got an 
ImportOrder warning upon both attempts. Can we disable ImportOrder warnings, 
please?

I've been contributing to this project for more than ten years and am about 
ready to give up it is so difficult now.

> TinyLFU-based BlockCache
> 
>
> Key: HBASE-15560
> URL: https://issues.apache.org/jira/browse/HBASE-15560
> Project: HBase
>  Issue Type: Improvement
>  Components: BlockCache
>Affects Versions: 2.0.0
>Reporter: Ben Manes
>Assignee: Ben Manes
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, HBASE-15560.patch, 
> bc.hit.count, bc.miss.count, branch-1.tinylfu.txt, gets, run_ycsb_c.sh, 
> run_ycsb_loading.sh, tinylfu.patch
>
>
> LruBlockCache uses the Segmented LRU (SLRU) policy to capture frequency and 
> recency of the working set. It achieves concurrency by using an O( n ) 
> background thread to prioritize the entries and evict. Accessing an entry is 
> O(1) by a hash table lookup, recording its logical access time, and setting a 
> frequency flag. A write is performed in O(1) time by updating the hash table 
> and triggering an async eviction thread. This provides ideal concurrency and 
> minimizes the latencies by penalizing the thread instead of the caller. 
> However the policy does not age the frequencies and may not be resilient to 
> various workload patterns.
> W-TinyLFU ([research paper|http://arxiv.org/pdf/1512.00727.pdf]) records the 
> frequency in a counting sketch, ages periodically by halving the counters, 
> and orders entries by SLRU. An entry is discarded by comparing the frequency 
> of the new arrival (candidate) to the SLRU's victim, and keeping the one with 
> the highest frequency. This allows the operations to be performed in O(1) 
> time and, through the use of a compact sketch, a much larger history is 
> retained beyond the current working set. In a variety of real world traces 
> the policy had [near optimal hit 
> rates|https://github.com/ben-manes/caffeine/wiki/Efficiency].
> Concurrency is achieved by buffering and replaying the operations, similar to 
> a write-ahead log. A read is recorded into a striped ring buffer and writes 
> to a queue. The operations are applied in batches under a try-lock by an 
> asynchronous thread, thereby tracking the usage pattern without incurring high 
> latencies 
> ([benchmarks|https://github.com/ben-manes/caffeine/wiki/Benchmarks#server-class]).
> In YCSB benchmarks the results were inconclusive. For a large cache (99% hit 
> rates) the two caches have near identical throughput and latencies with 
> LruBlockCache narrowly winning. At medium and small caches, TinyLFU had a 
> 1-4% hit rate improvement and therefore lower latencies. The lackluster 
> result is because a synthetic Zipfian distribution is used, which SLRU 
> performs optimally. In a more varied, real-world workload we'd expect to see 
> improvements by being able to make smarter predictions.
> The provided patch implements BlockCache using the 
> [Caffeine|https://github.com/ben-manes/caffeine] caching library (see 
> HighScalability 
> [article|http://highscalability.com/blog/2016/1/25/design-of-a-modern-cache.html]).
> Edward Bortnikov and Eshcar Hillel have graciously provided guidance for 
> evaluating this patch ([github 
> branch|https://github.com/ben-manes/hbase/tree/tinylfu]).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22114) Port HBASE-15560 (TinyLFU-based BlockCache) to branch-1

2019-04-03 Thread Andrew Purtell (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-22114:
---
Fix Version/s: (was: 1.5.0)
   1.6.0

> Port HBASE-15560 (TinyLFU-based BlockCache) to branch-1
> ---
>
> Key: HBASE-22114
> URL: https://issues.apache.org/jira/browse/HBASE-22114
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 1.6.0
>
> Attachments: HBASE-22114-branch-1.patch, HBASE-22114-branch-1.patch
>
>
> HBASE-15560 introduces the TinyLFU cache policy for the blockcache.
> W-TinyLFU ([research paper|http://arxiv.org/pdf/1512.00727.pdf]) records the 
> frequency in a counting sketch, ages periodically by halving the counters, 
> and orders entries by SLRU. An entry is discarded by comparing the frequency 
> of the new arrival (candidate) to the SLRU's victim, and keeping the one with 
> the highest frequency. This allows the operations to be performed in O(1) 
> time and, through the use of a compact sketch, a much larger history is 
> retained beyond the current working set. In a variety of real world traces 
> the policy had [near optimal hit 
> rates|https://github.com/ben-manes/caffeine/wiki/Efficiency].
> The implementation of HBASE-15560 uses several Java 8 idioms, depends on JRE 
> 8+ type Optional, and has dependencies on libraries compiled with Java 8+ 
> bytecode. It could be backported to branch-1 but must be made optional both 
> at compile time and runtime, enabled by the 'build-with-jdk8' build profile.
> The TinyLFU policy must go into its own build module.
> The blockcache must be modified to load L1 implementation/policy dynamically 
> at startup by reflection if the policy is "TinyLFU"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22114) Port HBASE-15560 (TinyLFU-based BlockCache) to branch-1

2019-04-03 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809096#comment-16809096
 ] 

Andrew Purtell commented on HBASE-22114:


I was hoping to get this into the next 1.5.0 RC but it looks like that isn't 
going to happen. Stymied by precommit limitations, unless we want to plow ahead 
and disregard them.

> Port HBASE-15560 (TinyLFU-based BlockCache) to branch-1
> ---
>
> Key: HBASE-22114
> URL: https://issues.apache.org/jira/browse/HBASE-22114
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Major
> Fix For: 1.5.0
>
> Attachments: HBASE-22114-branch-1.patch, HBASE-22114-branch-1.patch
>
>
> HBASE-15560 introduces the TinyLFU cache policy for the blockcache.
> W-TinyLFU ([research paper|http://arxiv.org/pdf/1512.00727.pdf]) records the 
> frequency in a counting sketch, ages periodically by halving the counters, 
> and orders entries by SLRU. An entry is discarded by comparing the frequency 
> of the new arrival (candidate) to the SLRU's victim, and keeping the one with 
> the highest frequency. This allows the operations to be performed in O(1) 
> time and, through the use of a compact sketch, a much larger history is 
> retained beyond the current working set. In a variety of real world traces 
> the policy had [near optimal hit 
> rates|https://github.com/ben-manes/caffeine/wiki/Efficiency].
> The implementation of HBASE-15560 uses several Java 8 idioms, depends on JRE 
> 8+ type Optional, and has dependencies on libraries compiled with Java 8+ 
> bytecode. It could be backported to branch-1 but must be made optional both 
> at compile time and runtime, enabled by the 'build-with-jdk8' build profile.
> The TinyLFU policy must go into its own build module.
> The blockcache must be modified to load L1 implementation/policy dynamically 
> at startup by reflection if the policy is "TinyLFU"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22149) HBOSS: A FileSystem implementation to provide HBase's required semantics

2019-04-03 Thread Wellington Chevreuil (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16809016#comment-16809016
 ] 

Wellington Chevreuil commented on HBASE-22149:
--

Some thoughts:
 1)
{quote}but there's also a contrived case I'm aware of where you can set up ACLs 
such that the permissions are restricted on a subset of the rename
{quote}
Also shouldn't be the case for hbase, given every path under the rename would 
already be owned by hbase user.

2)
 Seems *TreeLockManager.lockListings(Path[] paths)* can get deadlocked when 
passed a list of hierarchical paths, since it tries to *treeWriteLock* a child 
node of a previous node in the paths array. It doesn't seem to be a call hbase 
would make anyway, but maybe it only needs to lock parent paths when listing.

3)
 Noticed there's no actual dependency on S3/S3Guard; maybe just mocking the S3 FS 
implementation behaviour should suffice.

4) On HBaseObjectStoreSemantics.initialize(), it may be worth resetting 
"fs.SCHEMA.impl" back to HBaseObjectStoreSemantics before returning; otherwise 
subsequent clients trying to acquire an FS implementation with the same 
Configuration instance may not be able to get HBaseObjectStoreSemantics (see the 
sketch after the snippet below).
{noformat}
  public void initialize(URI name, Configuration conf) throws IOException {
String wrappedImpl = conf.get("fs.hboss.fs." + name.getScheme() + ".impl");
if (wrappedImpl != null) {
  conf.set("fs." + name.getScheme() + ".impl", wrappedImpl);
}
fs = FileSystem.get(name, conf);
sync = TreeLockManager.get(name, conf);
  }
{noformat}
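For point 4, a minimal sketch of the suggested change, mirroring the snippet above (TreeLockManager and the fs/sync fields come from the HBOSS patch; whether it is safe to mutate the shared Configuration this way is exactly the open question):

{code:java}
  public void initialize(URI name, Configuration conf) throws IOException {
    String scheme = name.getScheme();
    String wrappedImpl = conf.get("fs.hboss.fs." + scheme + ".impl");
    if (wrappedImpl != null) {
      conf.set("fs." + scheme + ".impl", wrappedImpl);
    }
    fs = FileSystem.get(name, conf);
    sync = TreeLockManager.get(name, conf);
    // Suggested addition: point the scheme back at HBOSS so a later
    // FileSystem.get(name, conf) with this same Configuration resolves to
    // HBaseObjectStoreSemantics again instead of the wrapped implementation.
    conf.set("fs." + scheme + ".impl", HBaseObjectStoreSemantics.class.getName());
  }
{code}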

> HBOSS: A FileSystem implementation to provide HBase's required semantics
> 
>
> Key: HBASE-22149
> URL: https://issues.apache.org/jira/browse/HBASE-22149
> Project: HBase
>  Issue Type: New Feature
>  Components: Filesystem Integration
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HBASE-22149-hadoop.patch
>
>
> (Have been using the name HBOSS for HBase / Object Store Semantics)
> I've had some thoughts about how to solve the problem of running HBase on 
> object stores. There has been some thought in the past about adding the 
> required semantics to S3Guard, but I have some concerns about that. First, 
> it's mixing complicated solutions to different problems (bridging the gap 
> between a flat namespace and a hierarchical namespace vs. solving 
> inconsistency). Second, it's S3-specific, whereas other object stores could 
> use virtually identical solutions. And third, we can't do things like atomic 
> renames in a true sense. There would have to be some trade-offs specific to 
> HBase's needs and it's better if we can solve that in an HBase-specific 
> module without mixing all that logic in with the rest of S3A.
> Ideas to solve this above the FileSystem layer have been proposed and 
> considered (HBASE-20431, for one), and maybe that's the right way forward 
> long-term, but it certainly seems to be a hard problem and hasn't been done 
> yet. But I don't know enough of all the internal considerations to make much 
> of a judgment on that myself.
> I propose a FileSystem implementation that wraps another FileSystem instance 
> and provides locking of FileSystem operations to ensure correct semantics. 
> Locking could quite possibly be done on the same ZooKeeper ensemble as an 
> HBase cluster already uses (I'm sure there are some performance 
> considerations here that deserve more attention). I've put together a 
> proof-of-concept on which I've tested some aspects of atomic renames and 
> atomic file creates. Both of these tests fail reliably on a naked s3a 
> instance. I've also done a small YCSB run against a small cluster to sanity 
> check other functionality and was successful. I will post the patch, and my 
> laundry list of things that still need work. The WAL is still placed on HDFS, 
> but the HBase root directory is otherwise on S3.
> Note that my prototype is built on Hadoop's source tree right now. That's 
> purely for my convenience in putting it together quickly, as that's where I 
> mostly work. I actually think long-term, if this is accepted as a good 
> solution, it makes sense to live in HBase (or its own repository). It only 
> depends on stable, public APIs in Hadoop and is targeted entirely at HBase's 
> needs, so it should be able to iterate on the HBase community's terms alone.
> Another idea [~ste...@apache.org] proposed to me is that of an inode-based 
> FileSystem that keeps hierarchical metadata in a more appropriate store that 
> would allow the required transactions (maybe a special table in HBase could 
> provide that store itself for other tables), and stores the underlying files 
> with unique identifiers on S3. This allows renames to actually become fast 
> i

[jira] [Updated] (HBASE-22161) HBase rest scan silent fail when IOException thrown

2019-04-03 Thread Zhangwei Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhangwei Wu updated HBASE-22161:

Description: 
HBase REST scan may result in incomplete data when 
https://issues.apache.org/jira/browse/HBASE-14533 or any other unexpected 
IOException happens in the HBase REST service;

in ScannerResultGenerator.java @line 181

 
{code:java}
catch (IOException e) {
LOG.error(StringUtils.stringifyException(e));
}
{code}
when a RetriesExhaustedException is thrown, it just eats the exception and the scan 
completes without any error from the client's point of view. 

 

 

  was:
HBase REST scan may result in incomplete data when 
https://issues.apache.org/jira/browse/HBASE-14533 happens in the HBase REST 
service;

in ScannerResultGenerator.java @line 181

 
{code:java}
catch (IOException e) {
LOG.error(StringUtils.stringifyException(e));
}
{code}
when a RetriesExhaustedException is thrown, it just eats the exception and the scan 
completes without any error from the client's point of view. 

 

 


> HBase rest scan silent fail when IOException thrown
> ---
>
> Key: HBASE-22161
> URL: https://issues.apache.org/jira/browse/HBASE-22161
> Project: HBase
>  Issue Type: Bug
>  Components: REST
>Affects Versions: 2.1.4
>Reporter: Zhangwei Wu
>Priority: Major
>
> HBase REST scan may result in incomplete data when 
> https://issues.apache.org/jira/browse/HBASE-14533 or any other unexpected 
> IOException happens in the HBase REST service;
> in ScannerResultGenerator.java @line 181
>  
> {code:java}
> catch (IOException e) {
> LOG.error(StringUtils.stringifyException(e));
> }
> {code}
> when a RetriesExhaustedException is thrown, it just eats the exception and the 
> scan completes without any error from the client's point of view. 
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22161) HBase rest scan silent fail when IOException thrown

2019-04-03 Thread Zhangwei Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhangwei Wu updated HBASE-22161:

Description: 
HBase REST scan may result in incomplete data when 
https://issues.apache.org/jira/browse/HBASE-14533 or any other unexpected 
IOException happens in the HBase REST service;

in ScannerResultGenerator.java @line 181

 
{code:java}
catch (IOException e) {
LOG.error(StringUtils.stringifyException(e));
}
{code}
when a RetriesExhaustedException is thrown, it just eats the exception and the scan 
completes without any error from the client's point of view, which results in severe 
business impact. 

 

  was:
HBase REST scan may result in incomplete data when 
https://issues.apache.org/jira/browse/HBASE-14533 or any other unexpected 
IOException happens in the HBase REST service;

in ScannerResultGenerator.java @line 181

 
{code:java}
catch (IOException e) {
LOG.error(StringUtils.stringifyException(e));
}
{code}
when a RetriesExhaustedException is thrown, it just eats the exception and the scan 
completes without any error from the client's point of view. 

 

 


> HBase rest scan silent fail when IOException thrown
> ---
>
> Key: HBASE-22161
> URL: https://issues.apache.org/jira/browse/HBASE-22161
> Project: HBase
>  Issue Type: Bug
>  Components: REST
>Affects Versions: 2.1.4
>Reporter: Zhangwei Wu
>Priority: Major
>
> HBase REST scan may result in incomplete data when 
> https://issues.apache.org/jira/browse/HBASE-14533 or any other unexpected 
> IOException happens in the HBase REST service;
> in ScannerResultGenerator.java @line 181
>  
> {code:java}
> catch (IOException e) {
> LOG.error(StringUtils.stringifyException(e));
> }
> {code}
> when a RetriesExhaustedException is thrown, it just eats the exception and the 
> scan completes without any error from the client's point of view, which results 
> in severe business impact. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-22161) HBase rest scan silent fail when IOException thrown

2019-04-03 Thread Zhangwei Wu (JIRA)
Zhangwei Wu created HBASE-22161:
---

 Summary: HBase rest scan silent fail when IOException thrown
 Key: HBASE-22161
 URL: https://issues.apache.org/jira/browse/HBASE-22161
 Project: HBase
  Issue Type: Bug
  Components: REST
Affects Versions: 2.1.4
Reporter: Zhangwei Wu


HBase REST scan may result in incomplete data when 
https://issues.apache.org/jira/browse/HBASE-14533 happens in the HBase REST 
service;

in ScannerResultGenerator.java @line 181

 
{code:java}
catch (IOException e) {
LOG.error(StringUtils.stringifyException(e));
}
{code}
when a RetriesExhaustedException is thrown, it just eats the exception and the scan 
completes without any error from the client's point of view. 
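One possible direction, sketched only (whether the surrounding REST resource can translate an unchecked exception into a proper HTTP error status would still need to be verified):

{code:java}
catch (IOException e) {
  LOG.error(StringUtils.stringifyException(e));
  // Propagate instead of swallowing, so the REST layer can surface an error
  // to the client rather than silently ending the scan early.
  throw new java.io.UncheckedIOException(e);
}
{code}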

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-22149) HBOSS: A FileSystem implementation to provide HBase's required semantics

2019-04-03 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16808974#comment-16808974
 ] 

Andrew Purtell edited comment on HBASE-22149 at 4/3/19 5:51 PM:


Some thoughts.

S3Guard must still be enabled. While namespace consistency is orthogonal to 
locking, ideally we can also eliminate the dependency on S3Guard, because this 
impacts the costs-to-serve of the AWS hosted HBase service. Maybe this is 
something that could be addressed here too. S3Guard mirrors the S3 namespace in 
a Dynamo table. Why not mirror instead to a hierarchy of znodes? A downside I 
can see to this approach is the ZooKeeper write scalability limit will impose a 
ceiling on the concurrency of filesystem namespace operations, and ZAB 
broadcast latency will raise the floor for the latency of each change. Dynamo 
will have a very different performance profile. ZooKeeper will be limited to 
the deployed resources; Dynamo scaling is "infinite". Maybe not a problem in 
practice, but something to understand well before any production deploy.

The scalability of the path-based locking approach given HBase's access 
patterns is probably ok. We write relatively rarely, and when we do the writes 
for each region go into their own separate directories. We can expect locks on 
the "directory path" for each region directory. Locks for writes in one region 
are independent of all other locks for all other regions. However, resources 
required for locks and activity related to locking will grow linearly with 
respect to the number of regions. I am concerned about the write scalability of 
the ZooKeeper service when servicing a cluster with a very large number of 
regions. Would be curious to see the results of an experiment to assess this.


was (Author: apurtell):
Some thoughts.

S3Guard must still be enabled. While namespace consistency is orthogonal to 
locking, ideally we can also eliminate the dependency on S3Guard, because this 
impacts the costs-to-serve of the AWS hosted HBase service. Maybe this is 
something that could be addressed here too. S3Guard mirrors the S3 namespace in 
a Dynamo table. Why not mirror instead to a hierarchy of znodes? 

The scalability of the path-based locking approach given HBase's access 
patterns is probably ok. We write relatively rarely, and when we do the writes 
for each region go into their own separate directories. We can expect locks on 
the "directory path" for each region directory. Locks for writes in one region 
are independent of all other locks for all other regions. However, resources 
required for locks and activity related to locking will grow linearly with 
respect to the number of regions. I am concerned about the write scalability of 
the ZooKeeper service when servicing a cluster with a very large number of 
regions. Would be curious to see the results of an experiment to assess this.

> HBOSS: A FileSystem implementation to provide HBase's required semantics
> 
>
> Key: HBASE-22149
> URL: https://issues.apache.org/jira/browse/HBASE-22149
> Project: HBase
>  Issue Type: New Feature
>  Components: Filesystem Integration
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HBASE-22149-hadoop.patch
>
>
> (Have been using the name HBOSS for HBase / Object Store Semantics)
> I've had some thoughts about how to solve the problem of running HBase on 
> object stores. There has been some thought in the past about adding the 
> required semantics to S3Guard, but I have some concerns about that. First, 
> it's mixing complicated solutions to different problems (bridging the gap 
> between a flat namespace and a hierarchical namespace vs. solving 
> inconsistency). Second, it's S3-specific, whereas other object stores could 
> use virtually identical solutions. And third, we can't do things like atomic 
> renames in a true sense. There would have to be some trade-offs specific to 
> HBase's needs and it's better if we can solve that in an HBase-specific 
> module without mixing all that logic in with the rest of S3A.
> Ideas to solve this above the FileSystem layer have been proposed and 
> considered (HBASE-20431, for one), and maybe that's the right way forward 
> long-term, but it certainly seems to be a hard problem and hasn't been done 
> yet. But I don't know enough of all the internal considerations to make much 
> of a judgment on that myself.
> I propose a FileSystem implementation that wraps another FileSystem instance 
> and provides locking of FileSystem operations to ensure correct semantics. 
> Locking could quite possibly be done on the same ZooKeeper ensemble as an 
> HBase cluster already uses (I'm sure there are some performance 

[jira] [Commented] (HBASE-22149) HBOSS: A FileSystem implementation to provide HBase's required semantics

2019-04-03 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16808974#comment-16808974
 ] 

Andrew Purtell commented on HBASE-22149:


Some thoughts.

S3Guard must still be enabled. While namespace consistency is orthogonal to 
locking, ideally we can also eliminate the dependency on S3Guard, because this 
impacts the costs-to-serve of the AWS hosted HBase service. Maybe this is 
something that could be addressed here too. S3Guard mirrors the S3 namespace in 
a Dynamo table. Why not mirror instead to a hierarchy of znodes? 

The scalability of the path-based locking approach given HBase's access 
patterns is probably ok. We write relatively rarely, and when we do the writes 
for each region go into their own separate directories. We can expect locks on 
the "directory path" for each region directory. Locks for writes in one region 
are independent of all other locks for all other regions. However, resources 
required for locks and activity related to locking will grow linearly with 
respect to the number of regions. I am concerned about the write scalability of 
the ZooKeeper service when servicing a cluster with a very large number of 
regions. Would be curious to see the results of an experiment to assess this.
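
To make the per-region locking pattern concrete, a small sketch using Apache Curator's read/write lock recipe (an assumption for illustration; HBOSS's TreeLockManager may do this differently). Each region directory maps to its own lock path, so writes to different regions don't contend, but the number of lock znodes grows with the number of regions:

{code:java}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessReadWriteLock;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class RegionPathLockSketch {
  public static void main(String[] args) throws Exception {
    CuratorFramework zk = CuratorFrameworkFactory.newClient(
        "localhost:2181", new ExponentialBackoffRetry(1000, 3));
    zk.start();

    // One lock path per region directory (the path shown is illustrative).
    String regionLockPath = "/hboss/locks/data/default/t1/1588230740";
    InterProcessReadWriteLock lock = new InterProcessReadWriteLock(zk, regionLockPath);

    lock.writeLock().acquire();
    try {
      // renames/creates under this region directory would happen here
    } finally {
      lock.writeLock().release();
    }
    zk.close();
  }
}
{code}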

> HBOSS: A FileSystem implementation to provide HBase's required semantics
> 
>
> Key: HBASE-22149
> URL: https://issues.apache.org/jira/browse/HBASE-22149
> Project: HBase
>  Issue Type: New Feature
>  Components: Filesystem Integration
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HBASE-22149-hadoop.patch
>
>
> (Have been using the name HBOSS for HBase / Object Store Semantics)
> I've had some thoughts about how to solve the problem of running HBase on 
> object stores. There has been some thought in the past about adding the 
> required semantics to S3Guard, but I have some concerns about that. First, 
> it's mixing complicated solutions to different problems (bridging the gap 
> between a flat namespace and a hierarchical namespace vs. solving 
> inconsistency). Second, it's S3-specific, whereas other object stores could 
> use virtually identical solutions. And third, we can't do things like atomic 
> renames in a true sense. There would have to be some trade-offs specific to 
> HBase's needs and it's better if we can solve that in an HBase-specific 
> module without mixing all that logic in with the rest of S3A.
> Ideas to solve this above the FileSystem layer have been proposed and 
> considered (HBASE-20431, for one), and maybe that's the right way forward 
> long-term, but it certainly seems to be a hard problem and hasn't been done 
> yet. But I don't know enough of all the internal considerations to make much 
> of a judgment on that myself.
> I propose a FileSystem implementation that wraps another FileSystem instance 
> and provides locking of FileSystem operations to ensure correct semantics. 
> Locking could quite possibly be done on the same ZooKeeper ensemble as an 
> HBase cluster already uses (I'm sure there are some performance 
> considerations here that deserve more attention). I've put together a 
> proof-of-concept on which I've tested some aspects of atomic renames and 
> atomic file creates. Both of these tests fail reliably on a naked s3a 
> instance. I've also done a small YCSB run against a small cluster to sanity 
> check other functionality and was successful. I will post the patch, and my 
> laundry list of things that still need work. The WAL is still placed on HDFS, 
> but the HBase root directory is otherwise on S3.
> Note that my prototype is built on Hadoop's source tree right now. That's 
> purely for my convenience in putting it together quickly, as that's where I 
> mostly work. I actually think long-term, if this is accepted as a good 
> solution, it makes sense to live in HBase (or its own repository). It only 
> depends on stable, public APIs in Hadoop and is targeted entirely at HBase's 
> needs, so it should be able to iterate on the HBase community's terms alone.
> Another idea [~ste...@apache.org] proposed to me is that of an inode-based 
> FileSystem that keeps hierarchical metadata in a more appropriate store that 
> would allow the required transactions (maybe a special table in HBase could 
> provide that store itself for other tables), and stores the underlying files 
> with unique identifiers on S3. This allows renames to actually become fast 
> instead of just large atomic operations. It does however place a strong 
> dependency on the metadata store. I have not explored this idea much. My 
> current proof-of-concept has been pleasantly simple, so I think it's the 
> right solution unless it proves unable to
