[jira] [Commented] (HBASE-6721) RegionServer Group based Assignment

2015-12-10 Thread Francis Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052376#comment-15052376
 ] 

Francis Liu commented on HBASE-6721:


[~tedyu] It looks like fixing the normalizer made the core tests pass; I'm not 
able to reproduce the shell failure you reported.

> RegionServer Group based Assignment
> ---
>
> Key: HBASE-6721
> URL: https://issues.apache.org/jira/browse/HBASE-6721
> Project: HBase
>  Issue Type: New Feature
>Reporter: Francis Liu
>Assignee: Francis Liu
>  Labels: hbase-6721
> Attachments: 6721-master-webUI.patch, HBASE-6721 
> GroupBasedLoadBalancer Sequence Diagram.xml, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, HBASE-6721-DesigDoc.pdf, 
> HBASE-6721_0.98_2.patch, HBASE-6721_10.patch, HBASE-6721_11.patch, 
> HBASE-6721_12.patch, HBASE-6721_13.patch, HBASE-6721_14.patch, 
> HBASE-6721_15.patch, HBASE-6721_8.patch, HBASE-6721_9.patch, 
> HBASE-6721_9.patch, HBASE-6721_94.patch, HBASE-6721_94.patch, 
> HBASE-6721_94_2.patch, HBASE-6721_94_3.patch, HBASE-6721_94_3.patch, 
> HBASE-6721_94_4.patch, HBASE-6721_94_5.patch, HBASE-6721_94_6.patch, 
> HBASE-6721_94_7.patch, HBASE-6721_98_1.patch, HBASE-6721_98_2.patch, 
> HBASE-6721_hbase-6721_addendum.patch, HBASE-6721_trunk.patch, 
> HBASE-6721_trunk.patch, HBASE-6721_trunk.patch, HBASE-6721_trunk1.patch, 
> HBASE-6721_trunk2.patch, balanceCluster Sequence Diagram.svg, 
> hbase-6721-v15-branch-1.1.patch, hbase-6721-v16.patch, hbase-6721-v17.patch, 
> hbase-6721-v18.patch, hbase-6721-v19.patch, immediateAssignments Sequence 
> Diagram.svg, randomAssignment Sequence Diagram.svg, retainAssignment Sequence 
> Diagram.svg, roundRobinAssignment Sequence Diagram.svg
>
>
> In multi-tenant deployments of HBase, it is likely that a RegionServer will 
> be serving out regions from a number of different tables owned by various 
> client applications. Being able to group a subset of running RegionServers 
> and assign specific tables to that group provides a client application a 
> level of isolation and resource allocation.
> The proposal essentially is to have an AssignmentManager which is aware of 
> RegionServer groups and assigns tables to region servers based on groupings. 
> Load balancing will occur on a per group basis as well. 
> This is essentially a simplification of the approach taken in HBASE-4120. See 
> attached document.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14769) Remove unused functions and duplicate javadocs from HBaseAdmin

2015-12-10 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052369#comment-15052369
 ] 

Appy commented on HBASE-14769:
--

Thanks [~stack] for committing the patch. :-)

> Remove unused functions and duplicate javadocs from HBaseAdmin 
> ---
>
> Key: HBASE-14769
> URL: https://issues.apache.org/jira/browse/HBASE-14769
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-14769-master-v2.patch, 
> HBASE-14769-master-v3.patch, HBASE-14769-master-v4.patch, 
> HBASE-14769-master-v5.patch, HBASE-14769-master-v6.patch, 
> HBASE-14769-master-v7.patch, HBASE-14769-master-v8.patch, 
> HBASE-14769-master-v9.patch, HBASE-14769-master.patch
>
>
> HBaseAdmin is marked private, so we are removing the functions that are not 
> used anywhere.
> Also, the javadocs of the overridden functions are the same as the 
> corresponding ones in Admin.java. Since javadocs are automatically inherited 
> from the interface, we can remove these hundreds of redundant lines.
> Link to discussion, if it was okay to remove the functions: 
> http://mail-archives.apache.org/mod_mbox/hbase-dev/201512.mbox/%3CCAAjhxrovmK8AYQBA9YJJYBEgTZamav4nOtzrcWsdUiisX69qMA%40mail.gmail.com%3E





[jira] [Updated] (HBASE-14769) Remove unused functions and duplicate javadocs from HBaseAdmin

2015-12-10 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-14769:
-
Description: 
HBaseAdmin is marked private, so we are removing the functions that are not 
used anywhere.
Also, the javadocs of the overridden functions are the same as the 
corresponding ones in Admin.java. Since javadocs are automatically inherited 
from the interface, we can remove these hundreds of redundant lines.

Link to discussion, if it was okay to remove the functions: 
http://mail-archives.apache.org/mod_mbox/hbase-dev/201512.mbox/%3CCAAjhxrovmK8AYQBA9YJJYBEgTZamav4nOtzrcWsdUiisX69qMA%40mail.gmail.com%3E

  was:
HBaseAdmin is marked private, so removing the functions not being used anywhere.
Also, the javadocs of overridden functions are same as corresponding ones in 
Admin.java. Since javadocs are automatically inherited from the interface 
class, we can remove these redundant 100s of lines.


> Remove unused functions and duplicate javadocs from HBaseAdmin 
> ---
>
> Key: HBASE-14769
> URL: https://issues.apache.org/jira/browse/HBASE-14769
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-14769-master-v2.patch, 
> HBASE-14769-master-v3.patch, HBASE-14769-master-v4.patch, 
> HBASE-14769-master-v5.patch, HBASE-14769-master-v6.patch, 
> HBASE-14769-master-v7.patch, HBASE-14769-master-v8.patch, 
> HBASE-14769-master-v9.patch, HBASE-14769-master.patch
>
>
> HBaseAdmin is marked private, so we are removing the functions that are not 
> used anywhere.
> Also, the javadocs of the overridden functions are the same as the 
> corresponding ones in Admin.java. Since javadocs are automatically inherited 
> from the interface, we can remove these hundreds of redundant lines.
> Link to discussion, if it was okay to remove the functions: 
> http://mail-archives.apache.org/mod_mbox/hbase-dev/201512.mbox/%3CCAAjhxrovmK8AYQBA9YJJYBEgTZamav4nOtzrcWsdUiisX69qMA%40mail.gmail.com%3E





[jira] [Updated] (HBASE-14769) Remove unused functions and duplicate javadocs from HBaseAdmin

2015-12-10 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-14769:
-
Release Note: 
- Removes functions from HBaseAdmin which require table name parameter as 
either byte[] or String. Use their counterparts which take TableName instead.
- Removes redundant javadocs from HBaseAdmin as they will be automatically 
inherited from Admin interface.
- HBaseAdmin is marked Audience.private, so removing the functions should have 
been straightforward. However, HBaseTestingUtility, which is marked 
Audience.public, had a public function returning an HBaseAdmin instance, which 
moved the decision into a gray area. After discussion in the community, it was 
decided that removal was okay in this particular case.

> Remove unused functions and duplicate javadocs from HBaseAdmin 
> ---
>
> Key: HBASE-14769
> URL: https://issues.apache.org/jira/browse/HBASE-14769
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-14769-master-v2.patch, 
> HBASE-14769-master-v3.patch, HBASE-14769-master-v4.patch, 
> HBASE-14769-master-v5.patch, HBASE-14769-master-v6.patch, 
> HBASE-14769-master-v7.patch, HBASE-14769-master-v8.patch, 
> HBASE-14769-master-v9.patch, HBASE-14769-master.patch
>
>
> HBaseAdmin is marked private, so we are removing the functions that are not 
> used anywhere.
> Also, the javadocs of the overridden functions are the same as the 
> corresponding ones in Admin.java. Since javadocs are automatically inherited 
> from the interface, we can remove these hundreds of redundant lines.





[jira] [Commented] (HBASE-14946) Don't allow multi's to over run the max result size.

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052354#comment-15052354
 ] 

Hudson commented on HBASE-14946:


SUCCESS: Integrated in HBase-1.2-IT #335 (See 
[https://builds.apache.org/job/HBase-1.2-IT/335/])
HBASE-14946 Don't allow multi's to over run the max result size. (stack: rev 
ac6a57b6b0723f2cdc29fad785d61bb0e6b862b4)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/RetryImmediatelyException.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/MultiActionResultTooLarge.java


> Don't allow multi's to over run the max result size.
> 
>
> Key: HBASE-14946
> URL: https://issues.apache.org/jira/browse/HBASE-14946
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14946.add.missing.annotations.addendum.patch, 
> HBASE-14946-v1.patch, HBASE-14946-v10.patch, HBASE-14946-v11.patch, 
> HBASE-14946-v2.patch, HBASE-14946-v3.patch, HBASE-14946-v5.patch, 
> HBASE-14946-v6.patch, HBASE-14946-v7.patch, HBASE-14946-v8.patch, 
> HBASE-14946-v9.patch, HBASE-14946.patch
>
>
> If a user issues a large list of different gets against a table, we send 
> them along in a single multi. The server un-wraps each get in the multi; 
> while no single get may exceed the size limit, the total might.
> We should protect the server from this. 
> We should batch up on the server side so each RPC is smaller.
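The server-side batching idea in the description can be sketched as follows. This is a minimal illustration of the size-accounting technique, not the actual RSRpcServices change; the class name, method, and 2 MB quota are all hypothetical.

```java
import java.util.List;

// Walk the actions in a multi, accumulate the (estimated) response size,
// and stop once the quota is crossed so the remainder can be retried by
// the client in a later, smaller RPC.
public class MultiSizeLimiter {
    static final long MAX_RESULT_SIZE = 2L * 1024 * 1024; // illustrative quota

    /** Returns the index of the first action that did not fit under the quota. */
    static int firstActionOverQuota(List<Long> actionResultSizes) {
        long accumulated = 0;
        for (int i = 0; i < actionResultSizes.size(); i++) {
            accumulated += actionResultSizes.get(i);
            // Always serve at least one action, even if it alone exceeds the
            // quota, so an oversized single get cannot stall forever.
            if (i > 0 && accumulated > MAX_RESULT_SIZE) {
                return i;
            }
        }
        return actionResultSizes.size(); // everything fit
    }
}
```

The caller would respond with the results served so far plus a marker telling the client to retry the remaining actions immediately.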





[jira] [Commented] (HBASE-14965) Remove un-used hbase-spark in branch-1 +

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052353#comment-15052353
 ] 

Hudson commented on HBASE-14965:


SUCCESS: Integrated in HBase-1.2-IT #335 (See 
[https://builds.apache.org/job/HBase-1.2-IT/335/])
HBASE-14965 Remove un-used hbase-spark in branch-1 (eclark: rev 
7e036b4469c11670fd3e88c9cf68592fc95fa915)
* hbase-spark/src/test/resources/log4j.properties
* hbase-spark/src/test/resources/hbase-site.xml


> Remove un-used hbase-spark in branch-1 +
> 
>
> Key: HBASE-14965
> URL: https://issues.apache.org/jira/browse/HBASE-14965
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 1.2.0, 1.3.0
>
> Attachments: HBASE-14965.0-branch-1.patch
>
>
> Seems like some files for this new feature slipped into an old branch.





[jira] [Commented] (HBASE-14745) Shade the last few dependencies in hbase-shaded-client

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052335#comment-15052335
 ] 

Hudson commented on HBASE-14745:


FAILURE: Integrated in HBase-Trunk_matrix #546 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/546/])
HBASE-14745 Shade the last few dependencies in hbase-shaded-client (eclark: rev 
abb2e95f66191588971c6bba800f6b0dcbd7ad37)
* pom.xml
* hbase-shaded/pom.xml


> Shade the last few dependencies in hbase-shaded-client
> --
>
> Key: HBASE-14745
> URL: https://issues.apache.org/jira/browse/HBASE-14745
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14745-v1.patch, HBASE-14745.patch
>
>
> * junit
> * hadoop common





[jira] [Commented] (HBASE-14960) Fallback to using default RPCControllerFactory if class cannot be loaded

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052337#comment-15052337
 ] 

Hudson commented on HBASE-14960:


FAILURE: Integrated in HBase-Trunk_matrix #546 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/546/])
HBASE-14960 Fallback to using default RPCControllerFactory if class (enis: rev 
cff664c5e286bebaddd93665680fb148783b8e7a)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcControllerFactory.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestRpcControllerFactory.java


> Fallback to using default RPCControllerFactory if class cannot be loaded
> 
>
> Key: HBASE-14960
> URL: https://issues.apache.org/jira/browse/HBASE-14960
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: hbase-14960_v1.patch, hbase-14960_v2.patch, 
> hbase-14960_v3.patch, hbase-14960_v4.patch
>
>
> In Phoenix + HBase clusters, the hbase-site.xml configuration will point to a 
> custom rpc controller factory which is a Phoenix-specific one to configure 
> the priorities for index and system catalog table. 
> However, sometimes these Phoenix-enabled clusters are used from pure-HBase 
> client applications resulting in ClassNotFoundExceptions in application code 
> or MapReduce jobs. Since hbase configuration is shared between 
> Phoenix-clients and HBase clients, having different configurations at the 
> client side is hard. 
> We can instead try to load up the RPCControllerFactory from conf, and if not 
> found, fallback to the default one (in case this is a pure HBase client). In 
> case Phoenix is already in the classpath, it will work as usual. 
> This does not affect the rpc scheduler factory since it is only used at the 
> server side. 
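The fallback behavior the description proposes can be sketched as a class-load attempt with a default on ClassNotFoundException. This is an illustration of the technique only; `ControllerFactoryLoader` and `resolveFactoryClass` are hypothetical names, not the actual RpcControllerFactory API.

```java
// Try to load the configured factory class; if it is not on the classpath
// (a pure-HBase client without the Phoenix jar), fall back to the default.
public class ControllerFactoryLoader {
    static final String DEFAULT_FACTORY =
        "org.apache.hadoop.hbase.ipc.RpcControllerFactory";

    static String resolveFactoryClass(String configuredName) {
        try {
            Class.forName(configuredName);
            return configuredName; // custom factory is present, use it as usual
        } catch (ClassNotFoundException e) {
            // Configured class missing: degrade gracefully to the default
            return DEFAULT_FACTORY;
        }
    }
}
```

With Phoenix on the classpath the configured class loads and nothing changes; without it, the client silently uses the default factory instead of failing.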





[jira] [Commented] (HBASE-14946) Don't allow multi's to over run the max result size.

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052336#comment-15052336
 ] 

Hudson commented on HBASE-14946:


FAILURE: Integrated in HBase-Trunk_matrix #546 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/546/])
HBASE-14946 Don't allow multi's to over run the max result size. (eclark: rev 
48e217a7db8c23501ea4934d28e57684b82d71fb)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/RetryImmediatelyException.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServer.java
* 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSource.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/client/VersionInfoUtil.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/MultiActionResultTooLarge.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionImplementation.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceImpl.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ProcedurePrepareLatch.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcCallContext.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/Result.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultiRespectsLimits.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
HBASE-14946 Don't allow multi's to over run the max result size. (stack: rev 
22b95aebcd7fc742412ab514520008fda5e327de)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/RetryImmediatelyException.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/MultiActionResultTooLarge.java


> Don't allow multi's to over run the max result size.
> 
>
> Key: HBASE-14946
> URL: https://issues.apache.org/jira/browse/HBASE-14946
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14946.add.missing.annotations.addendum.patch, 
> HBASE-14946-v1.patch, HBASE-14946-v10.patch, HBASE-14946-v11.patch, 
> HBASE-14946-v2.patch, HBASE-14946-v3.patch, HBASE-14946-v5.patch, 
> HBASE-14946-v6.patch, HBASE-14946-v7.patch, HBASE-14946-v8.patch, 
> HBASE-14946-v9.patch, HBASE-14946.patch
>
>
> If a user issues a large list of different gets against a table, we send 
> them along in a single multi. The server un-wraps each get in the multi; 
> while no single get may exceed the size limit, the total might.
> We should protect the server from this. 
> We should batch up on the server side so each RPC is smaller.





[jira] [Commented] (HBASE-14745) Shade the last few dependencies in hbase-shaded-client

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052311#comment-15052311
 ] 

Hudson commented on HBASE-14745:


SUCCESS: Integrated in HBase-1.3-IT #368 (See 
[https://builds.apache.org/job/HBase-1.3-IT/368/])
HBASE-14745 Shade the last few dependencies in hbase-shaded-client (eclark: rev 
6163fb965d4b50f09c21992ec22fd3745ffb9d3f)
* hbase-shaded/pom.xml
* pom.xml


> Shade the last few dependencies in hbase-shaded-client
> --
>
> Key: HBASE-14745
> URL: https://issues.apache.org/jira/browse/HBASE-14745
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14745-v1.patch, HBASE-14745.patch
>
>
> * junit
> * hadoop common





[jira] [Commented] (HBASE-14960) Fallback to using default RPCControllerFactory if class cannot be loaded

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052313#comment-15052313
 ] 

Hudson commented on HBASE-14960:


SUCCESS: Integrated in HBase-1.3-IT #368 (See 
[https://builds.apache.org/job/HBase-1.3-IT/368/])
HBASE-14960 Fallback to using default RPCControllerFactory if class (enis: rev 
aeb6ae97d39e86e711a142975c6d73185c749bdd)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcControllerFactory.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestRpcControllerFactory.java


> Fallback to using default RPCControllerFactory if class cannot be loaded
> 
>
> Key: HBASE-14960
> URL: https://issues.apache.org/jira/browse/HBASE-14960
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: hbase-14960_v1.patch, hbase-14960_v2.patch, 
> hbase-14960_v3.patch, hbase-14960_v4.patch
>
>
> In Phoenix + HBase clusters, the hbase-site.xml configuration will point to a 
> custom rpc controller factory which is a Phoenix-specific one to configure 
> the priorities for index and system catalog table. 
> However, sometimes these Phoenix-enabled clusters are used from pure-HBase 
> client applications resulting in ClassNotFoundExceptions in application code 
> or MapReduce jobs. Since hbase configuration is shared between 
> Phoenix-clients and HBase clients, having different configurations at the 
> client side is hard. 
> We can instead try to load up the RPCControllerFactory from conf, and if not 
> found, fallback to the default one (in case this is a pure HBase client). In 
> case Phoenix is already in the classpath, it will work as usual. 
> This does not affect the rpc scheduler factory since it is only used at the 
> server side. 





[jira] [Commented] (HBASE-14965) Remove un-used hbase-spark in branch-1 +

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052310#comment-15052310
 ] 

Hudson commented on HBASE-14965:


SUCCESS: Integrated in HBase-1.3-IT #368 (See 
[https://builds.apache.org/job/HBase-1.3-IT/368/])
HBASE-14965 Remove un-used hbase-spark in branch-1 (eclark: rev 
1a5664060ae3b62cb4bf649598c7bb93cb564bba)
* hbase-spark/src/test/resources/log4j.properties
* hbase-spark/src/test/resources/hbase-site.xml


> Remove un-used hbase-spark in branch-1 +
> 
>
> Key: HBASE-14965
> URL: https://issues.apache.org/jira/browse/HBASE-14965
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 1.2.0, 1.3.0
>
> Attachments: HBASE-14965.0-branch-1.patch
>
>
> Seems like some files for this new feature slipped into an old branch.





[jira] [Commented] (HBASE-14946) Don't allow multi's to over run the max result size.

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052312#comment-15052312
 ] 

Hudson commented on HBASE-14946:


SUCCESS: Integrated in HBase-1.3-IT #368 (See 
[https://builds.apache.org/job/HBase-1.3-IT/368/])
HBASE-14946 Don't allow multi's to over run the max result size. (stack: rev 
2f7d5e6354ca2ca5cbecae7bdd5df79d50848551)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/RetryImmediatelyException.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/MultiActionResultTooLarge.java


> Don't allow multi's to over run the max result size.
> 
>
> Key: HBASE-14946
> URL: https://issues.apache.org/jira/browse/HBASE-14946
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14946.add.missing.annotations.addendum.patch, 
> HBASE-14946-v1.patch, HBASE-14946-v10.patch, HBASE-14946-v11.patch, 
> HBASE-14946-v2.patch, HBASE-14946-v3.patch, HBASE-14946-v5.patch, 
> HBASE-14946-v6.patch, HBASE-14946-v7.patch, HBASE-14946-v8.patch, 
> HBASE-14946-v9.patch, HBASE-14946.patch
>
>
> If a user issues a large list of different gets against a table, we send 
> them along in a single multi. The server un-wraps each get in the multi; 
> while no single get may exceed the size limit, the total might.
> We should protect the server from this. 
> We should batch up on the server side so each RPC is smaller.





[jira] [Commented] (HBASE-14953) HBaseInterClusterReplicationEndpoint: Do not retry the whole batch of edits in case of RejectedExecutionException

2015-12-10 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052263#comment-15052263
 ] 

Lars Hofhansl commented on HBASE-14953:
---

If no objection, I'm going to commit tomorrow.

> HBaseInterClusterReplicationEndpoint: Do not retry the whole batch of edits 
> in case of RejectedExecutionException
> -
>
> Key: HBASE-14953
> URL: https://issues.apache.org/jira/browse/HBASE-14953
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
>Priority: Critical
> Attachments: HBASE-14953-V1.patch, HBASE-14953-V2.patch
>
>
> When the WAL provider is set to multiwal, the ReplicationSource has multiple 
> worker threads submitting batches to HBaseInterClusterReplicationEndpoint. In 
> such a scenario it is quite common to encounter RejectedExecutionException, 
> because shipping edits to the peer cluster takes much longer than reading 
> edits from the source and submitting more batches to the endpoint. 
> The logs fill up with warnings due to this very exception.
> Since we subdivide batches before actually shipping them, we don't need to 
> fail and resend the whole batch if one of the sub-batches fails with 
> RejectedExecutionException. Rather, we should just retry the failed 
> sub-batches. 
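The retry policy in the description can be sketched like this: ship every sub-batch, collect only the ones the executor rejects, and retry just those. This is an illustration of the idea, not the actual HBaseInterClusterReplicationEndpoint code; the names and the `Consumer` stand-in for the executor submit are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.RejectedExecutionException;
import java.util.function.Consumer;

// Retry only the sub-batches whose submission was rejected, instead of
// failing and resending the whole batch.
public class SubBatchRetry {
    static <T> void shipWithRetry(List<List<T>> subBatches, Consumer<List<T>> shipper) {
        List<List<T>> pending = new ArrayList<>(subBatches);
        while (!pending.isEmpty()) {
            List<List<T>> rejected = new ArrayList<>();
            for (List<T> sub : pending) {
                try {
                    shipper.accept(sub);
                } catch (RejectedExecutionException e) {
                    rejected.add(sub); // keep only this sub-batch for retry
                }
            }
            pending = rejected;
        }
    }
}
```

A real implementation would also bound the retries and back off between rounds; the point here is just that successful sub-batches are never resent.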





[jira] [Commented] (HBASE-14953) HBaseInterClusterReplicationEndpoint: Do not retry the whole batch of edits in case of RejectedExecutionException

2015-12-10 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052262#comment-15052262
 ] 

Lars Hofhansl commented on HBASE-14953:
---

+1 on V2.
I assume it still fixes the issue you've seen? :)

> HBaseInterClusterReplicationEndpoint: Do not retry the whole batch of edits 
> in case of RejectedExecutionException
> -
>
> Key: HBASE-14953
> URL: https://issues.apache.org/jira/browse/HBASE-14953
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
>Priority: Critical
> Attachments: HBASE-14953-V1.patch, HBASE-14953-V2.patch
>
>
> When the WAL provider is set to multiwal, the ReplicationSource has multiple 
> worker threads submitting batches to HBaseInterClusterReplicationEndpoint. In 
> such a scenario it is quite common to encounter RejectedExecutionException, 
> because shipping edits to the peer cluster takes much longer than reading 
> edits from the source and submitting more batches to the endpoint. 
> The logs fill up with warnings due to this very exception.
> Since we subdivide batches before actually shipping them, we don't need to 
> fail and resend the whole batch if one of the sub-batches fails with 
> RejectedExecutionException. Rather, we should just retry the failed 
> sub-batches. 





[jira] [Commented] (HBASE-14951) Make hbase.regionserver.maxlogs obsolete

2015-12-10 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052258#comment-15052258
 ] 

Lars Hofhansl commented on HBASE-14951:
---

Yeah. Would love to get rid of this config option.

Where does the *2 come from in the formula? Is it just to account for a 
WALEdit being larger than a memstore Cell?

> Make hbase.regionserver.maxlogs obsolete
> 
>
> Key: HBASE-14951
> URL: https://issues.apache.org/jira/browse/HBASE-14951
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14951-v1.patch, HBASE-14951-v2.patch
>
>
> There was a discussion in HBASE-14388 related to the maximum number of log 
> files. There was agreement that we should calculate this number in code but 
> still honor the user's setting. 
> The maximum number of log files is now calculated as follows:
>  maxLogs = HEAP_SIZE * memstoreRatio * 2 / LogRollSize
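Spelled out with concrete numbers, the formula behaves as below. The heap size, memstore ratio, and roll size used here are illustrative defaults chosen for the example, not values taken from the patch.

```java
// maxLogs = HEAP_SIZE * memstoreRatio * 2 / LogRollSize, computed in bytes.
// For example: 16 GiB heap, ratio 0.4, 128 MiB roll size -> about 102 logs.
public class MaxLogsCalc {
    static long maxLogs(long heapBytes, double memstoreRatio, long logRollSizeBytes) {
        return Math.round((double) heapBytes * memstoreRatio * 2.0 / logRollSizeBytes);
    }
}
```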





[jira] [Commented] (HBASE-14949) Skip duplicate entries when replay WAL.

2015-12-10 Thread Heng Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052248#comment-15052248
 ] 

Heng Chen commented on HBASE-14949:
---

{quote}
 And I checked the code again, for increment and append, we first get the row 
from region, do increment or append, and log the entire cell out as WAL, so it 
is also safe to replay it multiple times.
{quote}
Thanks [~stack] [~Apache9] for your replies. So there is no need for this issue.

{quote}
We should test if things go right when replaying same WAL entry multiple times. 
And I think we could do it along with the fixing of WAL file name conflict 
after splitting.
{quote}
OK, it seems we could make some changes based on the first patch.


> Skip duplicate entries when replay WAL.
> ---
>
> Key: HBASE-14949
> URL: https://issues.apache.org/jira/browse/HBASE-14949
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Heng Chen
> Attachments: HBASE-14949.patch, HBASE-14949_v1.patch
>
>
> Per the HBASE-14004 design, there will be duplicate entries in different 
> WALs. This happens when an hflush fails: we close the old WAL at the 'acked 
> hflushed' length, then open a new WAL and write the unacked hflushed entries 
> into it.
> So there may be some overlap between the old WAL and the new WAL.
> We should skip the duplicate entries on replay. I think it does no harm to 
> the current logic, so maybe we do it first. 
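The skip-duplicates idea can be sketched with per-region sequence-id tracking during replay: an entry whose sequence id is not greater than the last one applied for its region is a re-logged duplicate from the old WAL. This is an illustration of the technique under that assumption, not the actual patch; the class and method names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Track the highest sequence id applied per region during replay and
// drop any entry at or below it, since the overlap between the old and
// new WAL re-logs the same edits with the same sequence ids.
public class ReplayDedup {
    private final Map<String, Long> lastAppliedSeqId = new HashMap<>();

    /** Returns true if the entry should be applied, false if it is a duplicate. */
    boolean shouldApply(String regionName, long seqId) {
        long last = lastAppliedSeqId.getOrDefault(regionName, -1L);
        if (seqId <= last) {
            return false; // already applied from the old WAL; skip
        }
        lastAppliedSeqId.put(regionName, seqId);
        return true;
    }
}
```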





[jira] [Commented] (HBASE-14963) Remove Guava dependency from HBase client code

2015-12-10 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052227#comment-15052227
 ] 

Devaraj Das commented on HBASE-14963:
-

[~busbey] I may have overlooked something. Let me get back to you (I will 
resolve this issue if it has already been addressed).

> Remove Guava dependency from HBase client code
> --
>
> Key: HBASE-14963
> URL: https://issues.apache.org/jira/browse/HBASE-14963
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Attachments: no-stopwatch.txt
>
>
> We ran into an issue where an application bundled its own Guava (and that 
> happened to be in the classpath first) and HBase's MetaTableLocator threw an 
> exception due to the fact that Stopwatch's constructor wasn't compatible... 
> Might be better to not depend on Stopwatch at all in MetaTableLocator since 
> the functionality is easily doable without.
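The Stopwatch-free alternative the description suggests can be done with plain `System.nanoTime()`, which sidesteps any Guava version incompatibility. `ElapsedTimer` is a hypothetical name for illustration, not the actual MetaTableLocator change.

```java
import java.util.concurrent.TimeUnit;

// Minimal elapsed-time bookkeeping without Guava's Stopwatch: record the
// start in nanoseconds and convert the delta on demand.
public class ElapsedTimer {
    private final long startNanos = System.nanoTime();

    long elapsedMillis() {
        return TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNanos);
    }
}
```

Since only the JDK is involved, an application bundling its own Guava version can no longer break the client's timing code.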





[jira] [Updated] (HBASE-14534) Bump yammer/coda/dropwizard metrics dependency version

2015-12-10 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-14534:

Status: Open  (was: Patch Available)

> Bump yammer/coda/dropwizard metrics dependency version
> --
>
> Key: HBASE-14534
> URL: https://issues.apache.org/jira/browse/HBASE-14534
> Project: HBase
>  Issue Type: Task
>Reporter: Nick Dimiduk
>Assignee: Mikhail Antonov
>Priority: Minor
> Attachments: HBASE-14534-v2.patch, HBASE-14534.patch, wip.patch
>
>
> After HBASE-12911 lands, let's update our dependency to the latest 
> incarnation of this library. I guess they're now calling it Dropwizard 
> Metrics.





[jira] [Updated] (HBASE-14534) Bump yammer/coda/dropwizard metrics dependency version

2015-12-10 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-14534:

Status: Patch Available  (was: Open)

> Bump yammer/coda/dropwizard metrics dependency version
> --
>
> Key: HBASE-14534
> URL: https://issues.apache.org/jira/browse/HBASE-14534
> Project: HBase
>  Issue Type: Task
>Reporter: Nick Dimiduk
>Assignee: Mikhail Antonov
>Priority: Minor
> Attachments: HBASE-14534-v2.patch, HBASE-14534.patch, wip.patch
>
>
> After HBASE-12911 lands, let's update our dependency to the latest 
> incarnation of this library. I guess they're now calling it Dropwizard 
> Metrics.





[jira] [Updated] (HBASE-14534) Bump yammer/coda/dropwizard metrics dependency version

2015-12-10 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-14534:

Attachment: HBASE-14534-v2.patch

Updated patch: fixed some long lines, fixed the metric name in the HTTP output, 
and rebased to current master. Verified the HTTP output on the web page and the 
metrics dump there.

Question: how can we better verify that the metrics didn't get changed anywhere 
(besides just looking with one's eyes)?

Also, it looks like Dropwizard made some changes in their approach to metrics. 
One of them: by default, histograms now use exponential decay rather than 
uniform sampling. Is it ok for us to follow this approach here?

> Bump yammer/coda/dropwizard metrics dependency version
> --
>
> Key: HBASE-14534
> URL: https://issues.apache.org/jira/browse/HBASE-14534
> Project: HBase
>  Issue Type: Task
>Reporter: Nick Dimiduk
>Assignee: Mikhail Antonov
>Priority: Minor
> Attachments: HBASE-14534-v2.patch, HBASE-14534.patch, wip.patch
>
>
> After HBASE-12911 lands, let's update our dependency to the latest 
> incarnation of this library. I guess they're now calling it Dropwizard 
> Metrics.





[jira] [Commented] (HBASE-14966) TestInterfaceAudienceAnnotations doesn't work on QA bot

2015-12-10 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14966?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052207#comment-15052207
 ] 

stack commented on HBASE-14966:
---

broke!

> TestInterfaceAudienceAnnotations doesn't work on QA bot
> ---
>
> Key: HBASE-14966
> URL: https://issues.apache.org/jira/browse/HBASE-14966
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>
> The test reports:
> {code}
> /home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build%402/hbase/hbase-common/target/classes/org/apache/hadoop/hbase
>  does not exist
> {code}
> However it then passes.





[jira] [Created] (HBASE-14966) TestInterfaceAudienceAnnotations doesn't work on QA bot

2015-12-10 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-14966:
-

 Summary: TestInterfaceAudienceAnnotations doesn't work on QA bot
 Key: HBASE-14966
 URL: https://issues.apache.org/jira/browse/HBASE-14966
 Project: HBase
  Issue Type: Bug
Reporter: Elliott Clark


The test reports:
{code}
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build%402/hbase/hbase-common/target/classes/org/apache/hadoop/hbase
 does not exist
{code}

However it then passes.





[jira] [Updated] (HBASE-14946) Don't allow multi's to over run the max result size.

2015-12-10 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14946:
--
Release Note: 
The HBase region server will now send a chunk of get responses to a client if 
the total response size is too large. This will only be done for clients 1.2.0 
and beyond. Older clients by default will have the old behavior.

This patch is for the case where the basic flow is like this:

I want to get a single column from lots of rows, so I create a list of gets 
and send them to table.get(List<Get>). If the regions for that table are 
spread out, then those requests get chunked out to all the region servers and 
no one regionserver gets too many. However, if one region server contains lots 
of regions for that table, then a multi action can contain lots of gets. No 
single get is too onerous, but the regionserver won't return until every get 
is complete. So if there are thousands of gets sent in one multi, the 
regionserver can retain lots of data in one thread.

  was:The HBase region server will now send a chunk of get responses to a 
client if the total response size is too large. This will only be done for 
clients 1.2.0 and beyond. Older clients by default will have the old behavior.


> Don't allow multi's to over run the max result size.
> 
>
> Key: HBASE-14946
> URL: https://issues.apache.org/jira/browse/HBASE-14946
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14946.add.missing.annotations.addendum.patch, 
> HBASE-14946-v1.patch, HBASE-14946-v10.patch, HBASE-14946-v11.patch, 
> HBASE-14946-v2.patch, HBASE-14946-v3.patch, HBASE-14946-v5.patch, 
> HBASE-14946-v6.patch, HBASE-14946-v7.patch, HBASE-14946-v8.patch, 
> HBASE-14946-v9.patch, HBASE-14946.patch
>
>
> If a user puts a list of tons of different gets into a table we will then 
> send them along in a multi. The server un-wraps each get in the multi. While 
> no single get may be over the size limit the total might be.
> We should protect the server from this. 
> We should batch up on the server side so each RPC is smaller.
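The batching idea can be sketched with plain Java (illustrative only, not the actual RSRpcServices code; longs stand in for serialized result sizes): cut the response into chunks once the accumulated size crosses a limit, so no single response retains too much data.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: split a stream of result sizes into chunks whose
// cumulative size stays at or under maxChunkBytes.
public class ChunkedResponses {

    static List<List<Long>> chunkBySize(List<Long> resultSizes, long maxChunkBytes) {
        List<List<Long>> chunks = new ArrayList<>();
        List<Long> current = new ArrayList<>();
        long running = 0;
        for (long size : resultSizes) {
            if (!current.isEmpty() && running + size > maxChunkBytes) {
                chunks.add(current);          // flush the full chunk
                current = new ArrayList<>();
                running = 0;
            }
            current.add(size);
            running += size;
        }
        if (!current.isEmpty()) {
            chunks.add(current);              // flush the final partial chunk
        }
        return chunks;
    }

    public static void main(String[] args) {
        // Ten 40-byte results with a 100-byte cap -> five chunks of two each.
        List<Long> sizes = new ArrayList<>();
        for (int i = 0; i < 10; i++) sizes.add(40L);
        System.out.println(chunkBySize(sizes, 100L).size()); // 5
    }
}
```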





[jira] [Commented] (HBASE-14946) Don't allow multi's to over run the max result size.

2015-12-10 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052199#comment-15052199
 ] 

stack commented on HBASE-14946:
---

np

Updated the release note because it seemed a pity having your nice description 
of the issue that is fixed buried deep down.

> Don't allow multi's to over run the max result size.
> 
>
> Key: HBASE-14946
> URL: https://issues.apache.org/jira/browse/HBASE-14946
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14946.add.missing.annotations.addendum.patch, 
> HBASE-14946-v1.patch, HBASE-14946-v10.patch, HBASE-14946-v11.patch, 
> HBASE-14946-v2.patch, HBASE-14946-v3.patch, HBASE-14946-v5.patch, 
> HBASE-14946-v6.patch, HBASE-14946-v7.patch, HBASE-14946-v8.patch, 
> HBASE-14946-v9.patch, HBASE-14946.patch
>
>
> If a user puts a list of tons of different gets into a table we will then 
> send them along in a multi. The server un-wraps each get in the multi. While 
> no single get may be over the size limit the total might be.
> We should protect the server from this. 
> We should batch up on the server side so each RPC is smaller.





[jira] [Commented] (HBASE-14946) Don't allow multi's to over run the max result size.

2015-12-10 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052196#comment-15052196
 ] 

Elliott Clark commented on HBASE-14946:
---

Thanks [~stack]

> Don't allow multi's to over run the max result size.
> 
>
> Key: HBASE-14946
> URL: https://issues.apache.org/jira/browse/HBASE-14946
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14946.add.missing.annotations.addendum.patch, 
> HBASE-14946-v1.patch, HBASE-14946-v10.patch, HBASE-14946-v11.patch, 
> HBASE-14946-v2.patch, HBASE-14946-v3.patch, HBASE-14946-v5.patch, 
> HBASE-14946-v6.patch, HBASE-14946-v7.patch, HBASE-14946-v8.patch, 
> HBASE-14946-v9.patch, HBASE-14946.patch
>
>
> If a user puts a list of tons of different gets into a table we will then 
> send them along in a multi. The server un-wraps each get in the multi. While 
> no single get may be over the size limit the total might be.
> We should protect the server from this. 
> We should batch up on the server side so each RPC is smaller.





[jira] [Updated] (HBASE-14946) Don't allow multi's to over run the max result size.

2015-12-10 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14946:
--
Attachment: 14946.add.missing.annotations.addendum.patch

Addendum to address failing test that complains of missing annotations.

> Don't allow multi's to over run the max result size.
> 
>
> Key: HBASE-14946
> URL: https://issues.apache.org/jira/browse/HBASE-14946
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: 14946.add.missing.annotations.addendum.patch, 
> HBASE-14946-v1.patch, HBASE-14946-v10.patch, HBASE-14946-v11.patch, 
> HBASE-14946-v2.patch, HBASE-14946-v3.patch, HBASE-14946-v5.patch, 
> HBASE-14946-v6.patch, HBASE-14946-v7.patch, HBASE-14946-v8.patch, 
> HBASE-14946-v9.patch, HBASE-14946.patch
>
>
> If a user puts a list of tons of different gets into a table we will then 
> send them along in a multi. The server un-wraps each get in the multi. While 
> no single get may be over the size limit the total might be.
> We should protect the server from this. 
> We should batch up on the server side so each RPC is smaller.





[jira] [Commented] (HBASE-14951) Make hbase.regionserver.maxlogs obsolete

2015-12-10 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052187#comment-15052187
 ] 

stack commented on HBASE-14951:
---

This is an interesting change ( [~lhofhansl]... this is something you've been 
on about forever.)

The nice table should go into the release notes.

Also, any chance of a longer exposition on why this formula? (I grep -r 
memstoreSizeRatio but find nothing... am I doing it wrong?) Add this to the 
release notes too, because we need to hoist it up into the documentation for 
2.0.

Suggest marking this an incompatible change because it changes a fundamental 
behavior.

On this:

bq. .. It was an agreement that we should calculate this number in a code 
but still need to honor user's setting

... looking at the patch, it seems to do as expected with a nice warning.

The patch seems good. It's a fundamental change in our behavior, so it needs 
the fat release note. With that, +1 (if the explanation makes sense -- smile)


> Make hbase.regionserver.maxlogs obsolete
> 
>
> Key: HBASE-14951
> URL: https://issues.apache.org/jira/browse/HBASE-14951
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14951-v1.patch, HBASE-14951-v2.patch
>
>
> There was a discussion in HBASE-14388 related to the maximum number of log 
> files. There was agreement that we should calculate this number in code but 
> still honor the user's setting. 
> The maximum number of log files is now calculated as follows:
>  maxLogs = HEAP_SIZE * memstoreRatio * 2 / LogRollSize
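A worked instance of the formula above, with illustrative constants (a 16 GB heap, 0.4 memstore ratio, and 128 MB log roll size; these are example values, not numbers taken from the patch):

```java
// Illustrative worked example of: maxLogs = HEAP_SIZE * memstoreRatio * 2 / LogRollSize
public class MaxLogsExample {

    static long maxLogs(long heapBytes, double memstoreRatio, long logRollSizeBytes) {
        return (long) (heapBytes * memstoreRatio * 2 / logRollSizeBytes);
    }

    public static void main(String[] args) {
        long heap = 16L * 1024 * 1024 * 1024;   // 16 GB heap
        long rollSize = 128L * 1024 * 1024;     // 128 MB WAL roll size
        System.out.println(maxLogs(heap, 0.4, rollSize)); // 102
    }
}
```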





[jira] [Commented] (HBASE-14468) Compaction improvements: FIFO compaction policy

2015-12-10 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052188#comment-15052188
 ] 

stack commented on HBASE-14468:
---

Any chance to look at above? Thanks.

> Compaction improvements: FIFO compaction policy
> ---
>
> Key: HBASE-14468
> URL: https://issues.apache.org/jira/browse/HBASE-14468
> Project: HBase
>  Issue Type: Improvement
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14468-v1.patch, HBASE-14468-v10.patch, 
> HBASE-14468-v2.patch, HBASE-14468-v3.patch, HBASE-14468-v4.patch, 
> HBASE-14468-v5.patch, HBASE-14468-v6.patch, HBASE-14468-v7.patch, 
> HBASE-14468-v8.patch, HBASE-14468-v9.patch
>
>
> h2. FIFO Compaction
> h3. Introduction
> FIFO compaction policy selects only files which have all cells expired. The 
> column family MUST have non-default TTL. 
> Essentially, FIFO compactor does only one job: collects expired store files. 
> These are some applications which could benefit the most:
> # Use it for very high volume raw data which has low TTL and which is the 
> source of other data (after additional processing). Example: Raw 
> time-series vs. time-based rollup aggregates and compacted time-series. We 
> collect raw time-series and store them into a CF with FIFO compaction policy; 
> periodically we run a task which creates rollup aggregates and compacts the 
> time-series, and the original raw data can be discarded after that.
> # Use it for data which can be kept entirely in a block cache (RAM/SSD). 
> Say we have a local SSD (1TB) which we can use as a block cache. No need for 
> compaction of the raw data at all.
> Because we do not do any real compaction, we do not use CPU and IO (disk and 
> network), and we do not evict hot data from the block cache. The result: 
> improved throughput and latency for both writes and reads.
> See: https://github.com/facebook/rocksdb/wiki/FIFO-compaction-style
> h3. To enable FIFO compaction policy
> For table:
> {code}
> HTableDescriptor desc = new HTableDescriptor(tableName);
> 
> desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
>   FIFOCompactionPolicy.class.getName());
> {code} 
> For CF:
> {code}
> HColumnDescriptor desc = new HColumnDescriptor(family);
> 
> desc.setConfiguration(DefaultStoreEngine.DEFAULT_COMPACTION_POLICY_CLASS_KEY, 
>   FIFOCompactionPolicy.class.getName());
> {code}
> Although region splitting is supported, for optimal performance it should be 
> disabled, either by explicitly setting DisabledRegionSplitPolicy or by 
> setting ConstantSizeRegionSplitPolicy with a very large max region size. You 
> will also have to increase the store's blocking file number, 
> *hbase.hstore.blockingStoreFiles*, to a very large value.
>  
> h3. Limitations
> Do not use FIFO compaction if :
> * Table/CF has MIN_VERSION > 0
> * Table/CF has TTL = FOREVER (HColumnDescriptor.DEFAULT_TTL)





[jira] [Commented] (HBASE-14949) Skip duplicate entries when replay WAL.

2015-12-10 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052182#comment-15052182
 ] 

Duo Zhang commented on HBASE-14949:
---

{quote}
Should we add a test that asserts this finding/expectation?
{quote}
Agree. We should test that things go right when replaying the same WAL entry 
multiple times. And I think we could do it along with the fix for the WAL 
file name conflict after splitting.

> Skip duplicate entries when replay WAL.
> ---
>
> Key: HBASE-14949
> URL: https://issues.apache.org/jira/browse/HBASE-14949
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Heng Chen
> Attachments: HBASE-14949.patch, HBASE-14949_v1.patch
>
>
> As per the HBASE-14004 design, there will be duplicate entries in different 
> WALs. This happens when an hflush fails: we close the old WAL at the 'acked 
> hflushed' length, then open a new WAL and write the unacked hflushed entries 
> into it.
> So there may be some overlap between the old WAL and the new WAL.
> We should skip the duplicate entries on replay. I think it does no harm to 
> current logic, so maybe we do it first. 





[jira] [Commented] (HBASE-14960) Fallback to using default RPCControllerFactory if class cannot be loaded

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052157#comment-15052157
 ] 

Hudson commented on HBASE-14960:


FAILURE: Integrated in HBase-1.3 #431 (See 
[https://builds.apache.org/job/HBase-1.3/431/])
HBASE-14960 Fallback to using default RPCControllerFactory if class (enis: rev 
aeb6ae97d39e86e711a142975c6d73185c749bdd)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestRpcControllerFactory.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcControllerFactory.java


> Fallback to using default RPCControllerFactory if class cannot be loaded
> 
>
> Key: HBASE-14960
> URL: https://issues.apache.org/jira/browse/HBASE-14960
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: hbase-14960_v1.patch, hbase-14960_v2.patch, 
> hbase-14960_v3.patch, hbase-14960_v4.patch
>
>
> In Phoenix + HBase clusters, the hbase-site.xml configuration will point to a 
> custom rpc controller factory which is a Phoenix-specific one to configure 
> the priorities for index and system catalog table. 
> However, sometimes these Phoenix-enabled clusters are used from pure-HBase 
> client applications resulting in ClassNotFoundExceptions in application code 
> or MapReduce jobs. Since hbase configuration is shared between 
> Phoenix-clients and HBase clients, having different configurations at the 
> client side is hard. 
> We can instead try to load up the RPCControllerFactory from conf, and if not 
> found, fallback to the default one (in case this is a pure HBase client). In 
> case Phoenix is already in the classpath, it will work as usual. 
> This does not affect the rpc scheduler factory since it is only used at the 
> server side. 
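The fallback pattern described above can be sketched as follows (illustrative class names and plain JDK reflection; not the actual RpcControllerFactory code):

```java
// Illustrative sketch: load the configured class reflectively and fall back
// to a known default when the class is absent from the classpath.
public class FactoryLoader {

    static Class<?> loadOrDefault(String configuredClassName, Class<?> fallback) {
        try {
            return Class.forName(configuredClassName);
        } catch (ClassNotFoundException e) {
            // e.g. a pure-HBase client without Phoenix on the classpath
            return fallback;
        }
    }

    public static void main(String[] args) {
        // "org.example.MissingFactory" is a hypothetical, absent class.
        Class<?> c = loadOrDefault("org.example.MissingFactory", Object.class);
        System.out.println(c.getName()); // java.lang.Object
    }
}
```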





[jira] [Commented] (HBASE-14946) Don't allow multi's to over run the max result size.

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052156#comment-15052156
 ] 

Hudson commented on HBASE-14946:


FAILURE: Integrated in HBase-1.3 #431 (See 
[https://builds.apache.org/job/HBase-1.3/431/])
HBASE-14946 Don't allow multi's to over run the max result size. (eclark: rev 
8508dd07ff8038f6df192087c308b816baeac29d)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/RetryImmediatelyException.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ProcedurePrepareLatch.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceImpl.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServer.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/MultiActionResultTooLarge.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSource.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultiRespectsLimits.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcCallContext.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/Result.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/client/VersionInfoUtil.java


> Don't allow multi's to over run the max result size.
> 
>
> Key: HBASE-14946
> URL: https://issues.apache.org/jira/browse/HBASE-14946
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14946-v1.patch, HBASE-14946-v10.patch, 
> HBASE-14946-v11.patch, HBASE-14946-v2.patch, HBASE-14946-v3.patch, 
> HBASE-14946-v5.patch, HBASE-14946-v6.patch, HBASE-14946-v7.patch, 
> HBASE-14946-v8.patch, HBASE-14946-v9.patch, HBASE-14946.patch
>
>
> If a user puts a list of tons of different gets into a table we will then 
> send them along in a multi. The server un-wraps each get in the multi. While 
> no single get may be over the size limit the total might be.
> We should protect the server from this. 
> We should batch up on the server side so each RPC is smaller.





[jira] [Commented] (HBASE-14965) Remove un-used hbase-spark in branch-1 +

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052154#comment-15052154
 ] 

Hudson commented on HBASE-14965:


FAILURE: Integrated in HBase-1.3 #431 (See 
[https://builds.apache.org/job/HBase-1.3/431/])
HBASE-14965 Remove un-used hbase-spark in branch-1 (eclark: rev 
1a5664060ae3b62cb4bf649598c7bb93cb564bba)
* hbase-spark/src/test/resources/log4j.properties
* hbase-spark/src/test/resources/hbase-site.xml


> Remove un-used hbase-spark in branch-1 +
> 
>
> Key: HBASE-14965
> URL: https://issues.apache.org/jira/browse/HBASE-14965
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 1.2.0, 1.3.0
>
> Attachments: HBASE-14965.0-branch-1.patch
>
>
> Seems like some files for this new feature slipped in to an old branch.





[jira] [Commented] (HBASE-14745) Shade the last few dependencies in hbase-shaded-client

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052155#comment-15052155
 ] 

Hudson commented on HBASE-14745:


FAILURE: Integrated in HBase-1.3 #431 (See 
[https://builds.apache.org/job/HBase-1.3/431/])
HBASE-14745 Shade the last few dependencies in hbase-shaded-client (eclark: rev 
6163fb965d4b50f09c21992ec22fd3745ffb9d3f)
* pom.xml
* hbase-shaded/pom.xml


> Shade the last few dependencies in hbase-shaded-client
> --
>
> Key: HBASE-14745
> URL: https://issues.apache.org/jira/browse/HBASE-14745
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14745-v1.patch, HBASE-14745.patch
>
>
> * junit
> * hadoop common





[jira] [Commented] (HBASE-8676) hbase.hstore.compaction.max.size parameter has no effect in minor compaction

2015-12-10 Thread wangyongqiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052144#comment-15052144
 ] 

wangyongqiang commented on HBASE-8676:
--

We can skip all large files in the file list, not just the first one.
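The suggestion can be sketched like this (plain Java, illustrative only; longs stand in for store file sizes, and the method name is hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: drop every file over the configured max compaction
// size from the candidate list, not only a leading run of large files.
public class SkipLargeFiles {

    static List<Long> selectCandidates(List<Long> fileSizes, long maxSize) {
        List<Long> keep = new ArrayList<>();
        for (long size : fileSizes) {
            if (size <= maxSize) {
                keep.add(size); // skip any file over the cap, wherever it sits
            }
        }
        return keep;
    }

    public static void main(String[] args) {
        List<Long> sizes = new ArrayList<>(List.of(100L, 500L, 90L));
        System.out.println(selectCandidates(sizes, 200L)); // [100, 90]
    }
}
```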






> hbase.hstore.compaction.max.size parameter has no effect in minor compaction
> 
>
> Key: HBASE-8676
> URL: https://issues.apache.org/jira/browse/HBASE-8676
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 0.94.7
>Reporter: Calvin Qu
>
> In hbase-site.xml, I set (10G):
> {code}
> <property>
>   <name>hbase.hstore.compaction.max.size</name>
>   <value>10737418240</value>
> </property>
> {code}
> but in the regionserver's log, a minor compaction deals with 10 HFiles, 
> including a 220G HFile larger than hbase.hstore.compaction.max.size=10G.
> 2013-06-03 10:27:51,147 DEBUG org.apache.hadoop.hbase.regionserver.Store: 
> 8b6a4d4aae3099730b353183cf754ec4 - t: Initiating minorcompaction
> 2013-06-03 10:27:51,149 INFO org.apache.hadoop.hbase.regionserver.HRegion: 
> Starting compaction on t in region 
> transdb,1651763699103243149223370692144515807,1369372490168.8b6a4d4aae3099730b353183cf754ec4.
> 2013-06-03 10:27:51,150 DEBUG 
> org.apache.hadoop.hbase.regionserver.CompactSplitThread: Large Compaction 
> requested: 
> regionName=transdb,1651763699103243149223370692144515807,1369372490168.8b6a4d4aae3099730b353183cf754ec4.,
>  storeName=t, fileCount=10, fileSize=234.4g (106.0m, 2.0g, 2.1g, 10.7m, 1.7g, 
> 2.1g, 1.9g, 2.0g, 932.8m, 221.5g), priority=-110, time=2671513455305347; 
> Because: Opening Region; compaction_queue=(0:0), split_queue=0
> 2013-06-03 10:27:51,151 INFO org.apache.hadoop.hbase.regionserver.Store: 
> Starting compaction of 10 file(s) in t of 
> transdb,1651763699103243149223370692144515807,1369372490168.8b6a4d4aae3099730b353183cf754ec4.
>  into 
> tmpdir=hdfs://namenode01:9000/hbase/transdb/8b6a4d4aae3099730b353183cf754ec4/.tmp,
>  seqid=0, totalSize=234.4g
> 2013-06-03 10:27:51,152 DEBUG org.apache.hadoop.hbase.regionserver.Compactor: 
> Compacting 
> hdfs://namenode01:9000/hbase/transdb/8b6a4d4aae3099730b353183cf754ec4/t/0a5fb3caf6be490ba739b77fcaa4b274.c7ce1266ef1f8696ede82f08fceb28ff-hdfs://namenode01:9000/hbase/transdb/c7ce1266ef1f8696ede82f08fceb28ff/t/0a5fb3caf6be490ba739b77fcaa4b274-top,
>  keycount=7948080, bloomtype=NONE, size=106.0m, encoding=NONE
> 2013-06-03 10:27:51,153 DEBUG org.apache.hadoop.hbase.regionserver.Compactor: 
> Compacting 
> hdfs://namenode01:9000/hbase/transdb/8b6a4d4aae3099730b353183cf754ec4/t/6bd849c93e3a45d4a4c8a5ab691f9b22.c7ce1266ef1f8696ede82f08fceb28ff-hdfs://namenode01:9000/hbase/transdb/c7ce1266ef1f8696ede82f08fceb28ff/t/6bd849c93e3a45d4a4c8a5ab691f9b22-top,
>  keycount=153911753, bloomtype=NONE, size=2.0g, encoding=NONE
> 2013-06-03 10:27:51,153 DEBUG org.apache.hadoop.hbase.regionserver.Compactor: 
> Compacting 
> hdfs://namenode01:9000/hbase/transdb/8b6a4d4aae3099730b353183cf754ec4/t/191ef6d1076341e1b7fb5c775d0f9fdb.c7ce1266ef1f8696ede82f08fceb28ff-hdfs://namenode01:9000/hbase/transdb/c7ce1266ef1f8696ede82f08fceb28ff/t/191ef6d1076341e1b7fb5c775d0f9fdb-top,
>  keycount=159372725, bloomtype=NONE, size=2.1g, encoding=NONE
> 2013-06-03 10:27:51,154 DEBUG org.apache.hadoop.hbase.regionserver.Compactor: 
> Compacting 
> hdfs://namenode01:9000/hbase/transdb/8b6a4d4aae3099730b353183cf754ec4/t/d907a7dca7d8466f843f126429f05665.c7ce1266ef1f8696ede82f08fceb28ff-hdfs://namenode01:9000/hbase/transdb/c7ce1266ef1f8696ede82f08fceb28ff/t/d907a7dca7d8466f843f126429f05665-top,
>  keycount=808800, bloomtype=NONE, size=10.7m, encoding=NONE
> 2013-06-03 10:27:51,154 DEBUG org.apache.hadoop.hbase.regionserver.Compactor: 
> Compacting 
> hdfs://namenode01:9000/hbase/transdb/8b6a4d4aae3099730b353183cf754ec4/t/35399e73e3764d0b88251c7285cf5406.c7ce1266ef1f8696ede82f08fceb28ff-hdfs://namenode01:9000/hbase/transdb/c7ce1266ef1f8696ede82f08fceb28ff/t/35399e73e3764d0b88251c7285cf5406-top,
>  keycount=133146091, bloomtype=NONE, size=1.7g, encoding=NONE
> 2013-06-03 10:27:51,154 DEBUG org.apache.hadoop.hbase.regionserver.Compactor: 
> Compacting 
> hdfs://namenode01:9000/hbase/transdb/8b6a4d4aae3099730b353183cf754ec4/t/9d5583ffde334bc79a692d400af7bae0.c7ce1266ef1f8696ede82f08fceb28ff-hdfs://namenode01:9000/hbase/transdb/c7ce1266ef1f8696ede82f08fceb28ff/t/9d5583ffde334bc79a692d400af7bae0-top,
>  keycount=159372708, bloomtype=NONE, size=2.1g, encoding=NONE
> 2013-06-03 10:27:51,155 DEBUG org.apache.hadoop.h

[jira] [Commented] (HBASE-14949) Skip duplicate entries when replay WAL.

2015-12-10 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052140#comment-15052140
 ] 

stack commented on HBASE-14949:
---

Should we add a test that asserts this finding/expectation?

> Skip duplicate entries when replay WAL.
> ---
>
> Key: HBASE-14949
> URL: https://issues.apache.org/jira/browse/HBASE-14949
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Heng Chen
> Attachments: HBASE-14949.patch, HBASE-14949_v1.patch
>
>
> As per the HBASE-14004 design, there will be duplicate entries in different 
> WALs. This happens when an hflush fails: we close the old WAL at the 'acked 
> hflushed' length, then open a new WAL and write the unacked hflushed entries 
> into it.
> So there may be some overlap between the old WAL and the new WAL.
> We should skip the duplicate entries on replay. I think it does no harm to 
> current logic, so maybe we do it first. 





[jira] [Commented] (HBASE-14949) Skip duplicate entries when replay WAL.

2015-12-10 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052136#comment-15052136
 ] 

Duo Zhang commented on HBASE-14949:
---

Oh, seems fine. {{DefaultMemStore}} uses a {{ConcurrentSkipListMap}} and 
{{KeyValueHeap}} uses a {{PriorityQueue}}. They both use a {{CellComparator}} 
to compare entries and will replace the old entry when equal. A 
{{CellComparator}} first compares the rowkey, then family, qualifier, and 
timestamp, and last compares the sequence id. In our scenario, all of these 
are equal, so the new entry will overwrite the old entry.

Thanks.
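The overwrite behavior can be illustrated with plain JDK code (not HBase internals): when the map's comparator deems two entries equal, a second insert replaces the first, so a replayed duplicate leaves only one copy behind.

```java
import java.util.concurrent.ConcurrentSkipListMap;

// Illustrative only: a skip list keyed on entry identity keeps a single
// copy when the same entry is replayed twice.
public class DuplicateReplaySketch {

    public static void main(String[] args) {
        ConcurrentSkipListMap<String, String> memstore = new ConcurrentSkipListMap<>();
        String key = "row1/fam:qual/ts=5/seq=9"; // stands in for a Cell key
        memstore.put(key, "value");  // first replay of the WAL entry
        memstore.put(key, "value");  // duplicate replay overwrites in place
        System.out.println(memstore.size()); // 1
    }
}
```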

> Skip duplicate entries when replay WAL.
> ---
>
> Key: HBASE-14949
> URL: https://issues.apache.org/jira/browse/HBASE-14949
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Heng Chen
> Attachments: HBASE-14949.patch, HBASE-14949_v1.patch
>
>
> As HBASE-14004 design,  there will be duplicate entries in different WAL.  It 
> happens when one hflush failed, we will close old WAL with 'acked hflushed' 
> length,  then open a new WAL and write the unacked hlushed entries into it.
> So there maybe some overlap between old WAL and new WAL.
> We should skip the duplicate entries when replay.  I think it has no harm to 
> current logic, maybe we do it first. 





[jira] [Commented] (HBASE-14534) Bump yammer/coda/dropwizard metrics dependency version

2015-12-10 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052133#comment-15052133
 ] 

Mikhail Antonov commented on HBASE-14534:
-

[~ndimiduk] any feedback? I'm fixing a few small things and will post an 
updated patch shortly.

> Bump yammer/coda/dropwizard metrics dependency version
> --
>
> Key: HBASE-14534
> URL: https://issues.apache.org/jira/browse/HBASE-14534
> Project: HBase
>  Issue Type: Task
>Reporter: Nick Dimiduk
>Assignee: Mikhail Antonov
>Priority: Minor
> Attachments: HBASE-14534.patch, wip.patch
>
>
> After HBASE-12911 lands, let's update our dependency to the latest 
> incarnation of this library. I guess they're now calling it Dropwizard 
> Metrics.





[jira] [Commented] (HBASE-14960) Fallback to using default RPCControllerFactory if class cannot be loaded

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052130#comment-15052130
 ] 

Hudson commented on HBASE-14960:


FAILURE: Integrated in HBase-1.2 #436 (See 
[https://builds.apache.org/job/HBase-1.2/436/])
HBASE-14960 Fallback to using default RPCControllerFactory if class (enis: rev 
136d5aabb2e71b6c394e40b2c4f0a500d0bf21c5)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcControllerFactory.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestRpcControllerFactory.java


> Fallback to using default RPCControllerFactory if class cannot be loaded
> 
>
> Key: HBASE-14960
> URL: https://issues.apache.org/jira/browse/HBASE-14960
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: hbase-14960_v1.patch, hbase-14960_v2.patch, 
> hbase-14960_v3.patch, hbase-14960_v4.patch
>
>
> In Phoenix + HBase clusters, the hbase-site.xml configuration will point to a 
> custom rpc controller factory, a Phoenix-specific one that configures 
> the priorities for the index and system catalog tables. 
> However, sometimes these Phoenix-enabled clusters are used from pure-HBase 
> client applications, resulting in ClassNotFoundExceptions in application code 
> or MapReduce jobs. Since the hbase configuration is shared between 
> Phoenix clients and HBase clients, having different configurations on the 
> client side is hard. 
> We can instead try to load the RPCControllerFactory from the conf and, if it is 
> not found, fall back to the default one (in case this is a pure HBase client). 
> If Phoenix is already on the classpath, it will work as usual. 
> This does not affect the rpc scheduler factory since it is only used on the 
> server side. 
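The fallback described above can be sketched as follows. This is a minimal illustration, not the actual HBase patch; the class and method names here are invented for the example, and only the config key `hbase.rpc.controllerfactory.class` is taken from the release note in this thread.

```java
// Hypothetical sketch of the fallback: try to instantiate the configured
// factory class; on any failure (e.g. ClassNotFoundException when Phoenix
// jars are absent), use the default factory instead of failing the client.
public class ControllerFactorySketch {
    static final String CUSTOM_CONTROLLER_CONF_KEY = "hbase.rpc.controllerfactory.class";

    static Object instantiateFactory(String configuredClassName, Object defaultFactory) {
        try {
            Class<?> clazz = Class.forName(configuredClassName);
            return clazz.getDeclaredConstructor().newInstance();
        } catch (Exception e) {
            // In the real fix a warning would be logged here; the important
            // part is that a missing class no longer aborts the client.
            return defaultFactory;
        }
    }
}
```

With this shape, a Phoenix-enabled config keeps working when Phoenix is on the classpath, and a pure-HBase client silently falls back.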



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14745) Shade the last few dependencies in hbase-shaded-client

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052128#comment-15052128
 ] 

Hudson commented on HBASE-14745:


FAILURE: Integrated in HBase-1.2 #436 (See 
[https://builds.apache.org/job/HBase-1.2/436/])
HBASE-14745 Shade the last few dependencies in hbase-shaded-client (eclark: rev 
9ed1793c28a6702e4940eabb6b17b5049bf4914b)
* hbase-shaded/pom.xml
* pom.xml


> Shade the last few dependencies in hbase-shaded-client
> --
>
> Key: HBASE-14745
> URL: https://issues.apache.org/jira/browse/HBASE-14745
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14745-v1.patch, HBASE-14745.patch
>
>
> * junit
> * hadoop common



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14965) Remove un-used hbase-spark in branch-1 +

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052127#comment-15052127
 ] 

Hudson commented on HBASE-14965:


FAILURE: Integrated in HBase-1.2 #436 (See 
[https://builds.apache.org/job/HBase-1.2/436/])
HBASE-14965 Remove un-used hbase-spark in branch-1 (eclark: rev 
7e036b4469c11670fd3e88c9cf68592fc95fa915)
* hbase-spark/src/test/resources/log4j.properties
* hbase-spark/src/test/resources/hbase-site.xml


> Remove un-used hbase-spark in branch-1 +
> 
>
> Key: HBASE-14965
> URL: https://issues.apache.org/jira/browse/HBASE-14965
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 1.2.0, 1.3.0
>
> Attachments: HBASE-14965.0-branch-1.patch
>
>
> Seems like some files for this new feature slipped in to an old branch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14946) Don't allow multi's to over run the max result size.

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052129#comment-15052129
 ] 

Hudson commented on HBASE-14946:


FAILURE: Integrated in HBase-1.2 #436 (See 
[https://builds.apache.org/job/HBase-1.2/436/])
HBASE-14946 Don't allow multi's to over run the max result size. (eclark: rev 
8953da28cb3ddc22a56661b35657aaa68f445a7a)
* 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSource.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultiRespectsLimits.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcCallContext.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ProcedurePrepareLatch.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/MultiActionResultTooLarge.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/client/VersionInfoUtil.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServer.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceImpl.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/RetryImmediatelyException.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/Result.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java


> Don't allow multi's to over run the max result size.
> 
>
> Key: HBASE-14946
> URL: https://issues.apache.org/jira/browse/HBASE-14946
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14946-v1.patch, HBASE-14946-v10.patch, 
> HBASE-14946-v11.patch, HBASE-14946-v2.patch, HBASE-14946-v3.patch, 
> HBASE-14946-v5.patch, HBASE-14946-v6.patch, HBASE-14946-v7.patch, 
> HBASE-14946-v8.patch, HBASE-14946-v9.patch, HBASE-14946.patch
>
>
> If a user puts together a list of tons of different gets against a table, we 
> will send them along in a single multi. The server un-wraps each get in the 
> multi. While no single get may be over the size limit, the total might be.
> We should protect the server from this. 
> We should batch up on the server side so each RPC is smaller.
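The server-side batching idea can be sketched as a running size total that cuts the multi response into chunks. This is an illustrative simplification (sizes stand in for actual `Result` heap sizes, and the class name is invented), not the code from the attached patches.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: accumulate per-get result sizes and close the current
// response chunk once adding the next result would cross maxResultSize.
// The remaining gets would then go out in a later RPC.
public class MultiSizeLimitSketch {
    static List<List<Integer>> chunkBySize(List<Integer> resultSizes, int maxResultSize) {
        List<List<Integer>> chunks = new ArrayList<>();
        List<Integer> current = new ArrayList<>();
        int running = 0;
        for (int size : resultSizes) {
            if (!current.isEmpty() && running + size > maxResultSize) {
                chunks.add(current); // this chunk is full; start a new one
                current = new ArrayList<>();
                running = 0;
            }
            current.add(size); // a chunk always holds at least one result
            running += size;
        }
        if (!current.isEmpty()) {
            chunks.add(current);
        }
        return chunks;
    }
}
```

Note each chunk keeps at least one result even if that single result exceeds the limit, so progress is always made.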



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14769) Remove unused functions and duplicate javadocs from HBaseAdmin

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052120#comment-15052120
 ] 

Hudson commented on HBASE-14769:


FAILURE: Integrated in HBase-Trunk_matrix #545 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/545/])
HBASE-14769 Remove unused functions and duplicate javadocs from (stack: rev 
bebcc09fb392b3494131c792520406c001dbd511)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMetaWithReplicas.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/TestAcidGuarantees.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mob/TestExpiredMobFileCleaner.java
* hbase-shell/src/main/ruby/hbase/admin.rb
* hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAdmin2.java
* src/main/asciidoc/_chapters/schema_design.adoc
* hbase-shell/src/test/ruby/hbase/admin_test.rb
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/Admin.java
* hbase-shell/src/main/ruby/hbase/visibility_labels.rb
* src/main/asciidoc/_chapters/cp.adoc
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/compactions/TestFIFOCompactionPolicy.java
* src/main/asciidoc/_chapters/ops_mgt.adoc
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/HBaseAdmin.java
* 
hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestIngestWithMOB.java
* hbase-shell/src/main/ruby/hbase/security.rb
* hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* 
hbase-it/src/test/java/org/apache/hadoop/hbase/IntegrationTestDDLMasterFailover.java
* src/main/asciidoc/_chapters/external_apis.adoc


> Remove unused functions and duplicate javadocs from HBaseAdmin 
> ---
>
> Key: HBASE-14769
> URL: https://issues.apache.org/jira/browse/HBASE-14769
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-14769-master-v2.patch, 
> HBASE-14769-master-v3.patch, HBASE-14769-master-v4.patch, 
> HBASE-14769-master-v5.patch, HBASE-14769-master-v6.patch, 
> HBASE-14769-master-v7.patch, HBASE-14769-master-v8.patch, 
> HBASE-14769-master-v9.patch, HBASE-14769-master.patch
>
>
> HBaseAdmin is marked private, so we are removing the functions that are not 
> used anywhere.
> Also, the javadocs of the overridden functions are the same as the 
> corresponding ones in Admin.java. Since javadocs are automatically inherited 
> from the interface, we can remove these hundreds of redundant lines.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14906) Improvements on FlushLargeStoresPolicy

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052117#comment-15052117
 ] 

Hudson commented on HBASE-14906:


FAILURE: Integrated in HBase-Trunk_matrix #545 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/545/])
HBASE-14906 Improvements on FlushLargeStoresPolicy (Yu Li) (stack: rev 
c15e0af84aeb4ab992482a957c2b242d2ab57d76)
* hbase-common/src/main/resources/hbase-default.xml
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/FlushLargeStoresPolicy.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestPerColumnFamilyFlush.java


> Improvements on FlushLargeStoresPolicy
> --
>
> Key: HBASE-14906
> URL: https://issues.apache.org/jira/browse/HBASE-14906
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0
>
> Attachments: HBASE-14906.patch, HBASE-14906.v2.patch, 
> HBASE-14906.v3.patch, HBASE-14906.v4.patch, HBASE-14906.v4.patch
>
>
> While reviewing FlushLargeStoresPolicy, we found the following possible 
> improvements:
> 1. Currently in selectStoresToFlush we always do the selection, no matter how 
> many families there actually are; this is unnecessary for a single family.
> 2. The default value for hbase.hregion.percolumnfamilyflush.size.lower.bound 
> cannot fit all cases and requires the user to know details of the 
> implementation to set it properly. We propose to use 
> "hbase.hregion.memstore.flush.size/column_family_number" instead:
> {noformat}
> <property>
>   <name>hbase.hregion.percolumnfamilyflush.size.lower.bound</name>
>   <value>16777216</value>
>   <description>
>     If FlushLargeStoresPolicy is used and there are multiple column families,
>     then every time that we hit the total memstore limit, we find out all the
>     column families whose memstores exceed a "lower bound" and only flush them
>     while retaining the others in memory. The "lower bound" will be
>     "hbase.hregion.memstore.flush.size / column_family_number" by default
>     unless value of this property is larger than that. If none of the families
>     have their memstore size more than lower bound, all the memstores will be
>     flushed (just as usual).
>   </description>
> </property>
> {noformat}
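The proposed default can be reduced to a one-line rule: the effective lower bound is `flush.size / column_family_number`, unless the explicitly configured property is larger. A minimal sketch (class and method names are illustrative, not from the patch):

```java
// Hypothetical sketch of the proposed lower-bound computation for
// FlushLargeStoresPolicy, per the description above.
public class FlushLowerBoundSketch {
    // 16777216 bytes = 16 MB, the default from the hbase-default.xml snippet.
    static final long DEFAULT_LOWER_BOUND = 16L * 1024 * 1024;

    static long effectiveLowerBound(long configured, long memstoreFlushSize, int familyCount) {
        // Derived bound: split the region flush size evenly across families.
        long derived = memstoreFlushSize / familyCount;
        // The configured property only wins when it is larger than the
        // derived value.
        return Math.max(configured, derived);
    }
}
```

So with a 128 MB region flush size and 4 families, the bound becomes 32 MB rather than the fixed 16 MB default.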



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14941) locate_region shell command

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052119#comment-15052119
 ] 

Hudson commented on HBASE-14941:


FAILURE: Integrated in HBase-Trunk_matrix #545 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/545/])
HBASE-14941 locate_region shell command (matteo.bertozzi: rev 
6f8d5e86cee2554ebbe6b4d34d828deff04aa894)
* hbase-shell/src/main/ruby/shell.rb
* hbase-shell/src/main/ruby/hbase/admin.rb
* hbase-shell/src/main/ruby/shell/commands/locate_region.rb


> locate_region shell command
> ---
>
> Key: HBASE-14941
> URL: https://issues.apache.org/jira/browse/HBASE-14941
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Trivial
> Fix For: 2.0.0, 1.2.0
>
> Attachments: HBASE-14941-v1_branch-1.patch, 
> HBASE-14941-v2_branch-1.patch, HBASE-14941_branch-1.patch
>
>
> Sometimes it is helpful to get the region location for a specified key 
> without having to scan meta and look at the keys.
> So, the shell gains something like:
> {noformat}
> hbase(main):008:0> locate_region 'testtb', 'z'
> HOST REGION   
> 
>  localhost:42006 {ENCODED => 7486fee0129f0e3a3e671fec4a4255d5, 
>   NAME => 
> 'testtb,m,1449508841130.7486fee0129f0e3a3e671fec4a4255d5.',
>   STARTKEY => 'm', ENDKEY => ''}  
> 1 row(s) in 0.0090 seconds
> {noformat}
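Under the hood this kind of lookup (which the Java client exposes via `RegionLocator#getRegionLocation`) amounts to finding the region whose `[startKey, endKey)` range contains the row. A self-contained sketch over the sorted start keys, with invented names and string keys standing in for byte arrays:

```java
import java.util.Arrays;

// Hypothetical sketch of the lookup: given the sorted start keys of a
// table's regions, return the index of the region containing the row key.
public class LocateRegionSketch {
    // Region 0 is assumed to start at the empty key, so every row matches
    // some region. binarySearch returns -(insertionPoint) - 1 on a miss,
    // and the containing region is the one just before the insertion point.
    static int regionIndexFor(String[] sortedStartKeys, String row) {
        int idx = Arrays.binarySearch(sortedStartKeys, row);
        return idx >= 0 ? idx : -(idx + 1) - 1;
    }
}
```

For start keys `["", "g", "m"]`, row `'z'` resolves to the region starting at `'m'` with an empty end key, matching the shell output above.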



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14901) There is duplicated code to create/manage encryption keys

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052118#comment-15052118
 ] 

Hudson commented on HBASE-14901:


FAILURE: Integrated in HBase-Trunk_matrix #545 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/545/])
HBASE-14901 Remove duplicate code to create/manage encryption keys (garyh: rev 
9511150bd60e5149856c23c90422e2da7114892e)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mob/mapreduce/MemStoreWrapper.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* 
hbase-client/src/test/java/org/apache/hadoop/hbase/security/TestEncryptionUtil.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderImpl.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/mob/MobUtils.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/security/EncryptionUtil.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mob/compactions/PartitionedMobCompactor.java


> There is duplicated code to create/manage encryption keys
> -
>
> Key: HBASE-14901
> URL: https://issues.apache.org/jira/browse/HBASE-14901
> Project: HBase
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.0.0
>Reporter: Nate Edel
>Assignee: Nate Edel
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14901.1.patch, HBASE-14901.2.patch, 
> HBASE-14901.3.patch, HBASE-14901.5.patch, HBASE-14901.6.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> There is code duplicated from MobUtils.createEncryptionContext in HStore, and 
> a subset of that code in HFileReaderImpl.
> Refactored the key selection and moved both to EncryptionUtil.java.
> Can't figure out how to write a unit test for this, but there's no new code, 
> just refactoring.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14960) Fallback to using default RPCControllerFactory if class cannot be loaded

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052105#comment-15052105
 ] 

Hudson commented on HBASE-14960:


SUCCESS: Integrated in HBase-1.2-IT #334 (See 
[https://builds.apache.org/job/HBase-1.2-IT/334/])
HBASE-14960 Fallback to using default RPCControllerFactory if class (enis: rev 
136d5aabb2e71b6c394e40b2c4f0a500d0bf21c5)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/ipc/RpcControllerFactory.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestRpcControllerFactory.java


> Fallback to using default RPCControllerFactory if class cannot be loaded
> 
>
> Key: HBASE-14960
> URL: https://issues.apache.org/jira/browse/HBASE-14960
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: hbase-14960_v1.patch, hbase-14960_v2.patch, 
> hbase-14960_v3.patch, hbase-14960_v4.patch
>
>
> In Phoenix + HBase clusters, the hbase-site.xml configuration will point to a 
> custom rpc controller factory, a Phoenix-specific one that configures 
> the priorities for the index and system catalog tables. 
> However, sometimes these Phoenix-enabled clusters are used from pure-HBase 
> client applications, resulting in ClassNotFoundExceptions in application code 
> or MapReduce jobs. Since the hbase configuration is shared between 
> Phoenix clients and HBase clients, having different configurations on the 
> client side is hard. 
> We can instead try to load the RPCControllerFactory from the conf and, if it is 
> not found, fall back to the default one (in case this is a pure HBase client). 
> If Phoenix is already on the classpath, it will work as usual. 
> This does not affect the rpc scheduler factory since it is only used on the 
> server side. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14946) Don't allow multi's to over run the max result size.

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052104#comment-15052104
 ] 

Hudson commented on HBASE-14946:


SUCCESS: Integrated in HBase-1.2-IT #334 (See 
[https://builds.apache.org/job/HBase-1.2-IT/334/])
HBASE-14946 Don't allow multi's to over run the max result size. (eclark: rev 
8953da28cb3ddc22a56661b35657aaa68f445a7a)
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/Result.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ProcedurePrepareLatch.java
* 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSource.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultiRespectsLimits.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcCallContext.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceImpl.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/client/VersionInfoUtil.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServer.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/MultiActionResultTooLarge.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/RetryImmediatelyException.java


> Don't allow multi's to over run the max result size.
> 
>
> Key: HBASE-14946
> URL: https://issues.apache.org/jira/browse/HBASE-14946
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14946-v1.patch, HBASE-14946-v10.patch, 
> HBASE-14946-v11.patch, HBASE-14946-v2.patch, HBASE-14946-v3.patch, 
> HBASE-14946-v5.patch, HBASE-14946-v6.patch, HBASE-14946-v7.patch, 
> HBASE-14946-v8.patch, HBASE-14946-v9.patch, HBASE-14946.patch
>
>
> If a user puts together a list of tons of different gets against a table, we 
> will send them along in a single multi. The server un-wraps each get in the 
> multi. While no single get may be over the size limit, the total might be.
> We should protect the server from this. 
> We should batch up on the server side so each RPC is smaller.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14745) Shade the last few dependencies in hbase-shaded-client

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052103#comment-15052103
 ] 

Hudson commented on HBASE-14745:


SUCCESS: Integrated in HBase-1.2-IT #334 (See 
[https://builds.apache.org/job/HBase-1.2-IT/334/])
HBASE-14745 Shade the last few dependencies in hbase-shaded-client (eclark: rev 
9ed1793c28a6702e4940eabb6b17b5049bf4914b)
* pom.xml
* hbase-shaded/pom.xml


> Shade the last few dependencies in hbase-shaded-client
> --
>
> Key: HBASE-14745
> URL: https://issues.apache.org/jira/browse/HBASE-14745
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14745-v1.patch, HBASE-14745.patch
>
>
> * junit
> * hadoop common



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14960) Fallback to using default RPCControllerFactory if class cannot be loaded

2015-12-10 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-14960:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
Release Note: If the configured RPC controller factory (via 
hbase.rpc.controllerfactory.class) cannot be found in the classpath or loaded, 
we fall back to using the default RPC controller factory in HBase.
  Status: Resolved  (was: Patch Available)

Committed this. Thanks for looking. 

> Fallback to using default RPCControllerFactory if class cannot be loaded
> 
>
> Key: HBASE-14960
> URL: https://issues.apache.org/jira/browse/HBASE-14960
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: hbase-14960_v1.patch, hbase-14960_v2.patch, 
> hbase-14960_v3.patch, hbase-14960_v4.patch
>
>
> In Phoenix + HBase clusters, the hbase-site.xml configuration will point to a 
> custom rpc controller factory, a Phoenix-specific one that configures 
> the priorities for the index and system catalog tables. 
> However, sometimes these Phoenix-enabled clusters are used from pure-HBase 
> client applications, resulting in ClassNotFoundExceptions in application code 
> or MapReduce jobs. Since the hbase configuration is shared between 
> Phoenix clients and HBase clients, having different configurations on the 
> client side is hard. 
> We can instead try to load the RPCControllerFactory from the conf and, if it is 
> not found, fall back to the default one (in case this is a pure HBase client). 
> If Phoenix is already on the classpath, it will work as usual. 
> This does not affect the rpc scheduler factory since it is only used on the 
> server side. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14965) Remove un-used hbase-spark in branch-1 +

2015-12-10 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052076#comment-15052076
 ] 

Matteo Bertozzi commented on HBASE-14965:
-

+1

> Remove un-used hbase-spark in branch-1 +
> 
>
> Key: HBASE-14965
> URL: https://issues.apache.org/jira/browse/HBASE-14965
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 1.2.0, 1.3.0
>
> Attachments: HBASE-14965.0-branch-1.patch
>
>
> Seems like some files for this new feature slipped in to an old branch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14965) Remove un-used hbase-spark in branch-1 +

2015-12-10 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14965:
--
Fix Version/s: 1.3.0
   1.2.0
Affects Version/s: 1.3.0
   1.2.0
   Status: Patch Available  (was: Open)

> Remove un-used hbase-spark in branch-1 +
> 
>
> Key: HBASE-14965
> URL: https://issues.apache.org/jira/browse/HBASE-14965
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 1.2.0, 1.3.0
>
> Attachments: HBASE-14965.0-branch-1.patch
>
>
> Seems like some files for this new feature slipped in to an old branch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14965) Remove un-used hbase-spark in branch-1 +

2015-12-10 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14965?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14965:
--
Attachment: HBASE-14965.0-branch-1.patch

> Remove un-used hbase-spark in branch-1 +
> 
>
> Key: HBASE-14965
> URL: https://issues.apache.org/jira/browse/HBASE-14965
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-14965.0-branch-1.patch
>
>
> Seems like some files for this new feature slipped in to an old branch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14965) Remove un-used hbase-spark in branch-1 +

2015-12-10 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-14965:
-

 Summary: Remove un-used hbase-spark in branch-1 +
 Key: HBASE-14965
 URL: https://issues.apache.org/jira/browse/HBASE-14965
 Project: HBase
  Issue Type: Bug
  Components: build
Reporter: Elliott Clark
Assignee: Elliott Clark


Seems like some files for this new feature slipped in to an old branch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14949) Skip duplicate entries when replay WAL.

2015-12-10 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052072#comment-15052072
 ] 

Duo Zhang commented on HBASE-14949:
---

And the only thing I am still a little worried about is how the memstore and 
compaction deal with two cells that have the same rowkey, same family, same 
qualifier, and same timestamp. Let me check the code and have a try.

Thanks.

> Skip duplicate entries when replay WAL.
> ---
>
> Key: HBASE-14949
> URL: https://issues.apache.org/jira/browse/HBASE-14949
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Heng Chen
> Attachments: HBASE-14949.patch, HBASE-14949_v1.patch
>
>
> As per the HBASE-14004 design, there will be duplicate entries in different 
> WALs. This happens when one hflush fails: we close the old WAL at the 'acked 
> hflushed' length, then open a new WAL and write the unacked hflushed entries 
> into it.
> So there may be some overlap between the old WAL and the new WAL.
> We should skip the duplicate entries when replaying. I think this does no harm 
> to the current logic, so maybe we do it first. 
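One common way to skip such overlap during replay is to track the highest sequence id already applied to the region and ignore any entry at or below it. This is a minimal sketch of that idea under stated assumptions (bare sequence ids stand in for WAL entries; the class and method names are invented, not from the attached patches):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: replay WAL entries in order, skipping any whose
// sequence id has already been applied. Duplicates carried over from the
// old WAL into the new WAL are thereby dropped.
public class WalReplaySketch {
    static List<Long> replay(List<Long> walSequenceIds, long maxAppliedSeqId) {
        List<Long> applied = new ArrayList<>();
        for (long seqId : walSequenceIds) {
            if (seqId <= maxAppliedSeqId) {
                continue; // duplicate of an already-applied edit; skip it
            }
            applied.add(seqId);
            maxAppliedSeqId = seqId;
        }
        return applied;
    }
}
```

For example, if the old WAL ended with edits 5, 6, 7 and the new WAL re-wrote 6, 7 before continuing with 8, each edit is applied exactly once.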



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14946) Don't allow multi's to over run the max result size.

2015-12-10 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14946:
--
Release Note: The HBase region server will now send a chunk of get 
responses to a client if the total response size is too large. This is only 
done for clients 1.2.0 and later; older clients keep the old behavior by 
default.

> Don't allow multi's to over run the max result size.
> 
>
> Key: HBASE-14946
> URL: https://issues.apache.org/jira/browse/HBASE-14946
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14946-v1.patch, HBASE-14946-v10.patch, 
> HBASE-14946-v11.patch, HBASE-14946-v2.patch, HBASE-14946-v3.patch, 
> HBASE-14946-v5.patch, HBASE-14946-v6.patch, HBASE-14946-v7.patch, 
> HBASE-14946-v8.patch, HBASE-14946-v9.patch, HBASE-14946.patch
>
>
> If a user puts together a list of tons of different gets against a table, we 
> will send them along in a single multi. The server un-wraps each get in the 
> multi. While no single get may be over the size limit, the total might be.
> We should protect the server from this. 
> We should batch up on the server side so each RPC is smaller.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14946) Don't allow multi's to over run the max result size.

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052065#comment-15052065
 ] 

Hudson commented on HBASE-14946:


SUCCESS: Integrated in HBase-1.3-IT #367 (See 
[https://builds.apache.org/job/HBase-1.3-IT/367/])
HBASE-14946 Don't allow multi's to over run the max result size. (eclark: rev 
8508dd07ff8038f6df192087c308b816baeac29d)
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/RetryImmediatelyException.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/Result.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServer.java
* 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSource.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/MultiActionResultTooLarge.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/master/procedure/ProcedurePrepareLatch.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestMultiRespectsLimits.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/ConnectionManager.java
* 
hbase-hadoop2-compat/src/main/java/org/apache/hadoop/hbase/ipc/MetricsHBaseServerSourceImpl.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/client/VersionInfoUtil.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncProcess.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcCallContext.java


> Don't allow multi's to over run the max result size.
> 
>
> Key: HBASE-14946
> URL: https://issues.apache.org/jira/browse/HBASE-14946
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14946-v1.patch, HBASE-14946-v10.patch, 
> HBASE-14946-v11.patch, HBASE-14946-v2.patch, HBASE-14946-v3.patch, 
> HBASE-14946-v5.patch, HBASE-14946-v6.patch, HBASE-14946-v7.patch, 
> HBASE-14946-v8.patch, HBASE-14946-v9.patch, HBASE-14946.patch
>
>
> If a user puts together a list of tons of different gets against a table, we 
> will send them along in a single multi. The server un-wraps each get in the 
> multi. While no single get may be over the size limit, the total might be.
> We should protect the server from this. 
> We should batch up on the server side so each RPC is smaller.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14745) Shade the last few dependencies in hbase-shaded-client

2015-12-10 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14745:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.3.0
   2.0.0
   Status: Resolved  (was: Patch Available)

> Shade the last few dependencies in hbase-shaded-client
> --
>
> Key: HBASE-14745
> URL: https://issues.apache.org/jira/browse/HBASE-14745
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Blocker
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14745-v1.patch, HBASE-14745.patch
>
>
> * junit
> * hadoop common



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14906) Improvements on FlushLargeStoresPolicy

2015-12-10 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052062#comment-15052062
 ] 

Yu Li commented on HBASE-14906:
---

Thanks for helping review and commit, sir! [~stack]

> Improvements on FlushLargeStoresPolicy
> --
>
> Key: HBASE-14906
> URL: https://issues.apache.org/jira/browse/HBASE-14906
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0
>
> Attachments: HBASE-14906.patch, HBASE-14906.v2.patch, 
> HBASE-14906.v3.patch, HBASE-14906.v4.patch, HBASE-14906.v4.patch
>
>
> When checking FlushLargeStoresPolicy, I found the possible improvements below:
> 1. Currently in selectStoresToFlush, we do the selection no matter how many 
> families there actually are, which is not necessary for a single family.
> 2. The default value for hbase.hregion.percolumnfamilyflush.size.lower.bound 
> cannot fit all cases, and requires the user to know details of the 
> implementation to set it properly. We propose to use 
> "hbase.hregion.memstore.flush.size/column_family_number" instead:
> {noformat}
> <property>
>   <name>hbase.hregion.percolumnfamilyflush.size.lower.bound</name>
>   <value>16777216</value>
>   <description>
>     If FlushLargeStoresPolicy is used and there are multiple column families,
>     then every time that we hit the total memstore limit, we find out all the
>     column families whose memstores exceed a "lower bound" and only flush them
>     while retaining the others in memory. The "lower bound" will be
>     "hbase.hregion.memstore.flush.size / column_family_number" by default
>     unless the value of this property is larger than that. If none of the
>     families have their memstore size above the lower bound, all the memstores
>     will be flushed (just as usual).
>   </description>
> </property>
> {noformat}
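The proposed default can be sketched as a tiny computation (illustrative only, not the actual HBase code): the effective per-family lower bound is the flush size divided by the number of column families, unless the explicitly configured property value is larger.

```python
def effective_lower_bound(flush_size, num_families, configured_bound):
    """Per-family flush lower bound per the proposal (illustrative sketch)."""
    # Default to flush_size / column_family_number, but let an explicitly
    # configured larger value win, matching the description above.
    return max(configured_bound, flush_size // num_families)
```

For example, with a 128 MB flush size and 4 families, the effective lower bound would be 32 MB even though the configured default is 16 MB.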





[jira] [Commented] (HBASE-14949) Skip duplicate entries when replay WAL.

2015-12-10 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052064#comment-15052064
 ] 

Duo Zhang commented on HBASE-14949:
---

{quote}
SequenceId has a region scope. If you play an edit into a Region twice, it is 
fine.
{quote}
So it seems we do not need to skip duplicated WAL entries when replaying? We 
have a timestamp in each WAL entry, so it is safe to replay multiple times. And 
I checked the code again: for increment and append, we first get the row from 
the region, do the increment or append, and log the entire cell out to the WAL, 
so it is also safe to replay those multiple times.

> Skip duplicate entries when replay WAL.
> ---
>
> Key: HBASE-14949
> URL: https://issues.apache.org/jira/browse/HBASE-14949
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Heng Chen
> Attachments: HBASE-14949.patch, HBASE-14949_v1.patch
>
>
> Per the HBASE-14004 design, there will be duplicate entries in different 
> WALs. This happens when one hflush fails: we close the old WAL at the 'acked 
> hflushed' length, then open a new WAL and write the unacked hflushed entries 
> into it.
> So there may be some overlap between the old WAL and the new WAL.
> We should skip the duplicate entries when replaying. I think it does no harm 
> to the current logic; maybe we should do it first. 





[jira] [Updated] (HBASE-14946) Don't allow multi's to over run the max result size.

2015-12-10 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14946:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.3.0
   1.2.0
   2.0.0
   Status: Resolved  (was: Patch Available)

> Don't allow multi's to over run the max result size.
> 
>
> Key: HBASE-14946
> URL: https://issues.apache.org/jira/browse/HBASE-14946
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14946-v1.patch, HBASE-14946-v10.patch, 
> HBASE-14946-v11.patch, HBASE-14946-v2.patch, HBASE-14946-v3.patch, 
> HBASE-14946-v5.patch, HBASE-14946-v6.patch, HBASE-14946-v7.patch, 
> HBASE-14946-v8.patch, HBASE-14946-v9.patch, HBASE-14946.patch
>
>
> If a user puts a list of tons of different gets into a table, we then 
> send them along in a multi. The server unwraps each get in the multi. While 
> no single get may be over the size limit, the total might be.
> We should protect the server from this. 
> We should batch up on the server side so each RPC is smaller.
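The server-side idea above can be sketched as follows (hypothetical names, not the actual HBase implementation): accumulate get results and cut the response off once the accumulated size crosses the limit, handing the remainder back for a later RPC.

```python
# Hypothetical sketch of size-limited multi-get handling; `fetch` stands in
# for whatever actually reads a row, and all names are illustrative.
def process_multi(gets, fetch, max_result_size):
    """Return (results, remaining): results for this RPC, gets left over."""
    results, total = [], 0
    for i, g in enumerate(gets):
        cell = fetch(g)
        results.append(cell)
        total += len(cell)
        if total >= max_result_size:
            # Size limit reached: stop here; the rest goes in a later RPC.
            return results, gets[i + 1:]
    return results, []
```

The point is to batch on the server so no single response exceeds the max result size, rather than letting the total of many small gets blow past it.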





[jira] [Commented] (HBASE-14941) locate_region shell command

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052040#comment-15052040
 ] 

Hudson commented on HBASE-14941:


SUCCESS: Integrated in HBase-1.3 #430 (See 
[https://builds.apache.org/job/HBase-1.3/430/])
HBASE-14941 locate_region shell command (matteo.bertozzi: rev 
2d74dcfadcb216a19b7502590f93cc2b350a7546)
* hbase-shell/src/main/ruby/shell.rb
* hbase-shell/src/main/ruby/hbase/admin.rb
* hbase-shell/src/main/ruby/shell/commands/locate_region.rb


> locate_region shell command
> ---
>
> Key: HBASE-14941
> URL: https://issues.apache.org/jira/browse/HBASE-14941
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Trivial
> Fix For: 2.0.0, 1.2.0
>
> Attachments: HBASE-14941-v1_branch-1.patch, 
> HBASE-14941-v2_branch-1.patch, HBASE-14941_branch-1.patch
>
>
> Sometimes it is helpful to get the region location for a specified key, 
> without having to scan meta and look at the keys.
> So, having something like this in the shell:
> {noformat}
> hbase(main):008:0> locate_region 'testtb', 'z'
> HOST REGION   
> 
>  localhost:42006 {ENCODED => 7486fee0129f0e3a3e671fec4a4255d5, 
>   NAME => 
> 'testtb,m,1449508841130.7486fee0129f0e3a3e671fec4a4255d5.',
>   STARTKEY => 'm', ENDKEY => ''}  
> 1 row(s) in 0.0090 seconds
> {noformat}





[jira] [Commented] (HBASE-14795) Enhance the spark-hbase scan operations

2015-12-10 Thread Zhan Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052024#comment-15052024
 ] 

Zhan Zhang commented on HBASE-14795:


Thanks [~ted.m] and [~ted_yu] for the help. [~ted.m] If you don't mind, please 
share some information regarding your testing and any issues you find.

> Enhance the spark-hbase scan operations
> ---
>
> Key: HBASE-14795
> URL: https://issues.apache.org/jira/browse/HBASE-14795
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Malaska
>Assignee: Zhan Zhang
>Priority: Minor
> Attachments: 
> 0001-HBASE-14795-Enhance-the-spark-hbase-scan-operations.patch, 
> HBASE-14795-1.patch, HBASE-14795-2.patch, HBASE-14795-3.patch, 
> HBASE-14795-4.patch
>
>
> This is a sub-jira of HBASE-14789.  This jira focuses on replacing 
> TableInputFormat with a more custom scan implementation that will make the 
> following use case more effective.
> Use case:
> When you have multiple scan ranges on a single table within a single query, 
> TableInputFormat will scan the outer range from the scan start key to the end 
> key, where this implementation can be more pointed.





[jira] [Updated] (HBASE-14946) Don't allow multi's to over run the max result size.

2015-12-10 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14946:
--
Attachment: HBASE-14946-v11.patch

Here's what I'm planning on committing.

Mostly just nits, but I also moved the version-info parsing to connection setup.

> Don't allow multi's to over run the max result size.
> 
>
> Key: HBASE-14946
> URL: https://issues.apache.org/jira/browse/HBASE-14946
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Attachments: HBASE-14946-v1.patch, HBASE-14946-v10.patch, 
> HBASE-14946-v11.patch, HBASE-14946-v2.patch, HBASE-14946-v3.patch, 
> HBASE-14946-v5.patch, HBASE-14946-v6.patch, HBASE-14946-v7.patch, 
> HBASE-14946-v8.patch, HBASE-14946-v9.patch, HBASE-14946.patch
>
>
> If a user puts a list of tons of different gets into a table, we then 
> send them along in a multi. The server unwraps each get in the multi. While 
> no single get may be over the size limit, the total might be.
> We should protect the server from this. 
> We should batch up on the server side so each RPC is smaller.





[jira] [Commented] (HBASE-14941) locate_region shell command

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052007#comment-15052007
 ] 

Hudson commented on HBASE-14941:


SUCCESS: Integrated in HBase-1.2 #435 (See 
[https://builds.apache.org/job/HBase-1.2/435/])
HBASE-14941 locate_region shell command (matteo.bertozzi: rev 
512144ed26c45d96638be78da8bde402940c6224)
* hbase-shell/src/main/ruby/shell.rb
* hbase-shell/src/main/ruby/hbase/admin.rb
* hbase-shell/src/main/ruby/shell/commands/locate_region.rb


> locate_region shell command
> ---
>
> Key: HBASE-14941
> URL: https://issues.apache.org/jira/browse/HBASE-14941
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Trivial
> Fix For: 2.0.0, 1.2.0
>
> Attachments: HBASE-14941-v1_branch-1.patch, 
> HBASE-14941-v2_branch-1.patch, HBASE-14941_branch-1.patch
>
>
> Sometimes it is helpful to get the region location for a specified key, 
> without having to scan meta and look at the keys.
> So, having something like this in the shell:
> {noformat}
> hbase(main):008:0> locate_region 'testtb', 'z'
> HOST REGION   
> 
>  localhost:42006 {ENCODED => 7486fee0129f0e3a3e671fec4a4255d5, 
>   NAME => 
> 'testtb,m,1449508841130.7486fee0129f0e3a3e671fec4a4255d5.',
>   STARTKEY => 'm', ENDKEY => ''}  
> 1 row(s) in 0.0090 seconds
> {noformat}





[jira] [Updated] (HBASE-14918) In-Memory MemStore Flush and Compaction

2015-12-10 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14918:
--
Assignee: Eshcar Hillel

> In-Memory MemStore Flush and Compaction
> ---
>
> Key: HBASE-14918
> URL: https://issues.apache.org/jira/browse/HBASE-14918
> Project: HBase
>  Issue Type: Umbrella
>Affects Versions: 2.0.0
>Reporter: Eshcar Hillel
>Assignee: Eshcar Hillel
>
> A memstore serves as the in-memory component of a store unit, absorbing all 
> updates to the store. From time to time these updates are flushed to a file 
> on disk, where they are compacted (by eliminating redundancies) and 
> compressed (i.e., written in a compressed format to reduce their storage 
> size).
> We aim to speed up data access, and therefore suggest applying an in-memory 
> memstore flush, that is, flushing the active in-memory segment into an 
> intermediate buffer where it can be accessed by the application. Data in the 
> buffer is subject to compaction and can be stored in any format that allows 
> it to take up less space in RAM. The less space the buffer consumes, the 
> longer it can reside in memory before data is flushed to disk, resulting in 
> better performance.
> Specifically, the optimization is beneficial for workloads with 
> medium-to-high key churn which incur many redundant cells, like persistent 
> messaging. 
> We suggest structuring the solution as three subtasks (one patch each): 
> (1) Infrastructure - refactoring of the MemStore hierarchy, introducing 
> segment (StoreSegment) as first-class citizen, and decoupling memstore 
> scanner from the memstore implementation;
> (2) Implementation of a new memstore (CompactingMemstore) with non-optimized 
> immutable segment representation, and 
> (3) Memory optimization including compressed format representation and 
> offheap allocations.
> This Jira continues the discussion in HBASE-13408.
> Design documents, evaluation results and previous patches can be found in 
> HBASE-13408. 





[jira] [Commented] (HBASE-14963) Remove Guava dependency from HBase client code

2015-12-10 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15051980#comment-15051980
 ] 

Sean Busbey commented on HBASE-14963:
-

Isn't this why we have a shaded client already?

> Remove Guava dependency from HBase client code
> --
>
> Key: HBASE-14963
> URL: https://issues.apache.org/jira/browse/HBASE-14963
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Attachments: no-stopwatch.txt
>
>
> We ran into an issue where an application bundled its own Guava (and that 
> happened to be in the classpath first) and HBase's MetaTableLocator threw an 
> exception because Stopwatch's constructor wasn't compatible... 
> It might be better to not depend on Stopwatch at all in MetaTableLocator, 
> since the functionality is easily doable without it.
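The Stopwatch usage at issue is just elapsed-time tracking, which a standard library covers directly. A hedged sketch of the replacement idea (illustrative only, not the actual MetaTableLocator code):

```python
import time

class MonotonicTimer:
    """Minimal stand-in for a stopwatch dependency (illustrative only)."""
    def __init__(self):
        # Monotonic clock, so wall-clock adjustments can't skew the reading.
        self._start = time.monotonic()

    def elapsed_ms(self):
        return (time.monotonic() - self._start) * 1000.0
```

Dropping the third-party class removes the risk of a classpath-first, incompatible version breaking the locator's wait loop.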





[jira] [Updated] (HBASE-14953) HBaseInterClusterReplicationEndpoint: Do not retry the whole batch of edits in case of RejectedExecutionException

2015-12-10 Thread Ashu Pachauri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashu Pachauri updated HBASE-14953:
--
Status: Patch Available  (was: Open)

> HBaseInterClusterReplicationEndpoint: Do not retry the whole batch of edits 
> in case of RejectedExecutionException
> -
>
> Key: HBASE-14953
> URL: https://issues.apache.org/jira/browse/HBASE-14953
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
>Priority: Critical
> Attachments: HBASE-14953-V1.patch, HBASE-14953-V2.patch
>
>
> When the WAL provider is set to multiwal, the ReplicationSource has multiple 
> worker threads submitting batches to HBaseInterClusterReplicationEndpoint. In 
> such a scenario, it is quite common to encounter RejectedExecutionException, 
> because shipping edits to the peer cluster takes quite long compared to 
> reading edits from the source and submitting more batches to the endpoint. 
> The logs are just filled with warnings due to this very exception.
> Since we subdivide batches before actually shipping them, we don't need to 
> fail and resend the whole batch if one of the sub-batches fails with 
> RejectedExecutionException. Rather, we should just retry the failed 
> sub-batches. 
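The proposal can be sketched as follows (hypothetical names; the real endpoint submits sub-batches to an executor): only the sub-batches that were rejected are retried, while successful ones are left alone.

```python
def ship_sub_batches(sub_batches, send):
    """Retry only failed sub-batches; `send` returns True on success.

    Illustrative sketch: in the real endpoint, `send` would be a
    submission to the replication executor that may be rejected.
    """
    pending = list(sub_batches)
    while pending:
        # Keep whatever failed and try again; successes drop out.
        pending = [b for b in pending if not send(b)]
```

Compared to failing the whole batch, this avoids re-shipping edits that already went through.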





[jira] [Commented] (HBASE-14949) Skip duplicate entries when replay WAL.

2015-12-10 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15051961#comment-15051961
 ] 

stack commented on HBASE-14949:
---

SequenceId has a region scope. If you play an edit into a Region twice, it is 
fine.

If you want to skip edits already replayed, you could do something like the 
mechanism we have where the master skips all edits that are less than the 
highest sequenceid that has been saved to an hfile (for that region). 
Regionservers report to the master the highest flushed sequenceid per region on 
their heartbeat. If the master crashes, it loses this Map and so will replay 
edits that the Region has already seen, but no harm done, just resources 
consumed.

To skip replaying edits already seen, you might keep a running low-water mark 
per region in the Master memory of the last edit sequenceid shipped to a 
region. If the Master crashes, we'd lose this Map and we'd replay edits more 
than once, but no harm done, just resources consumed.

Looking at patch...

We need RecoveryFileContext? Doesn't Reader have most of this in it?

What about the case where both files have the same first edit in them? I.e., 
we open a file and try to write ten edits to it ... sequenceid 0, 1, 2...10 ... 
and we fail, so we open a new WAL and try to play the same ten. Replaying, both 
files will have a sequenceid of 0 as the first entry?
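The skip mechanism described above can be sketched like this (illustrative data shapes, not actual HBase types): drop any edit whose sequence id is at or below the highest sequence id already flushed for its region.

```python
def filter_already_flushed(entries, highest_flushed):
    """entries: (region, seq_id, edit) tuples; highest_flushed: region -> seq id.

    Losing the highest_flushed map (e.g. on a master crash) only means
    some edits get replayed again, which is safe, just wasted work.
    """
    return [(region, seq_id, edit)
            for region, seq_id, edit in entries
            if seq_id > highest_flushed.get(region, -1)]
```
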



> Skip duplicate entries when replay WAL.
> ---
>
> Key: HBASE-14949
> URL: https://issues.apache.org/jira/browse/HBASE-14949
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Heng Chen
> Attachments: HBASE-14949.patch, HBASE-14949_v1.patch
>
>
> Per the HBASE-14004 design, there will be duplicate entries in different 
> WALs. This happens when one hflush fails: we close the old WAL at the 'acked 
> hflushed' length, then open a new WAL and write the unacked hflushed entries 
> into it.
> So there may be some overlap between the old WAL and the new WAL.
> We should skip the duplicate entries when replaying. I think it does no harm 
> to the current logic; maybe we should do it first. 





[jira] [Commented] (HBASE-14946) Don't allow multi's to over run the max result size.

2015-12-10 Thread Gary Helmling (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15051938#comment-15051938
 ] 

Gary Helmling commented on HBASE-14946:
---

Left a review over in phabricator.  +1 on the latest patch, just a few nits to 
fix up on commit.

> Don't allow multi's to over run the max result size.
> 
>
> Key: HBASE-14946
> URL: https://issues.apache.org/jira/browse/HBASE-14946
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>Priority: Critical
> Attachments: HBASE-14946-v1.patch, HBASE-14946-v10.patch, 
> HBASE-14946-v2.patch, HBASE-14946-v3.patch, HBASE-14946-v5.patch, 
> HBASE-14946-v6.patch, HBASE-14946-v7.patch, HBASE-14946-v8.patch, 
> HBASE-14946-v9.patch, HBASE-14946.patch
>
>
> If a user puts a list of tons of different gets into a table, we then 
> send them along in a multi. The server unwraps each get in the multi. While 
> no single get may be over the size limit, the total might be.
> We should protect the server from this. 
> We should batch up on the server side so each RPC is smaller.





[jira] [Commented] (HBASE-14962) TestSplitWalDataLoss fails on all branches

2015-12-10 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15051939#comment-15051939
 ] 

Elliott Clark commented on HBASE-14962:
---

420baa42cebb331771806964deedc5fe8e5313dd is good. WOW, that's far back.

> TestSplitWalDataLoss fails on all branches
> --
>
> Key: HBASE-14962
> URL: https://issues.apache.org/jira/browse/HBASE-14962
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>
> With some regularity I am seeing: 
> {code}
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: TestSplitWalDataLoss:dataloss: 1 time, 
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:228)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1800(AsyncProcess.java:208)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1712)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:240)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:190)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1430)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1021)
>   at 
> org.apache.hadoop.hbase.regionserver.TestSplitWalDataLoss.test(TestSplitWalDataLoss.java:121)
> {code}





[jira] [Commented] (HBASE-14795) Enhance the spark-hbase scan operations

2015-12-10 Thread Ted Malaska (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15051936#comment-15051936
 ] 

Ted Malaska commented on HBASE-14795:
-

That was a cool addition.  I like the wrapping of the function to catch the 
exceptions.  

I'm +1. Also, next week I'm going to run this on a 10-billion-plus-record 
dataset just to see it in action.

Since I'm not a committer I don't know if my +1 means much, but you have it.

Thanks Zhan

> Enhance the spark-hbase scan operations
> ---
>
> Key: HBASE-14795
> URL: https://issues.apache.org/jira/browse/HBASE-14795
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Malaska
>Assignee: Zhan Zhang
>Priority: Minor
> Attachments: 
> 0001-HBASE-14795-Enhance-the-spark-hbase-scan-operations.patch, 
> HBASE-14795-1.patch, HBASE-14795-2.patch, HBASE-14795-3.patch, 
> HBASE-14795-4.patch
>
>
> This is a sub-jira of HBASE-14789.  This jira focuses on replacing 
> TableInputFormat with a more custom scan implementation that will make the 
> following use case more effective.
> Use case:
> When you have multiple scan ranges on a single table within a single query, 
> TableInputFormat will scan the outer range from the scan start key to the end 
> key, where this implementation can be more pointed.





[jira] [Updated] (HBASE-14906) Improvements on FlushLargeStoresPolicy

2015-12-10 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14906:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to master. Thank you for the patch [~carp84]. That's a nice release 
note. I was tempted to pull it back to 1.2, since that is the first release 
with per-column-family flushing enabled by default, but it is a bit late in the 
game for 1.2 at this stage. Nice patch.

> Improvements on FlushLargeStoresPolicy
> --
>
> Key: HBASE-14906
> URL: https://issues.apache.org/jira/browse/HBASE-14906
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0
>
> Attachments: HBASE-14906.patch, HBASE-14906.v2.patch, 
> HBASE-14906.v3.patch, HBASE-14906.v4.patch, HBASE-14906.v4.patch
>
>
> When checking FlushLargeStoresPolicy, I found the possible improvements below:
> 1. Currently in selectStoresToFlush, we do the selection no matter how many 
> families there actually are, which is not necessary for a single family.
> 2. The default value for hbase.hregion.percolumnfamilyflush.size.lower.bound 
> cannot fit all cases, and requires the user to know details of the 
> implementation to set it properly. We propose to use 
> "hbase.hregion.memstore.flush.size/column_family_number" instead:
> {noformat}
> <property>
>   <name>hbase.hregion.percolumnfamilyflush.size.lower.bound</name>
>   <value>16777216</value>
>   <description>
>     If FlushLargeStoresPolicy is used and there are multiple column families,
>     then every time that we hit the total memstore limit, we find out all the
>     column families whose memstores exceed a "lower bound" and only flush them
>     while retaining the others in memory. The "lower bound" will be
>     "hbase.hregion.memstore.flush.size / column_family_number" by default
>     unless the value of this property is larger than that. If none of the
>     families have their memstore size above the lower bound, all the memstores
>     will be flushed (just as usual).
>   </description>
> </property>
> {noformat}





[jira] [Commented] (HBASE-14962) TestSplitWalDataLoss fails on all branches

2015-12-10 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15051925#comment-15051925
 ] 

stack commented on HBASE-14962:
---

Or narrowing in on a commit between mid-September and mid-October?

> TestSplitWalDataLoss fails on all branches
> --
>
> Key: HBASE-14962
> URL: https://issues.apache.org/jira/browse/HBASE-14962
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>
> With some regularity I am seeing: 
> {code}
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: TestSplitWalDataLoss:dataloss: 1 time, 
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:228)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1800(AsyncProcess.java:208)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1712)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:240)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:190)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1430)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1021)
>   at 
> org.apache.hadoop.hbase.regionserver.TestSplitWalDataLoss.test(TestSplitWalDataLoss.java:121)
> {code}





[jira] [Commented] (HBASE-14962) TestSplitWalDataLoss fails on all branches

2015-12-10 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15051924#comment-15051924
 ] 

stack commented on HBASE-14962:
---

You stepping back then [~eclark]?

> TestSplitWalDataLoss fails on all branches
> --
>
> Key: HBASE-14962
> URL: https://issues.apache.org/jira/browse/HBASE-14962
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>
> With some regularity I am seeing: 
> {code}
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: TestSplitWalDataLoss:dataloss: 1 time, 
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:228)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1800(AsyncProcess.java:208)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1712)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:240)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:190)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1430)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1021)
>   at 
> org.apache.hadoop.hbase.regionserver.TestSplitWalDataLoss.test(TestSplitWalDataLoss.java:121)
> {code}





[jira] [Commented] (HBASE-14963) Remove Guava dependency from HBase client code

2015-12-10 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15051923#comment-15051923
 ] 

stack commented on HBASE-14963:
---

+1 on this patch in the meantime.

> Remove Guava dependency from HBase client code
> --
>
> Key: HBASE-14963
> URL: https://issues.apache.org/jira/browse/HBASE-14963
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Attachments: no-stopwatch.txt
>
>
> We ran into an issue where an application bundled its own Guava (and that 
> happened to be in the classpath first) and HBase's MetaTableLocator threw an 
> exception because Stopwatch's constructor wasn't compatible... 
> It might be better to not depend on Stopwatch at all in MetaTableLocator, 
> since the functionality is easily doable without it.





[jira] [Commented] (HBASE-14963) Remove Guava dependency from HBase client code

2015-12-10 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15051880#comment-15051880
 ] 

Devaraj Das commented on HBASE-14963:
-

Yes [~stack], that would work for sure. For now, we saw the issue with the 
Stopwatch class only, hence the patch only handles that. But yeah, I agree that 
shading is a better approach overall.

> Remove Guava dependency from HBase client code
> --
>
> Key: HBASE-14963
> URL: https://issues.apache.org/jira/browse/HBASE-14963
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Attachments: no-stopwatch.txt
>
>
> We ran into an issue where an application bundled its own Guava (and that 
> happened to be in the classpath first) and HBase's MetaTableLocator threw an 
> exception because Stopwatch's constructor wasn't compatible... 
> It might be better to not depend on Stopwatch at all in MetaTableLocator, 
> since the functionality is easily doable without it.





[jira] [Commented] (HBASE-14962) TestSplitWalDataLoss fails on all branches

2015-12-10 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15051874#comment-15051874
 ] 

Elliott Clark commented on HBASE-14962:
---

2040d8dfc2a6c3910336a903b21a122ac025352e is bad too.
1f5663a7f560a222981db2345e2035f749f62f9b is also bad.

> TestSplitWalDataLoss fails on all branches
> --
>
> Key: HBASE-14962
> URL: https://issues.apache.org/jira/browse/HBASE-14962
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>
> With some regularity I am seeing: 
> {code}
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: TestSplitWalDataLoss:dataloss: 1 time, 
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:228)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1800(AsyncProcess.java:208)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1712)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:240)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:190)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1430)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1021)
>   at 
> org.apache.hadoop.hbase.regionserver.TestSplitWalDataLoss.test(TestSplitWalDataLoss.java:121)
> {code}





[jira] [Commented] (HBASE-14941) locate_region shell command

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15051868#comment-15051868
 ] 

Hudson commented on HBASE-14941:


SUCCESS: Integrated in HBase-1.2-IT #333 (See 
[https://builds.apache.org/job/HBase-1.2-IT/333/])
HBASE-14941 locate_region shell command (matteo.bertozzi: rev 
512144ed26c45d96638be78da8bde402940c6224)
* hbase-shell/src/main/ruby/shell/commands/locate_region.rb
* hbase-shell/src/main/ruby/shell.rb
* hbase-shell/src/main/ruby/hbase/admin.rb


> locate_region shell command
> ---
>
> Key: HBASE-14941
> URL: https://issues.apache.org/jira/browse/HBASE-14941
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Trivial
> Fix For: 2.0.0, 1.2.0
>
> Attachments: HBASE-14941-v1_branch-1.patch, 
> HBASE-14941-v2_branch-1.patch, HBASE-14941_branch-1.patch
>
>
> Sometimes it is helpful to get the region location given a specified key, 
> without having to scan meta and look at the keys.
> So, having something like this in the shell:
> {noformat}
> hbase(main):008:0> locate_region 'testtb', 'z'
> HOST REGION   
> 
>  localhost:42006 {ENCODED => 7486fee0129f0e3a3e671fec4a4255d5, 
>   NAME => 
> 'testtb,m,1449508841130.7486fee0129f0e3a3e671fec4a4255d5.',
>   STARTKEY => 'm', ENDKEY => ''}  
> 1 row(s) in 0.0090 seconds
> {noformat}





[jira] [Commented] (HBASE-14953) HBaseInterClusterReplicationEndpoint: Do not retry the whole batch of edits in case of RejectedExecutionException

2015-12-10 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15051869#comment-15051869
 ] 

Elliott Clark commented on HBASE-14953:
---

Yeah I must have never tried with coreThreads = maxThreads. lgtm.

> HBaseInterClusterReplicationEndpoint: Do not retry the whole batch of edits 
> in case of RejectedExecutionException
> -
>
> Key: HBASE-14953
> URL: https://issues.apache.org/jira/browse/HBASE-14953
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
>Priority: Critical
> Attachments: HBASE-14953-V1.patch, HBASE-14953-V2.patch
>
>
> When we have wal provider set to multiwal, the ReplicationSource has multiple 
> worker threads submitting batches to HBaseInterClusterReplicationEndpoint. In 
> such a scenario, it is quite common to encounter RejectedExecutionException 
> because shipping edits to the peer cluster takes much longer than reading 
> edits from the source and submitting more batches to the endpoint. 
> The logs end up filled with warnings due to this very exception.
> Since we subdivide batches before actually shipping them, we don't need to 
> fail and resend the whole batch if one of the sub-batches fails with 
> RejectedExecutionException. Rather, we should just retry the failed 
> sub-batches. 
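The retry-only-the-failed-sub-batches idea can be sketched with plain JDK executors. This is a minimal illustration of the pattern, not the actual HBaseInterClusterReplicationEndpoint code; the `ship` helper and the integer "edits" are hypothetical stand-ins for shipping a sub-batch of WAL edits to the peer.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.RejectedExecutionException;

public class SubBatchRetry {
    // Hypothetical stand-in for shipping one sub-batch of edits to the peer;
    // here it just reports how many edits it "shipped".
    static Callable<Integer> ship(List<Integer> subBatch) {
        return subBatch::size;
    }

    /**
     * Submit each sub-batch to the pool; if the pool rejects one, set it
     * aside and retry only the rejected sub-batches instead of failing and
     * resending the whole batch.
     */
    static int shipAll(ExecutorService pool, List<List<Integer>> subBatches)
            throws Exception {
        int shipped = 0;
        List<List<Integer>> pending = new ArrayList<>(subBatches);
        while (!pending.isEmpty()) {
            List<Future<Integer>> futures = new ArrayList<>();
            List<List<Integer>> rejected = new ArrayList<>();
            for (List<Integer> sb : pending) {
                try {
                    futures.add(pool.submit(ship(sb)));
                } catch (RejectedExecutionException ree) {
                    rejected.add(sb); // retry just this sub-batch later
                }
            }
            for (Future<Integer> f : futures) {
                shipped += f.get();
            }
            pending = rejected;
        }
        return shipped;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        List<List<Integer>> batches = new ArrayList<>();
        for (int i = 0; i < 5; i++) {
            batches.add(List.of(i, i + 1)); // 5 sub-batches of 2 "edits"
        }
        System.out.println(shipAll(pool, batches)); // prints: 10
        pool.shutdown();
    }
}
```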





[jira] [Updated] (HBASE-14964) Backport HBASE-14901 to brach-1 - There is duplicated code to create/manage encryption keys

2015-12-10 Thread Nate Edel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nate Edel updated HBASE-14964:
--
Status: Patch Available  (was: Open)

> Backport HBASE-14901 to brach-1 - There is duplicated code to create/manage 
> encryption keys
> ---
>
> Key: HBASE-14964
> URL: https://issues.apache.org/jira/browse/HBASE-14964
> Project: HBase
>  Issue Type: Improvement
>  Components: encryption
>Reporter: Nate Edel
>Assignee: Nate Edel
>Priority: Minor
> Fix For: 1.2.0, 1.3.0
>
> Attachments: HBASE-14964.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> There is duplicated code from MobUtils.createEncryptionContext in HStore, and 
> there is a subset of that code in HFileReaderImpl.
> Refactored key selection 
> Moved both to EncryptionUtil.java
> Can't figure out how to write a unit test for this, but there's no new code, 
> just refactoring.
> A lot of the Mob stuff hasn't been backported, so this is a very small patch.





[jira] [Updated] (HBASE-14964) Backport HBASE-14901 to brach-1 - There is duplicated code to create/manage encryption keys

2015-12-10 Thread Nate Edel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nate Edel updated HBASE-14964:
--
Affects Version/s: (was: 1.3.0)
Fix Version/s: 1.2.0
   Issue Type: Improvement  (was: Bug)

> Backport HBASE-14901 to brach-1 - There is duplicated code to create/manage 
> encryption keys
> ---
>
> Key: HBASE-14964
> URL: https://issues.apache.org/jira/browse/HBASE-14964
> Project: HBase
>  Issue Type: Improvement
>  Components: encryption
>Reporter: Nate Edel
>Assignee: Nate Edel
>Priority: Minor
> Fix For: 1.2.0, 1.3.0
>
> Attachments: HBASE-14964.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> There is duplicated code from MobUtils.createEncryptionContext in HStore, and 
> there is a subset of that code in HFileReaderImpl.
> Refactored key selection 
> Moved both to EncryptionUtil.java
> Can't figure out how to write a unit test for this, but there's no new code, 
> just refactoring.
> A lot of the Mob stuff hasn't been backported, so this is a very small patch.





[jira] [Updated] (HBASE-14964) Backport HBASE-14901 to brach-1 - There is duplicated code to create/manage encryption keys

2015-12-10 Thread Nate Edel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nate Edel updated HBASE-14964:
--
Attachment: HBASE-14964.patch

> Backport HBASE-14901 to brach-1 - There is duplicated code to create/manage 
> encryption keys
> ---
>
> Key: HBASE-14964
> URL: https://issues.apache.org/jira/browse/HBASE-14964
> Project: HBase
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 1.3.0
>Reporter: Nate Edel
>Assignee: Nate Edel
>Priority: Minor
> Fix For: 1.3.0
>
> Attachments: HBASE-14964.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> There is duplicated code from MobUtils.createEncryptionContext in HStore, and 
> there is a subset of that code in HFileReaderImpl.
> Refactored key selection 
> Moved both to EncryptionUtil.java
> Can't figure out how to write a unit test for this, but there's no new code, 
> just refactoring.
> A lot of the Mob stuff hasn't been backported, so this is a very small patch.





[jira] [Updated] (HBASE-14964) Backport HBASE-14901 to brach-1 - There is duplicated code to create/manage encryption keys

2015-12-10 Thread Nate Edel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nate Edel updated HBASE-14964:
--
Affects Version/s: (was: 2.0.0)
   1.3.0
Fix Version/s: (was: 2.0.0)
   1.3.0

> Backport HBASE-14901 to brach-1 - There is duplicated code to create/manage 
> encryption keys
> ---
>
> Key: HBASE-14964
> URL: https://issues.apache.org/jira/browse/HBASE-14964
> Project: HBase
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 1.3.0
>Reporter: Nate Edel
>Assignee: Nate Edel
>Priority: Minor
> Fix For: 1.3.0
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> There is duplicated code from MobUtils.createEncryptionContext in HStore, and 
> there is a subset of that code in HFileReaderImpl.
> Refactored key selection 
> Moved both to EncryptionUtil.java
> Can't figure out how to write a unit test for this, but there's no new code, 
> just refactoring.
> A lot of the Mob stuff hasn't been backported, so this is a very small patch.





[jira] [Commented] (HBASE-14963) Remove Guava dependency from HBase client code

2015-12-10 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15051862#comment-15051862
 ] 

stack commented on HBASE-14963:
---

How about shading (and upgrading) Guava? Would that work? Guava has loads of 
goodies in it, but it's problematic if bare on our classpath, given that it evolves 
so quickly and everyone is stuck on a different, incompatible version.
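Shading here would mean rewriting Guava's classes into an HBase-private package at build time so an application's own Guava copy cannot collide. A hypothetical maven-shade-plugin relocation sketch (the target package name is illustrative, not HBase's actual choice):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <!-- Rewrite Guava classes into a private namespace so the
             application's own Guava version cannot conflict. -->
        <pattern>com.google.common</pattern>
        <shadedPattern>org.apache.hadoop.hbase.shaded.com.google.common</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```

With a relocation like this, HBase's internal Guava calls are rewritten to the shaded package, while the application's classpath Guava is left untouched.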

> Remove Guava dependency from HBase client code
> --
>
> Key: HBASE-14963
> URL: https://issues.apache.org/jira/browse/HBASE-14963
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Attachments: no-stopwatch.txt
>
>
> We ran into an issue where an application bundled its own Guava (and that 
> happened to be in the classpath first) and HBase's MetaTableLocator threw an 
> exception due to the fact that Stopwatch's constructor wasn't compatible... 
> Might be better to not depend on Stopwatch at all in MetaTableLocator since 
> the functionality is easily doable without.
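The Stopwatch usage is indeed easy to do without: recording a `System.nanoTime()` base gives the same elapsed-time measurement with no external dependency. A minimal sketch of the pattern (illustrative, not the actual MetaTableLocator code):

```java
import java.util.concurrent.TimeUnit;

public class ElapsedTimer {
    // Equivalent of reading a started Guava Stopwatch: elapsed time is just
    // the monotonic nanoTime delta from a recorded base.
    static long elapsedMs(long startNs) {
        return TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startNs);
    }

    public static void main(String[] args) throws InterruptedException {
        long startNs = System.nanoTime(); // "start the stopwatch"
        Thread.sleep(50);                 // stand-in for waiting on meta
        System.out.println(elapsedMs(startNs) >= 40); // prints: true
    }
}
```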





[jira] [Updated] (HBASE-14964) Backport HBASE-14901 to brach-1 - There is duplicated code to create/manage encryption keys

2015-12-10 Thread Nate Edel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nate Edel updated HBASE-14964:
--
Description: 
There is duplicated code from MobUtils.createEncryptionContext in HStore, and 
there is a subset of that code in HFileReaderImpl.

Refactored key selection 
Moved both to EncryptionUtil.java
Can't figure out how to write a unit test for this, but there's no new code, 
just refactoring.

A lot of the Mob stuff hasn't been backported, so this is a very small patch.

  was:
There is duplicated code from MobUtils.createEncryptionContext in HStore, and 
there is a subset of that code in HFileReaderImpl.

Refactored key selection 
Moved both to EncryptionUtil.java
Can't figure out how to write a unit test for this, but there's no new code, 
just refactoring.


> Backport HBASE-14901 to brach-1 - There is duplicated code to create/manage 
> encryption keys
> ---
>
> Key: HBASE-14964
> URL: https://issues.apache.org/jira/browse/HBASE-14964
> Project: HBase
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.0.0
>Reporter: Nate Edel
>Assignee: Nate Edel
>Priority: Minor
> Fix For: 2.0.0
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> There is duplicated code from MobUtils.createEncryptionContext in HStore, and 
> there is a subset of that code in HFileReaderImpl.
> Refactored key selection 
> Moved both to EncryptionUtil.java
> Can't figure out how to write a unit test for this, but there's no new code, 
> just refactoring.
> A lot of the Mob stuff hasn't been backported, so this is a very small patch.





[jira] [Created] (HBASE-14964) Backport HBASE-14901 to brach-1 - There is duplicated code to create/manage encryption keys

2015-12-10 Thread Nate Edel (JIRA)
Nate Edel created HBASE-14964:
-

 Summary: Backport HBASE-14901 to brach-1 - There is duplicated 
code to create/manage encryption keys
 Key: HBASE-14964
 URL: https://issues.apache.org/jira/browse/HBASE-14964
 Project: HBase
  Issue Type: Bug
  Components: encryption
Affects Versions: 2.0.0
Reporter: Nate Edel
Assignee: Nate Edel
Priority: Minor
 Fix For: 2.0.0


There is duplicated code from MobUtils.createEncryptionContext in HStore, and 
there is a subset of that code in HFileReaderImpl.

Refactored key selection 
Moved both to EncryptionUtil.java
Can't figure out how to write a unit test for this, but there's no new code, 
just refactoring.





[jira] [Commented] (HBASE-14953) HBaseInterClusterReplicationEndpoint: Do not retry the whole batch of edits in case of RejectedExecutionException

2015-12-10 Thread Ashu Pachauri (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15051858#comment-15051858
 ] 

Ashu Pachauri commented on HBASE-14953:
---

The core threads do come back when new tasks arrive, even if they had terminated 
after a timeout. It's just that they will be replaced with new threads. Since 
we don't maintain any thread-local state here, I guess that should be okay.
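The behavior under discussion can be seen with a plain `ThreadPoolExecutor`. This is an illustrative stdlib sketch, not the replication endpoint's actual pool: with `allowCoreThreadTimeOut(true)` and `coreThreads == maxThreads`, an idle core thread dies after the keep-alive, but a later task simply causes a fresh replacement thread to be created.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CoreTimeoutDemo {
    /**
     * Returns true if, with allowCoreThreadTimeOut on, the idle core
     * thread expires after the keep-alive and a later task is still
     * served by a freshly created replacement thread.
     */
    static boolean demo() throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2,                        // coreThreads == maxThreads
                100, TimeUnit.MILLISECONDS,  // short keep-alive for the demo
                new LinkedBlockingQueue<>());
        pool.allowCoreThreadTimeOut(true);

        pool.submit(() -> { }).get();        // spin up a worker, let it idle
        Thread.sleep(500);                   // wait past the keep-alive
        boolean expired = pool.getPoolSize() == 0;

        // A new task still runs: the pool creates a replacement thread.
        boolean reran = "ran".equals(pool.submit(() -> "ran").get());
        pool.shutdown();
        return expired && reran;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```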

> HBaseInterClusterReplicationEndpoint: Do not retry the whole batch of edits 
> in case of RejectedExecutionException
> -
>
> Key: HBASE-14953
> URL: https://issues.apache.org/jira/browse/HBASE-14953
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
>Priority: Critical
> Attachments: HBASE-14953-V1.patch, HBASE-14953-V2.patch
>
>
> When we have wal provider set to multiwal, the ReplicationSource has multiple 
> worker threads submitting batches to HBaseInterClusterReplicationEndpoint. In 
> such a scenario, it is quite common to encounter RejectedExecutionException 
> because shipping edits to the peer cluster takes much longer than reading 
> edits from the source and submitting more batches to the endpoint. 
> The logs end up filled with warnings due to this very exception.
> Since we subdivide batches before actually shipping them, we don't need to 
> fail and resend the whole batch if one of the sub-batches fails with 
> RejectedExecutionException. Rather, we should just retry the failed 
> sub-batches. 





[jira] [Commented] (HBASE-14795) Enhance the spark-hbase scan operations

2015-12-10 Thread Zhan Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15051856#comment-15051856
 ] 

Zhan Zhang commented on HBASE-14795:


[~ted.m] Thanks for reviewing this. I have updated the reviewboard with context 
completion hook.

> Enhance the spark-hbase scan operations
> ---
>
> Key: HBASE-14795
> URL: https://issues.apache.org/jira/browse/HBASE-14795
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Malaska
>Assignee: Zhan Zhang
>Priority: Minor
> Attachments: 
> 0001-HBASE-14795-Enhance-the-spark-hbase-scan-operations.patch, 
> HBASE-14795-1.patch, HBASE-14795-2.patch, HBASE-14795-3.patch, 
> HBASE-14795-4.patch
>
>
> This is a sub-jira of HBASE-14789.  This jira is to focus on the replacement 
> of TableInputFormat for a more custom scan implementation that will make the 
> following use case more effective.
> Use case:
> In the case where you have multiple scan ranges on a single table within a single 
> query, TableInputFormat will scan the outer range of the scan start and 
> end keys, whereas this implementation can be more targeted.





[jira] [Updated] (HBASE-14795) Enhance the spark-hbase scan operations

2015-12-10 Thread Zhan Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhan Zhang updated HBASE-14795:
---
Attachment: HBASE-14795-4.patch

> Enhance the spark-hbase scan operations
> ---
>
> Key: HBASE-14795
> URL: https://issues.apache.org/jira/browse/HBASE-14795
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Malaska
>Assignee: Zhan Zhang
>Priority: Minor
> Attachments: 
> 0001-HBASE-14795-Enhance-the-spark-hbase-scan-operations.patch, 
> HBASE-14795-1.patch, HBASE-14795-2.patch, HBASE-14795-3.patch, 
> HBASE-14795-4.patch
>
>
> This is a sub-jira of HBASE-14789.  This jira is to focus on the replacement 
> of TableInputFormat for a more custom scan implementation that will make the 
> following use case more effective.
> Use case:
> In the case where you have multiple scan ranges on a single table within a single 
> query, TableInputFormat will scan the outer range of the scan start and 
> end keys, whereas this implementation can be more targeted.





[jira] [Commented] (HBASE-14953) HBaseInterClusterReplicationEndpoint: Do not retry the whole batch of edits in case of RejectedExecutionException

2015-12-10 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15051851#comment-15051851
 ] 

Elliott Clark commented on HBASE-14953:
---

I don't think that we want to set allowCoreThreadTimeOut. If the threads go 
away they will never come back. 

> HBaseInterClusterReplicationEndpoint: Do not retry the whole batch of edits 
> in case of RejectedExecutionException
> -
>
> Key: HBASE-14953
> URL: https://issues.apache.org/jira/browse/HBASE-14953
> Project: HBase
>  Issue Type: Bug
>  Components: Replication
>Affects Versions: 2.0.0, 1.2.0, 1.3.0
>Reporter: Ashu Pachauri
>Assignee: Ashu Pachauri
>Priority: Critical
> Attachments: HBASE-14953-V1.patch, HBASE-14953-V2.patch
>
>
> When we have wal provider set to multiwal, the ReplicationSource has multiple 
> worker threads submitting batches to HBaseInterClusterReplicationEndpoint. In 
> such a scenario, it is quite common to encounter RejectedExecutionException 
> because shipping edits to the peer cluster takes much longer than reading 
> edits from the source and submitting more batches to the endpoint. 
> The logs end up filled with warnings due to this very exception.
> Since we subdivide batches before actually shipping them, we don't need to 
> fail and resend the whole batch if one of the sub-batches fails with 
> RejectedExecutionException. Rather, we should just retry the failed 
> sub-batches. 





[jira] [Updated] (HBASE-14963) Remove Guava dependency from HBase client code

2015-12-10 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HBASE-14963:

Attachment: no-stopwatch.txt

> Remove Guava dependency from HBase client code
> --
>
> Key: HBASE-14963
> URL: https://issues.apache.org/jira/browse/HBASE-14963
> Project: HBase
>  Issue Type: Improvement
>  Components: Client
>Reporter: Devaraj Das
>Assignee: Devaraj Das
> Attachments: no-stopwatch.txt
>
>
> We ran into an issue where an application bundled its own Guava (and that 
> happened to be in the classpath first) and HBase's MetaTableLocator threw an 
> exception due to the fact that Stopwatch's constructor wasn't compatible... 
> Might be better to not depend on Stopwatch at all in MetaTableLocator since 
> the functionality is easily doable without.





[jira] [Created] (HBASE-14963) Remove Guava dependency from HBase client code

2015-12-10 Thread Devaraj Das (JIRA)
Devaraj Das created HBASE-14963:
---

 Summary: Remove Guava dependency from HBase client code
 Key: HBASE-14963
 URL: https://issues.apache.org/jira/browse/HBASE-14963
 Project: HBase
  Issue Type: Improvement
  Components: Client
Reporter: Devaraj Das
Assignee: Devaraj Das


We ran into an issue where an application bundled its own Guava (and that 
happened to be in the classpath first) and HBase's MetaTableLocator threw an 
exception due to the fact that Stopwatch's constructor wasn't compatible... 
Might be better to not depend on Stopwatch at all in MetaTableLocator since the 
functionality is easily doable without.





[jira] [Commented] (HBASE-14962) TestSplitWalDataLoss fails on all branches

2015-12-10 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15051834#comment-15051834
 ] 

Elliott Clark commented on HBASE-14962:
---

Bisecting, running the test 20x in a row, I have seen this fail at 
b378b3459da34a64633fb41aeac157c006756267. This test seems to have been flaky 
for a while.

> TestSplitWalDataLoss fails on all branches
> --
>
> Key: HBASE-14962
> URL: https://issues.apache.org/jira/browse/HBASE-14962
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>
> With some regularity I am seeing: 
> {code}
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: TestSplitWalDataLoss:dataloss: 1 time, 
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:228)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1800(AsyncProcess.java:208)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1712)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:240)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:190)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1430)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1021)
>   at 
> org.apache.hadoop.hbase.regionserver.TestSplitWalDataLoss.test(TestSplitWalDataLoss.java:121)
> {code}





[jira] [Commented] (HBASE-14941) locate_region shell command

2015-12-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15051818#comment-15051818
 ] 

Hudson commented on HBASE-14941:


FAILURE: Integrated in HBase-1.3-IT #366 (See 
[https://builds.apache.org/job/HBase-1.3-IT/366/])
HBASE-14941 locate_region shell command (matteo.bertozzi: rev 
2d74dcfadcb216a19b7502590f93cc2b350a7546)
* hbase-shell/src/main/ruby/shell/commands/locate_region.rb
* hbase-shell/src/main/ruby/shell.rb
* hbase-shell/src/main/ruby/hbase/admin.rb


> locate_region shell command
> ---
>
> Key: HBASE-14941
> URL: https://issues.apache.org/jira/browse/HBASE-14941
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: Matteo Bertozzi
>Assignee: Matteo Bertozzi
>Priority: Trivial
> Fix For: 2.0.0, 1.2.0
>
> Attachments: HBASE-14941-v1_branch-1.patch, 
> HBASE-14941-v2_branch-1.patch, HBASE-14941_branch-1.patch
>
>
> Sometimes it is helpful to get the region location given a specified key, 
> without having to scan meta and look at the keys.
> So, having something like this in the shell:
> {noformat}
> hbase(main):008:0> locate_region 'testtb', 'z'
> HOST REGION   
> 
>  localhost:42006 {ENCODED => 7486fee0129f0e3a3e671fec4a4255d5, 
>   NAME => 
> 'testtb,m,1449508841130.7486fee0129f0e3a3e671fec4a4255d5.',
>   STARTKEY => 'm', ENDKEY => ''}  
> 1 row(s) in 0.0090 seconds
> {noformat}





[jira] [Updated] (HBASE-14769) Remove unused functions and duplicate javadocs from HBaseAdmin

2015-12-10 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-14769:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to master branch.

Thanks for the patch Appy and for the patience.

Please write a release note on what was changed, including a note on the gray area 
that was identified in review and that we allowed in.  Thanks boss.

BTW, what do you call the Appy who works on "H"Base or "H"adoop?  "H"appy!

> Remove unused functions and duplicate javadocs from HBaseAdmin 
> ---
>
> Key: HBASE-14769
> URL: https://issues.apache.org/jira/browse/HBASE-14769
> Project: HBase
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Fix For: 2.0.0
>
> Attachments: HBASE-14769-master-v2.patch, 
> HBASE-14769-master-v3.patch, HBASE-14769-master-v4.patch, 
> HBASE-14769-master-v5.patch, HBASE-14769-master-v6.patch, 
> HBASE-14769-master-v7.patch, HBASE-14769-master-v8.patch, 
> HBASE-14769-master-v9.patch, HBASE-14769-master.patch
>
>
> HBaseAdmin is marked private, so we are removing the functions that are not used 
> anywhere.
> Also, the javadocs of the overridden functions are the same as the corresponding 
> ones in Admin.java. Since javadocs are automatically inherited from the interface 
> class, we can remove these hundreds of redundant lines.
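Javadoc inheritance is what makes this removal safe: an overriding method with no doc comment automatically picks up the interface's documentation in the generated javadoc. A tiny illustrative example (the interface and implementation names here are made up, not the real Admin/HBaseAdmin classes):

```java
interface Admin2 {
    /** Returns true if the named table exists. */
    boolean tableExists(String name);
}

class Admin2Impl implements Admin2 {
    // No javadoc needed here: the generated docs inherit the interface's
    // comment, so duplicating it would only invite the two copies to drift.
    @Override
    public boolean tableExists(String name) {
        return !name.isEmpty();
    }
}

public class JavadocInheritDemo {
    public static void main(String[] args) {
        System.out.println(new Admin2Impl().tableExists("t")); // prints: true
    }
}
```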





[jira] [Updated] (HBASE-14901) There is duplicated code to create/manage encryption keys

2015-12-10 Thread Gary Helmling (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Helmling updated HBASE-14901:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to master.  Thanks for the patch, [~nkedel]!

> There is duplicated code to create/manage encryption keys
> -
>
> Key: HBASE-14901
> URL: https://issues.apache.org/jira/browse/HBASE-14901
> Project: HBase
>  Issue Type: Bug
>  Components: encryption
>Affects Versions: 2.0.0
>Reporter: Nate Edel
>Assignee: Nate Edel
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-14901.1.patch, HBASE-14901.2.patch, 
> HBASE-14901.3.patch, HBASE-14901.5.patch, HBASE-14901.6.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> There is duplicated code from MobUtils.createEncryptionContext in HStore, and 
> there is a subset of that code in HFileReaderImpl.
> Refactored key selection 
> Moved both to EncryptionUtil.java
> Can't figure out how to write a unit test for this, but there's no new code, 
> just refactoring.





[jira] [Commented] (HBASE-14962) TestSplitWalDataLoss fails on all branches

2015-12-10 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15051793#comment-15051793
 ] 

Elliott Clark commented on HBASE-14962:
---

I've seen a failure at c831a4522fd14f1012a6d4d0cd9b88d279704244 on branch-1.2, 
so this has been around a while, though it could have been around even longer; 
I just don't have the logs before that.

> TestSplitWalDataLoss fails on all branches
> --
>
> Key: HBASE-14962
> URL: https://issues.apache.org/jira/browse/HBASE-14962
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>
> With some regularity I am seeing: 
> {code}
> org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 
> action: TestSplitWalDataLoss:dataloss: 1 time, 
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:228)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1800(AsyncProcess.java:208)
>   at 
> org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1712)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:240)
>   at 
> org.apache.hadoop.hbase.client.BufferedMutatorImpl.flush(BufferedMutatorImpl.java:190)
>   at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:1430)
>   at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1021)
>   at 
> org.apache.hadoop.hbase.regionserver.TestSplitWalDataLoss.test(TestSplitWalDataLoss.java:121)
> {code}





[jira] [Updated] (HBASE-14865) Support passing multiple QOPs to SaslClient/Server via hbase.rpc.protection

2015-12-10 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-14865:
-
Status: Patch Available  (was: Open)

> Support passing multiple QOPs to SaslClient/Server via hbase.rpc.protection
> ---
>
> Key: HBASE-14865
> URL: https://issues.apache.org/jira/browse/HBASE-14865
> Project: HBase
>  Issue Type: Improvement
>  Components: security
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-14865-branch-1.2.patch, 
> HBASE-14865-branch-1.patch, HBASE-14865-branch-1.patch, 
> HBASE-14865-master-v2.patch, HBASE-14865-master-v3.patch, 
> HBASE-14865-master-v4.patch, HBASE-14865-master-v5.patch, 
> HBASE-14865-master-v6.patch, HBASE-14865-master-v7.patch, 
> HBASE-14865-master.patch
>
>
> Currently, we can set the value of hbase.rpc.protection to one of 
> authentication/integrity/privacy. It is then used to set 
> {{javax.security.sasl.qop}} in SaslUtil.java.
> The problem is, if a cluster wants to switch from one qop to another, it'll 
> have to take downtime. A rolling upgrade will create a situation where some 
> nodes have the old value and some have the new, which will prevent any 
> communication between them. There will be a similar issue when clients try to 
> connect.
> {{javax.security.sasl.qop}} can take a list of QOPs in preference order, 
> so a transition from qop1 to qop2 can easily be done like this:
> "qop1" --> "qop2,qop1" --> rolling restart --> "qop2" --> rolling restart
> We need to change hbase.rpc.protection to accept a list too.
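Accepting a list is largely a parsing change: map each hbase.rpc.protection token to its SASL QOP name and join them in preference order, since `javax.security.sasl.qop` itself accepts a comma-separated list. A minimal sketch of that mapping (illustrative, not the actual SaslUtil code; the method names are made up):

```java
import java.util.Arrays;
import java.util.stream.Collectors;

public class QopList {
    // Map one hbase.rpc.protection value to the standard SASL QOP token.
    static String toSaslQop(String protection) {
        switch (protection.trim().toLowerCase()) {
            case "authentication": return "auth";
            case "integrity":      return "auth-int";
            case "privacy":        return "auth-conf";
            default: throw new IllegalArgumentException(
                    "Unknown hbase.rpc.protection value: " + protection);
        }
    }

    // "privacy,authentication" -> "auth-conf,auth"; the order of the input
    // list is preserved, which is what expresses QOP preference to SASL.
    static String toSaslQopList(String protections) {
        return Arrays.stream(protections.split(","))
                .map(QopList::toSaslQop)
                .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        System.out.println(toSaslQopList("privacy,authentication")); // prints: auth-conf,auth
    }
}
```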





[jira] [Updated] (HBASE-14865) Support passing multiple QOPs to SaslClient/Server via hbase.rpc.protection

2015-12-10 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-14865:
-
Status: Open  (was: Patch Available)

> Support passing multiple QOPs to SaslClient/Server via hbase.rpc.protection
> ---
>
> Key: HBASE-14865
> URL: https://issues.apache.org/jira/browse/HBASE-14865
> Project: HBase
>  Issue Type: Improvement
>  Components: security
>Reporter: Appy
>Assignee: Appy
> Attachments: HBASE-14865-branch-1.2.patch, 
> HBASE-14865-branch-1.patch, HBASE-14865-branch-1.patch, 
> HBASE-14865-master-v2.patch, HBASE-14865-master-v3.patch, 
> HBASE-14865-master-v4.patch, HBASE-14865-master-v5.patch, 
> HBASE-14865-master-v6.patch, HBASE-14865-master-v7.patch, 
> HBASE-14865-master.patch
>
>
> Currently, we can set the value of hbase.rpc.protection to one of 
> authentication/integrity/privacy. It is then used to set 
> {{javax.security.sasl.qop}} in SaslUtil.java.
> The problem is, if a cluster wants to switch from one qop to another, it'll 
> have to take downtime. A rolling upgrade will create a situation where some 
> nodes have the old value and some have the new, which will prevent any 
> communication between them. There will be a similar issue when clients try to 
> connect.
> {{javax.security.sasl.qop}} can take a list of QOPs in preference order, 
> so a transition from qop1 to qop2 can easily be done like this:
> "qop1" --> "qop2,qop1" --> rolling restart --> "qop2" --> rolling restart
> We need to change hbase.rpc.protection to accept a list too.




