[jira] [Updated] (HBASE-14154) DFS Replication should be configurable at column family level

2015-07-29 Thread Ashish Singhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish Singhi updated HBASE-14154:
--
Attachment: HBASE-14154-v1.patch

> DFS Replication should be configurable at column family level
> -
>
> Key: HBASE-14154
> URL: https://issues.apache.org/jira/browse/HBASE-14154
> Project: HBase
>  Issue Type: New Feature
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Minor
> Fix For: 2.0.0, 0.98.14, 1.3.0
>
> Attachments: HBASE-14154-0.98.patch, HBASE-14154-branch-1.patch, 
> HBASE-14154-v1.patch, HBASE-14154.patch
>
>
> There are cases where a user wants control over the number of HFile 
> copies kept in the cluster.
> For example, for a test table a user may want only one copy instead of 
> the default of three.
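As a rough illustration of the fallback semantics this feature implies (the class and method names below are invented for the sketch, not the patch's actual API), a per-column-family override on top of the cluster-wide DFS default might look like:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model only: a per-column-family replication attribute that,
// when unset (or 0), falls back to the DFS-wide default of 3 copies.
public class FamilyReplication {
    static final short DFS_DEFAULT_REPLICATION = 3;
    private final Map<String, Short> perFamily = new HashMap<>();

    // Set an override for one family; 0 or absent means "use the DFS default".
    void setReplication(String family, short replication) {
        perFamily.put(family, replication);
    }

    short getReplication(String family) {
        Short r = perFamily.get(family);
        return (r == null || r == 0) ? DFS_DEFAULT_REPLICATION : r;
    }

    public static void main(String[] args) {
        FamilyReplication fr = new FamilyReplication();
        fr.setReplication("test_cf", (short) 1); // test data: one copy is enough
        System.out.println(fr.getReplication("test_cf"));  // 1
        System.out.println(fr.getReplication("prod_cf"));  // 3 (default)
    }
}
```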



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14098) Allow dropping caches behind compactions

2015-07-29 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14647263#comment-14647263
 ] 

ramkrishna.s.vasudevan commented on HBASE-14098:


bq.if (this.conf.getBoolean("hbase.regionserver.compaction.private.readers", 
true)) 
So with this change, compactions will always use new readers, so that the OS 
pages of those files are not cached, based on the new setting that enables 
dropBehind on compaction?
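Judging from the snippet quoted above, the key defaults to true via the getBoolean call, so an operator who wanted to opt out would presumably set it in hbase-site.xml (a sketch inferred from the quoted code, not verified against the patch):

```xml
<!-- hbase-site.xml: key name taken from the conf.getBoolean call quoted above;
     the default of true is inferred from its second argument -->
<property>
  <name>hbase.regionserver.compaction.private.readers</name>
  <value>true</value>
</property>
```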

> Allow dropping caches behind compactions
> 
>
> Key: HBASE-14098
> URL: https://issues.apache.org/jira/browse/HBASE-14098
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, hadoop2, HFile
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14098-v1.patch, HBASE-14098-v2.patch, 
> HBASE-14098-v3.patch, HBASE-14098-v4.patch, HBASE-14098-v5.patch, 
> HBASE-14098.patch
>
>






[jira] [Created] (HBASE-14170) [HBase Rest] RESTServer is not shutting down if "hbase.rest.port" Address already in use.

2015-07-29 Thread Y. SREENIVASULU REDDY (JIRA)
Y. SREENIVASULU REDDY created HBASE-14170:
-

 Summary: [HBase Rest] RESTServer is not shutting down if 
"hbase.rest.port" Address already in use.
 Key: HBASE-14170
 URL: https://issues.apache.org/jira/browse/HBASE-14170
 Project: HBase
  Issue Type: Bug
  Components: REST
Reporter: Y. SREENIVASULU REDDY
 Fix For: 2.0.0, 1.0.2, 1.2.0


[HBase Rest] RESTServer is not shutting down if the "hbase.rest.port" address 
is already in use.

 If the "hbase.rest.port" address is already in use, the RESTServer should shut down:

without this port we cannot perform any operations on the RESTServer, 
so there is no point in keeping the RESTServer process running.

{code}
2015-07-30 11:49:48,273 WARN  [main] mortbay.log: failed 
SelectChannelConnector@0.0.0.0:8080: java.net.BindException: Address already in 
use
2015-07-30 11:49:48,274 WARN  [main] mortbay.log: failed Server@563f38c4: 
java.net.BindException: Address already in use
{code}
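The fail-fast behavior the report asks for can be sketched in plain Java (this is an illustrative model using java.net.ServerSocket, not the RESTServer's actual Jetty setup): if the bind fails, signal the caller to shut down instead of leaving a useless process running.

```java
import java.io.IOException;
import java.net.BindException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

// Illustrative sketch: try to bind the configured port; on BindException,
// report it and tell the caller to shut the server down.
public class FailFastBind {
    // Returns true if the port was free and we bound it; false means the
    // caller should shut down (in a real server, exit non-zero).
    static boolean tryBind(int port) {
        try (ServerSocket s = new ServerSocket()) {
            s.bind(new InetSocketAddress(port));
            return true;
        } catch (BindException e) {
            System.err.println("Address already in use: " + port + "; shutting down");
            return false;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    // Demo: occupy an ephemeral port, then show that a second bind fails fast.
    static boolean demo() {
        try (ServerSocket holder = new ServerSocket(0)) {
            return !tryBind(holder.getLocalPort());
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo() ? "second bind rejected, would exit(1)" : "unexpected");
    }
}
```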





[jira] [Commented] (HBASE-14168) Avoid useless retry as exception implies in TableRecordReaderImpl

2015-07-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14647249#comment-14647249
 ] 

Hadoop QA commented on HBASE-14168:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12747917/HBASE-14168-001.patch
  against master branch at commit 5f1129c799e9c273dfd58a7fc87d5e654061607b.
  ATTACHMENT ID: 12747917

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.mapreduce.TestTableInputFormat
  org.apache.hadoop.hbase.mapred.TestTableInputFormat

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.camel.processor.async.AsyncEndpointCustomRoutePolicyTest.testAsyncEndpoint(AsyncEndpointCustomRoutePolicyTest.java:69)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14930//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14930//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14930//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14930//console

This message is automatically generated.

> Avoid useless retry as exception implies in TableRecordReaderImpl
> -
>
> Key: HBASE-14168
> URL: https://issues.apache.org/jira/browse/HBASE-14168
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: zhouyingchao
>Assignee: zhouyingchao
>Priority: Minor
> Fix For: 2.0.0, 1.2.0, 1.1.2, 1.3.0
>
> Attachments: HBASE-14168-001.patch
>
>
> In TableRecordReaderImpl, even if the scan's next() throws 
> DoNotRetryIOException, it is still retried. This makes no sense 
> and should be avoided.
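The fix described above amounts to a retry loop that rethrows immediately on a non-retriable exception. A simplified stand-alone model (DoNotRetry here stands in for HBase's DoNotRetryIOException; this is not the actual TableRecordReaderImpl code):

```java
import java.io.IOException;
import java.util.concurrent.Callable;

// Simplified model of the fix: a retry loop that rethrows immediately on a
// "do not retry" exception instead of burning the remaining attempts.
public class RetryingReader {
    static class DoNotRetry extends IOException {}

    static <T> T callWithRetries(Callable<T> op, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (DoNotRetry e) {
                throw e;               // fatal: retrying cannot help
            } catch (Exception e) {
                last = e;              // transient: try again
            }
        }
        throw last;
    }

    public static void main(String[] args) {
        int[] attempts = {0};
        try {
            callWithRetries(() -> { attempts[0]++; throw new DoNotRetry(); }, 3);
        } catch (Exception e) {
            // Without the DoNotRetry check, this would report 3 attempts.
            System.out.println("gave up after " + attempts[0] + " attempt(s)");
        }
    }
}
```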





[jira] [Updated] (HBASE-14169) API to refreshSuperUserGroupsConfiguration

2015-07-29 Thread Francis Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francis Liu updated HBASE-14169:

Status: Patch Available  (was: Open)

> API to refreshSuperUserGroupsConfiguration
> --
>
> Key: HBASE-14169
> URL: https://issues.apache.org/jira/browse/HBASE-14169
> Project: HBase
>  Issue Type: New Feature
>Reporter: Francis Liu
>Assignee: Francis Liu
> Attachments: HBASE-14169.patch
>
>
> For deployments that use security, user impersonation (a.k.a. doAs()) is needed 
> for some services (e.g. Stargate, the Thrift server, Oozie, etc.). Impersonation 
> definitions are defined in an XML config file and are read and cached by the 
> ProxyUsers class. Calling this API will refresh the cached information, 
> eliminating the need to restart the master/regionserver whenever the 
> configuration is changed. 
> The implementation just adds another method to AccessControlService.





[jira] [Updated] (HBASE-14169) API to refreshSuperUserGroupsConfiguration

2015-07-29 Thread Francis Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Francis Liu updated HBASE-14169:

Attachment: HBASE-14169.patch






[jira] [Created] (HBASE-14169) API to refreshSuperUserGroupsConfiguration

2015-07-29 Thread Francis Liu (JIRA)
Francis Liu created HBASE-14169:
---

 Summary: API to refreshSuperUserGroupsConfiguration
 Key: HBASE-14169
 URL: https://issues.apache.org/jira/browse/HBASE-14169
 Project: HBase
  Issue Type: New Feature
Reporter: Francis Liu
Assignee: Francis Liu


For deployments that use security, user impersonation (a.k.a. doAs()) is needed 
for some services (e.g. Stargate, the Thrift server, Oozie, etc.). Impersonation 
definitions are defined in an XML config file and are read and cached by the 
ProxyUsers class. Calling this API will refresh the cached information, eliminating 
the need to restart the master/regionserver whenever the configuration is 
changed. 

The implementation just adds another method to AccessControlService.
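The caching pattern described above can be sketched with a toy model (this is not the actual ProxyUsers class; the names are invented for illustration): rules read from config are cached, and only an explicit refresh, rather than a server restart, picks up changes.

```java
// Toy model of the caching pattern: impersonation rules are cached at
// startup and only change when refresh() re-reads the source, which is why
// an explicit refresh API avoids restarting the master/regionserver.
public class ProxyUserCache {
    private String allowedGroup = null;   // cached rule (toy: one service, one group)

    // Stand-in for re-reading the XML config file.
    void refresh(String groupFromConfigFile) {
        allowedGroup = groupFromConfigFile;
    }

    boolean canImpersonate(String group) {
        return group != null && group.equals(allowedGroup);
    }

    public static void main(String[] args) {
        ProxyUserCache cache = new ProxyUserCache();
        System.out.println(cache.canImpersonate("users")); // false: nothing cached yet
        cache.refresh("users");  // config changed on disk; refresh instead of restart
        System.out.println(cache.canImpersonate("users")); // true
    }
}
```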





[jira] [Commented] (HBASE-14155) StackOverflowError in reverse scan

2015-07-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14647207#comment-14647207
 ] 

Hudson commented on HBASE-14155:


SUCCESS: Integrated in HBase-1.3-IT #62 (See 
[https://builds.apache.org/job/HBase-1.3-IT/62/])
HBASE-14155 StackOverflowError in reverse scan (Ramkrishna S. Vasudevan and Ted 
Yu) (apurtell: rev 67f4a077b99eed338633c02badfd2d3eab907ef6)
* hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestSeekBeforeWithReverseScan.java


> StackOverflowError in reverse scan
> --
>
> Key: HBASE-14155
> URL: https://issues.apache.org/jira/browse/HBASE-14155
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Scanners
>Affects Versions: 1.1.0
>Reporter: James Taylor
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
>  Labels: Phoenix
> Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>
> Attachments: 14155-branch-1.txt, HBASE-14155.patch, 
> ReproReverseScanStackOverflow.java, 
> ReproReverseScanStackOverflowCoprocessor.java
>
>
> A stack overflow may occur when a reverse scan is done. To reproduce (on a 
> Mac), use the following steps:
> - Download the Phoenix 4.5.0 RC here: 
> https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.5.0-HBase-1.1-rc0/bin/
> - Copy the phoenix-4.5.0-HBase-1.1-server.jar into the HBase lib directory 
> (removing any earlier Phoenix version if there was one installed)
> - Stop and restart HBase
> - From the bin directory of the Phoenix binary distribution, start sqlline 
> like this: ./sqlline.py localhost
> - Create a new table and populate it like this:
> {code}
> create table desctest (k varchar primary key desc);
> upsert into desctest values ('a');
> upsert into desctest values ('ab');
> upsert into desctest values ('b');
> {code}
> - Note that the following query works fine at this point:
> {code}
> select * from desctest order by k;
> +--+
> |K |
> +--+
> | a|
> | ab   |
> | b|
> +--+
> {code}
> - Stop and start HBase
> - Rerun the above query again and you'll get  a StackOverflowError at 
> StoreFileScanner.seekToPreviousRow()
> {code}
> select * from desctest order by k;
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> DESCTEST,,1437847235264.a74d70e6a8b36e24d1ea1a70edb0cdf7.: null
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.nextRaw(BaseScannerRegionObserver.java:352)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:77)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2393)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2112)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.StackOverflowError
>   at 
> org.apache.hadoop.hbase.io.hfile.ChecksumUtil.numChunks(ChecksumUtil.java:201)
>   at 
> org.apache.hadoop.hbase.io.hfile.ChecksumUtil.numBytes(ChecksumUtil.java:189)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.totalChecksumBytes(HFileBlock.java:1826)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.getBufferReadOnly(HFileBlock.java:356)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getEncodedBuffer(HFileReaderV2.java:1211)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getFirstKeyInBlock(HFileReaderV2.java:1307)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:657)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:646)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreF

[jira] [Commented] (HBASE-14155) StackOverflowError in reverse scan

2015-07-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14647204#comment-14647204
 ] 

Hudson commented on HBASE-14155:


FAILURE: Integrated in HBase-1.3 #80 (See 
[https://builds.apache.org/job/HBase-1.3/80/])
HBASE-14155 StackOverflowError in reverse scan (Ramkrishna S. Vasudevan and Ted 
Yu) (apurtell: rev 67f4a077b99eed338633c02badfd2d3eab907ef6)
* hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestSeekBeforeWithReverseScan.java



[jira] [Commented] (HBASE-14144) Bloomfilter path to work with Byte buffered cells

2015-07-29 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14647199#comment-14647199
 ] 

ramkrishna.s.vasudevan commented on HBASE-14144:


@Ted
bq.I think FakeByteBufferedCell would be better classname.
I forgot to change this in my updated patch, sorry about that. I will do it in the 
next patch, if any, or on commit.
Any reviews here?

> Bloomfilter path to work with Byte buffered cells
> -
>
> Key: HBASE-14144
> URL: https://issues.apache.org/jira/browse/HBASE-14144
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver, Scanners
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0
>
> Attachments: HBASE-14144.patch, HBASE-14144_1.patch
>
>
> This JIRA is to check whether there is a need to make the bloom filters 
> work with ByteBuffer cells. During the POC this path created a lot of duplicated 
> code, but the other refactorings done in this path may lead to less 
> duplication. This JIRA is a placeholder.





[jira] [Commented] (HBASE-14154) DFS Replication should be configurable at column family level

2015-07-29 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14647196#comment-14647196
 ] 

Ashish Singhi commented on HBASE-14154:
---

Thanks Andrew. Makes sense to me.

> DFS Replication should be configurable at column family level
> -
>
> Key: HBASE-14154
> URL: https://issues.apache.org/jira/browse/HBASE-14154
> Project: HBase
>  Issue Type: New Feature
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Minor
> Fix For: 2.0.0, 0.98.14, 1.3.0
>
> Attachments: HBASE-14154-0.98.patch, HBASE-14154-branch-1.patch, 
> HBASE-14154.patch
>
>
> There are cases where a user wants control over the number of HFile 
> copies kept in the cluster.
> For example, for a test table a user may want only one copy instead of 
> the default of three.





[jira] [Commented] (HBASE-14155) StackOverflowError in reverse scan

2015-07-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14647193#comment-14647193
 ] 

Hudson commented on HBASE-14155:


SUCCESS: Integrated in HBase-1.2-IT #69 (See 
[https://builds.apache.org/job/HBase-1.2-IT/69/])
HBASE-14155 StackOverflowError in reverse scan (Ramkrishna S. Vasudevan and Ted 
Yu) (apurtell: rev 4f4bb55a4a83cdf25818edd8780e16ac876bd5a9)
* hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestSeekBeforeWithReverseScan.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java



[jira] [Commented] (HBASE-14155) StackOverflowError in reverse scan

2015-07-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14647184#comment-14647184
 ] 

Hudson commented on HBASE-14155:


FAILURE: Integrated in HBase-1.1 #593 (See 
[https://builds.apache.org/job/HBase-1.1/593/])
HBASE-14155 StackOverflowError in reverse scan (Ramkrishna S. Vasudevan and Ted 
Yu) (apurtell: rev 7735fc79bed918bbfb694bb85605169916cd7251)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestSeekBeforeWithReverseScan.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java



[jira] [Commented] (HBASE-14155) StackOverflowError in reverse scan

2015-07-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14647171#comment-14647171
 ] 

Hudson commented on HBASE-14155:


FAILURE: Integrated in HBase-TRUNK #6687 (See 
[https://builds.apache.org/job/HBase-TRUNK/6687/])
HBASE-14155 StackOverflowError in reverse scan (Ramkrishna S. Vasudevan) 
(apurtell: rev 5f1129c799e9c273dfd58a7fc87d5e654061607b)
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestSeekBeforeWithReverseScan.java


> StackOverflowError in reverse scan
> --
>
> Key: HBASE-14155
> URL: https://issues.apache.org/jira/browse/HBASE-14155
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Scanners
>Affects Versions: 1.1.0
>Reporter: James Taylor
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
>  Labels: Phoenix
> Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>
> Attachments: 14155-branch-1.txt, HBASE-14155.patch, 
> ReproReverseScanStackOverflow.java, 
> ReproReverseScanStackOverflowCoprocessor.java
>
>
> A stack overflow may occur when a reverse scan is done. To reproduce (on a 
> Mac), use the following steps:
> - Download the Phoenix 4.5.0 RC here: 
> https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.5.0-HBase-1.1-rc0/bin/
> - Copy the phoenix-4.5.0-HBase-1.1-server.jar into the HBase lib directory 
> (removing any earlier Phoenix version if there was one installed)
> - Stop and restart HBase
> - From the bin directory of the Phoenix binary distribution, start sqlline 
> like this: ./sqlline.py localhost
> - Create a new table and populate it like this:
> {code}
> create table desctest (k varchar primary key desc);
> upsert into desctest values ('a');
> upsert into desctest values ('ab');
> upsert into desctest values ('b');
> {code}
> - Note that the following query works fine at this point:
> {code}
> select * from desctest order by k;
> +--+
> |K |
> +--+
> | a|
> | ab   |
> | b|
> +--+
> {code}
> - Stop and start HBase
> - Rerun the above query and you'll get a StackOverflowError at 
> StoreFileScanner.seekToPreviousRow()
> {code}
> select * from desctest order by k;
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> DESCTEST,,1437847235264.a74d70e6a8b36e24d1ea1a70edb0cdf7.: null
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.nextRaw(BaseScannerRegionObserver.java:352)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:77)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2393)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2112)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.StackOverflowError
>   at 
> org.apache.hadoop.hbase.io.hfile.ChecksumUtil.numChunks(ChecksumUtil.java:201)
>   at 
> org.apache.hadoop.hbase.io.hfile.ChecksumUtil.numBytes(ChecksumUtil.java:189)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.totalChecksumBytes(HFileBlock.java:1826)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.getBufferReadOnly(HFileBlock.java:356)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getEncodedBuffer(HFileReaderV2.java:1211)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getFirstKeyInBlock(HFileReaderV2.java:1307)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:657)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:646)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:425)
>   at 
> org.apache.hadoop
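The repeated seekToPreviousRow frames in the trace above show the failure mode: the seek re-invokes itself once per candidate row, so a long run of rows consumes one stack frame each until the stack overflows. The sketch below is illustrative only, not the actual HBASE-14155 fix (which adjusted key handling in BufferedDataBlockEncoder); RowSeekSketch and its accept() filter are hypothetical stand-ins that contrast the recursive shape with a constant-stack iterative rewrite.

```java
import java.util.TreeSet;

public class RowSeekSketch {
    private final TreeSet<String> rowKeys = new TreeSet<>();

    RowSeekSketch(String... rows) {
        for (String r : rows) rowKeys.add(r);
    }

    // Recursive shape resembling the seekToPreviousRow frames above:
    // each rejected row recurses, costing one stack frame per skipped row.
    String seekRecursive(String key) {
        String prev = rowKeys.lower(key);
        if (prev == null) return null;
        if (accept(prev)) return prev;
        return seekRecursive(prev);          // one frame per skipped row
    }

    // Iterative rewrite: same result, constant stack depth.
    String seekIterative(String key) {
        String cur = key;
        while (true) {
            String prev = rowKeys.lower(cur);
            if (prev == null) return null;
            if (accept(prev)) return prev;
            cur = prev;                      // loop instead of recursing
        }
    }

    // Hypothetical visibility filter standing in for "row matches the scan".
    private boolean accept(String row) {
        return !row.endsWith("x");
    }

    public static void main(String[] args) {
        RowSeekSketch s = new RowSeekSketch("a", "ab", "ax", "b", "bx");
        System.out.println(s.seekIterative("b"));   // skips "ax", prints ab
    }
}
```

With many consecutive rejected rows the recursive form fails exactly as reported, while the loop form is unaffected by row count.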

[jira] [Commented] (HBASE-14155) StackOverflowError in reverse scan

2015-07-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14647142#comment-14647142
 ] 

Hudson commented on HBASE-14155:


FAILURE: Integrated in HBase-1.2 #86 (See 
[https://builds.apache.org/job/HBase-1.2/86/])
HBASE-14155 StackOverflowError in reverse scan (Ramkrishna S. Vasudevan and Ted 
Yu) (apurtell: rev 4f4bb55a4a83cdf25818edd8780e16ac876bd5a9)
* hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestSeekBeforeWithReverseScan.java



[jira] [Commented] (HBASE-14155) StackOverflowError in reverse scan

2015-07-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14647135#comment-14647135
 ] 

Hudson commented on HBASE-14155:


FAILURE: Integrated in HBase-1.0 #998 (See 
[https://builds.apache.org/job/HBase-1.0/998/])
HBASE-14155 StackOverflowError in reverse scan (Ramkrishna S. Vasudevan and Ted 
Yu) (apurtell: rev d3d15ca595df602536692b74600830560e800a65)
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestSeekBeforeWithReverseScan.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java



[jira] [Updated] (HBASE-14168) Avoid useless retry as exception implies in TableRecordReaderImpl

2015-07-29 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14168:
---
 Hadoop Flags: Reviewed
Fix Version/s: 1.3.0
   1.1.2
   1.2.0
   2.0.0

> Avoid useless retry as exception implies in TableRecordReaderImpl
> -
>
> Key: HBASE-14168
> URL: https://issues.apache.org/jira/browse/HBASE-14168
> Project: HBase
>  Issue Type: Bug
>  Components: mapreduce
>Reporter: zhouyingchao
>Assignee: zhouyingchao
>Priority: Minor
> Fix For: 2.0.0, 1.2.0, 1.1.2, 1.3.0
>
> Attachments: HBASE-14168-001.patch
>
>
> In TableRecordReaderImpl, even if the next() of scan throws 
> DoNotRetryIOException, it would still be retried. This does not make sense 
> and should be avoided.
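The point of the issue is that a retry loop must fail fast on exception types that promise retrying cannot succeed. The sketch below is a minimal illustration of that guard, not the actual TableRecordReaderImpl code: RecordSource, the nested exception class, and the single-restart policy are assumptions standing in for the real scanner plumbing.

```java
import java.io.IOException;

public class RetryGuardSketch {
    /** Marker paralleling org.apache.hadoop.hbase.DoNotRetryIOException. */
    static class DoNotRetryIOException extends IOException {
        DoNotRetryIOException(String msg) { super(msg); }
    }

    /** Hypothetical stand-in for the scanner the record reader drives. */
    interface RecordSource {
        String next() throws IOException;
    }

    /** Returns the next record, restarting once on a retriable failure. */
    static String nextWithRetry(RecordSource source) throws IOException {
        try {
            return source.next();
        } catch (DoNotRetryIOException e) {
            throw e;                 // retrying cannot succeed; fail fast
        } catch (IOException e) {
            return source.next();    // transient failure: one restart attempt
        }
    }

    public static void main(String[] args) throws IOException {
        // Source that fails once with a retriable error, then succeeds.
        RecordSource flaky = new RecordSource() {
            private boolean failed = false;
            public String next() throws IOException {
                if (!failed) { failed = true; throw new IOException("transient"); }
                return "row1";
            }
        };
        System.out.println(nextWithRetry(flaky));   // retried once, prints row1
    }
}
```

Without the first catch clause, a DoNotRetryIOException would fall into the generic IOException handler and trigger a pointless scanner restart, which is the behavior the patch removes.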



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14168) Avoid useless retry as exception implies in TableRecordReaderImpl

2015-07-29 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14647125#comment-14647125
 ] 

Ted Yu commented on HBASE-14168:


Makes sense.



[jira] [Commented] (HBASE-14086) remove unused bundled dependencies

2015-07-29 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14647120#comment-14647120
 ] 

Sean Busbey commented on HBASE-14086:
-

yes. I suspect we don't need to remove anything in the case of 0.94, but we 
need to build the site to verify if freebsd_docbook.css is used.

> remove unused bundled dependencies
> --
>
> Key: HBASE-14086
> URL: https://issues.apache.org/jira/browse/HBASE-14086
> Project: HBase
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>
> Attachments: HBASE-14086.1.patch
>
>
> We have some files with compatible non-ASL licenses that don't appear to be 
> used, so remove them.





[jira] [Commented] (HBASE-14155) StackOverflowError in reverse scan

2015-07-29 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14647119#comment-14647119
 ] 

ramkrishna.s.vasudevan commented on HBASE-14155:


Thanks for the reviews, commits and for the branch-1 patch with the updates in 
the HBaseTestingUtility.


[jira] [Updated] (HBASE-14168) Avoid useless retry as exception implies in TableRecordReaderImpl

2015-07-29 Thread zhouyingchao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouyingchao updated HBASE-14168:
-
Attachment: HBASE-14168-001.patch



[jira] [Updated] (HBASE-14168) Avoid useless retry as exception implies in TableRecordReaderImpl

2015-07-29 Thread zhouyingchao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouyingchao updated HBASE-14168:
-
Assignee: zhouyingchao



[jira] [Updated] (HBASE-14168) Avoid useless retry as exception implies in TableRecordReaderImpl

2015-07-29 Thread zhouyingchao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhouyingchao updated HBASE-14168:
-
Status: Patch Available  (was: Open)



[jira] [Created] (HBASE-14168) Avoid useless retry as exception implies in TableRecordReaderImpl

2015-07-29 Thread zhouyingchao (JIRA)
zhouyingchao created HBASE-14168:


 Summary: Avoid useless retry as exception implies in 
TableRecordReaderImpl
 Key: HBASE-14168
 URL: https://issues.apache.org/jira/browse/HBASE-14168
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Reporter: zhouyingchao
Priority: Minor


In TableRecordReaderImpl, even if the next() of scan throws 
DoNotRetryIOException, it would still be retried. This does not make sense and 
should be avoided.





[jira] [Commented] (HBASE-14153) Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY

2015-07-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14647118#comment-14647118
 ] 

Hudson commented on HBASE-14153:


FAILURE: Integrated in HBase-1.3 #79 (See 
[https://builds.apache.org/job/HBase-1.3/79/])
HBASE-14153 Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY (Konstantin 
Shvachko) (jerryjch: rev 3d40dd8394cf2da501bf90e4409722ec3ed6c544)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedureManager.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/MasterProcedureManagerHost.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/ProcedureManagerHost.java


> Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY
> --
>
> Key: HBASE-14153
> URL: https://issues.apache.org/jira/browse/HBASE-14153
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Trivial
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14153.patch
>
>
> The constant should read {{PROCE _*DU*_ RE}}.





[jira] [Assigned] (HBASE-14147) REST Support for Namespaces

2015-07-29 Thread Matt Warhaftig (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Warhaftig reassigned HBASE-14147:
--

Assignee: Matt Warhaftig

> REST Support for Namespaces
> ---
>
> Key: HBASE-14147
> URL: https://issues.apache.org/jira/browse/HBASE-14147
> Project: HBase
>  Issue Type: Sub-task
>  Components: REST
>Affects Versions: 1.1.1
>Reporter: Rick Kellogg
>Assignee: Matt Warhaftig
>Priority: Minor
>
> Expand REST services to include additional features:
> * Create namespace
> * Alter namespace
> * Describe namespace
> * Drop namespace
> * List tables in a specific namespace
> * List all namespaces.





[jira] [Commented] (HBASE-14147) REST Support for Namespaces

2015-07-29 Thread Matt Warhaftig (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14647109#comment-14647109
 ] 

Matt Warhaftig commented on HBASE-14147:


I am going to grab this ticket unless there are objections. Looks relatively 
straightforward - ETA of 7/9.



[jira] [Commented] (HBASE-14086) remove unused bundled dependencies

2015-07-29 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14647064#comment-14647064
 ] 

Andrew Purtell commented on HBASE-14086:


So we're only not finished with 0.94 here? 



[jira] [Updated] (HBASE-13825) Get operations on large objects fail with protocol errors

2015-07-29 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-13825:
---
 Assignee: Andrew Purtell
Fix Version/s: 1.3.0
   1.1.2
   1.2.0
   1.0.2
   0.98.14
   2.0.0

> Get operations on large objects fail with protocol errors
> -
>
> Key: HBASE-13825
> URL: https://issues.apache.org/jira/browse/HBASE-13825
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.0, 1.0.1
>Reporter: Dev Lakhani
>Assignee: Andrew Purtell
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>
>
> When performing a get operation on a column family with more than 64MB of 
> data, the operation fails with:
> Caused by: Portable(java.io.IOException): Call to host:port failed on local 
> exception: com.google.protobuf.InvalidProtocolBufferException: Protocol 
> message was too large.  May be malicious.  Use 
> CodedInputStream.setSizeLimit() to increase the size limit.
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.wrapException(RpcClient.java:1481)
> at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1453)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1653)
> at 
> org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1711)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:27308)
> at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.get(ProtobufUtil.java:1381)
> at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:753)
> at org.apache.hadoop.hbase.client.HTable$3.call(HTable.java:751)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:120)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:756)
> at org.apache.hadoop.hbase.client.HTable.get(HTable.java:765)
> at 
> org.apache.hadoop.hbase.client.HTablePool$PooledHTable.get(HTablePool.java:395)
> This may be related to https://issues.apache.org/jira/browse/HBASE-11747 but 
> that issue is related to cluster status. 
> Scan and put operations on the same data work fine.
> Tested on a 1.0.0 cluster with both 1.0.1 and 1.0.0 clients.
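The InvalidProtocolBufferException above is protobuf's built-in message size cap (64 MB by default), which the exception message suggests raising via CodedInputStream.setSizeLimit(). The sketch below illustrates that guard in plain Java rather than protobuf's own code; SizeLimitedReader and its limit handling are illustrative, not protobuf API.

```java
public class SizeLimitSketch {
    static final int DEFAULT_SIZE_LIMIT = 64 << 20;   // 64 MB, protobuf's default

    static class SizeLimitedReader {
        private final int sizeLimit;
        private int bytesRead = 0;

        SizeLimitedReader(int sizeLimit) { this.sizeLimit = sizeLimit; }

        /** Account for a chunk; fail like protobuf once the limit is crossed. */
        void read(int numBytes) {
            bytesRead += numBytes;
            if (bytesRead > sizeLimit) {
                throw new IllegalStateException(
                    "Protocol message was too large.  May be malicious.");
            }
        }
    }

    public static void main(String[] args) {
        // A 100 MB column family value read under the default limit fails...
        SizeLimitedReader strict = new SizeLimitedReader(DEFAULT_SIZE_LIMIT);
        try {
            strict.read(100 << 20);
        } catch (IllegalStateException e) {
            System.out.println("rejected");       // prints rejected
        }
        // ...while a reader with a raised limit (the setSizeLimit() remedy
        // the exception message points at) accepts the same payload.
        SizeLimitedReader relaxed = new SizeLimitedReader(256 << 20);
        relaxed.read(100 << 20);
        System.out.println("accepted");           // prints accepted
    }
}
```

This explains why gets above 64 MB fail while smaller scans and puts succeed: only responses whose accumulated size crosses the decoder's limit trip the check.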





[jira] [Updated] (HBASE-14155) StackOverflowError in reverse scan

2015-07-29 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-14155:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.3.0
   1.1.2
   1.2.0
   1.0.2
   2.0.0
   Status: Resolved  (was: Patch Available)

Pushed to branch-1.0 and up. Fix looks right but I double checked on each 
branch. Thanks for the patch [~ram_krish] and [~tedyu] for the branch-1 port.

> StackOverflowError in reverse scan
> --
>
> Key: HBASE-14155
> URL: https://issues.apache.org/jira/browse/HBASE-14155
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Scanners
>Affects Versions: 1.1.0
>Reporter: James Taylor
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
>  Labels: Phoenix
> Fix For: 2.0.0, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>
> Attachments: 14155-branch-1.txt, HBASE-14155.patch, 
> ReproReverseScanStackOverflow.java, 
> ReproReverseScanStackOverflowCoprocessor.java
>
>
> A stack overflow may occur when a reverse scan is done. To reproduce (on a 
> Mac), use the following steps:
> - Download the Phoenix 4.5.0 RC here: 
> https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.5.0-HBase-1.1-rc0/bin/
> - Copy the phoenix-4.5.0-HBase-1.1-server.jar into the HBase lib directory 
> (removing any earlier Phoenix version if there was one installed)
> - Stop and restart HBase
> - From the bin directory of the Phoenix binary distribution, start sqlline 
> like this: ./sqlline.py localhost
> - Create a new table and populate it like this:
> {code}
> create table desctest (k varchar primary key desc);
> upsert into desctest values ('a');
> upsert into desctest values ('ab');
> upsert into desctest values ('b');
> {code}
> - Note that the following query works fine at this point:
> {code}
> select * from desctest order by k;
> +--+
> |K |
> +--+
> | a|
> | ab   |
> | b|
> +--+
> {code}
> - Stop and start HBase
> - Rerun the above query again and you'll get  a StackOverflowError at 
> StoreFileScanner.seekToPreviousRow()
> {code}
> select * from desctest order by k;
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> DESCTEST,,1437847235264.a74d70e6a8b36e24d1ea1a70edb0cdf7.: null
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.nextRaw(BaseScannerRegionObserver.java:352)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:77)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2393)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2112)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.StackOverflowError
>   at 
> org.apache.hadoop.hbase.io.hfile.ChecksumUtil.numChunks(ChecksumUtil.java:201)
>   at 
> org.apache.hadoop.hbase.io.hfile.ChecksumUtil.numBytes(ChecksumUtil.java:189)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.totalChecksumBytes(HFileBlock.java:1826)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.getBufferReadOnly(HFileBlock.java:356)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getEncodedBuffer(HFileReaderV2.java:1211)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getFirstKeyInBlock(HFileReaderV2.java:1307)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:657)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:646)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:425)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:449)
>  
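The trace above shows {{StoreFileScanner.seekToPreviousRow}} re-entering itself (frames at lines 425 and 449), which is what eventually blows the stack. As an illustrative sketch only (a plain sorted array stands in for HBase's scanners, and this is not the actual fix committed for this issue), a self-recursive backward seek can be rewritten as a loop so stack depth stays constant:

```java
public class SeekToPreviousSketch {
  // Illustrative only: returns the largest key strictly less than 'key',
  // or null if none, over a sorted array. Written as a loop rather than
  // recursion, so each backward step costs no additional stack frame.
  static String seekToPreviousIterative(String[] sortedKeys, String key) {
    String candidate = null;
    for (String k : sortedKeys) {
      if (k.compareTo(key) < 0) {
        candidate = k;  // still strictly before 'key'; keep advancing
      } else {
        break;          // keys are sorted, so nothing later can qualify
      }
    }
    return candidate;
  }

  public static void main(String[] args) {
    String[] keys = {"a", "ab", "b"};
    System.out.println(seekToPreviousIterative(keys, "b"));  // ab
    System.out.println(seekToPreviousIterative(keys, "a"));  // null
  }
}
```

With a recursive form, each retry of the seek costs a stack frame; the loop form trades that for O(1) stack usage regardless of how many keys must be skipped.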

[jira] [Commented] (HBASE-14164) Display primary region replicas distribution on table.jsp

2015-07-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14647040#comment-14647040
 ] 

Hudson commented on HBASE-14164:


FAILURE: Integrated in HBase-TRUNK #6686 (See 
[https://builds.apache.org/job/HBase-TRUNK/6686/])
HBASE-14164 Display primary region replicas distribution on table.jsp (tedyu: 
rev 9f4aeca7c84d3e0c0b2067275e04c6c29ace948b)
* hbase-server/src/main/resources/hbase-webapps/master/table.jsp


> Display primary region replicas distribution on table.jsp
> -
>
> Key: HBASE-14164
> URL: https://issues.apache.org/jira/browse/HBASE-14164
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 14164-v1.txt, 14164-v2.txt, 14164-v2.txt, 14164-v3.txt, 
> table-with-primary.png
>
>
> While working on HBASE-14110, I enhanced table.jsp with a display of primary 
> region replicas across region servers.
> This gives users a clear idea of the distribution of primary replicas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14153) Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY

2015-07-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14647039#comment-14647039
 ] 

Hudson commented on HBASE-14153:


FAILURE: Integrated in HBase-TRUNK #6686 (See 
[https://builds.apache.org/job/HBase-TRUNK/6686/])
HBASE-14153 Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY (Konstantin 
Shvachko) (jerryjch: rev e5bf0287e82bbb293cb835a67cad9d15333584b0)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/MasterProcedureManagerHost.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/ProcedureManagerHost.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedureManager.java


> Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY
> --
>
> Key: HBASE-14153
> URL: https://issues.apache.org/jira/browse/HBASE-14153
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Trivial
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14153.patch
>
>
> The constant should read {{PROCE _*DU*_ RE}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14167) hbase-spark integration tests do not respect -DskipITs

2015-07-29 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-14167:
--

 Summary: hbase-spark integration tests do not respect -DskipITs
 Key: HBASE-14167
 URL: https://issues.apache.org/jira/browse/HBASE-14167
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0
Reporter: Andrew Purtell
Priority: Minor


When running a build with {{mvn ... -DskipITs}}, the hbase-spark module's 
integration tests do not respect the flag and run anyway. Fix. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14164) Display primary region replicas distribution on table.jsp

2015-07-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14647021#comment-14647021
 ] 

Hudson commented on HBASE-14164:


SUCCESS: Integrated in HBase-1.3-IT #61 (See 
[https://builds.apache.org/job/HBase-1.3-IT/61/])
HBASE-14164 Display primary region replicas distribution on table.jsp (tedyu: 
rev 93f53e263f1de5b552d84326bf58712f82f57d33)
* hbase-server/src/main/resources/hbase-webapps/master/table.jsp


> Display primary region replicas distribution on table.jsp
> -
>
> Key: HBASE-14164
> URL: https://issues.apache.org/jira/browse/HBASE-14164
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 14164-v1.txt, 14164-v2.txt, 14164-v2.txt, 14164-v3.txt, 
> table-with-primary.png
>
>
> While working on HBASE-14110, I enhanced table.jsp with a display of primary 
> region replicas across region servers.
> This gives users a clear idea of the distribution of primary replicas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14153) Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY

2015-07-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14647020#comment-14647020
 ] 

Hudson commented on HBASE-14153:


SUCCESS: Integrated in HBase-1.3-IT #61 (See 
[https://builds.apache.org/job/HBase-1.3-IT/61/])
HBASE-14153 Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY (Konstantin 
Shvachko) (jerryjch: rev 3d40dd8394cf2da501bf90e4409722ec3ed6c544)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedureManager.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/MasterProcedureManagerHost.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/ProcedureManagerHost.java


> Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY
> --
>
> Key: HBASE-14153
> URL: https://issues.apache.org/jira/browse/HBASE-14153
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Trivial
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14153.patch
>
>
> The constant should read {{PROCE _*DU*_ RE}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-14154) DFS Replication should be configurable at column family level

2015-07-29 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14647015#comment-14647015
 ] 

Andrew Purtell edited comment on HBASE-14154 at 7/30/15 1:12 AM:
-

Thanks [~ashish singhi]. 

When getting ready to commit this, I found one nit I missed on the first skim.

In HColumnDescriptor.java, {{getDFSReplication}} should return a short integer, 
don't you think? Currently it returns a string, which is not symmetric with 
{{setDFSReplication(short)}}. Instead of returning null if not set, return 0? 
Since replication=0 makes no sense, it's a reasonable indication of 'not set'. 
{code}
+  /** Return the replication factor for the family, or null if not set */
+  public String getDFSReplication() {
+return getValue(DFS_REPLICATION);
+  }
{code}

If you agree, there's no need to make another set of patches; I can make that 
modification at commit time and fix up the callers. 


was (Author: apurtell):
Thanks [~ashish singhi]. 

When getting ready to commit this, I found one nit I missed on the first skim.

In HColumnDescriptor.java, this should return a short integer, don't you think? 
Currently it returns a string, which is not symmetric with 
{{setDFSReplication(short)}}. Instead of returning null if not set, return 0? 
Since replication=0 makes no sense, it's a reasonable indication of 'not set'. 
{code}
+  /** Return the replication factor for the family, or null if not set */
+  public String getDFSReplication() {
+return getValue(DFS_REPLICATION);
+  }
{code}

If you agree, there's no need to make another set of patches; I can make that 
modification at commit time and fix up the callers. 

> DFS Replication should be configurable at column family level
> -
>
> Key: HBASE-14154
> URL: https://issues.apache.org/jira/browse/HBASE-14154
> Project: HBase
>  Issue Type: New Feature
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Minor
> Fix For: 2.0.0, 0.98.14, 1.3.0
>
> Attachments: HBASE-14154-0.98.patch, HBASE-14154-branch-1.patch, 
> HBASE-14154.patch
>
>
> There are cases where a user wants control over the number of hfile 
> copies he/she can have in the cluster.
> For example, for a test table a user would like to have only one copy instead 
> of three (the default).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14154) DFS Replication should be configurable at column family level

2015-07-29 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14647015#comment-14647015
 ] 

Andrew Purtell commented on HBASE-14154:


Thanks [~ashish singhi]. 

When getting ready to commit this, I found one nit I missed on the first skim.

In HColumnDescriptor.java, this should return a short integer, don't you think? 
Currently it returns a string, which is not symmetric with 
{{setDFSReplication(short)}}. Instead of returning null if not set, return 0? 
Since replication=0 makes no sense, it's a reasonable indication of 'not set'. 
{code}
+  /** Return the replication factor for the family, or null if not set */
+  public String getDFSReplication() {
+return getValue(DFS_REPLICATION);
+  }
{code}

If you agree, there's no need to make another set of patches; I can make that 
modification at commit time and fix up the callers. 
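The getter change discussed here can be sketched as follows. This is a hypothetical, self-contained mock rather than the actual HColumnDescriptor code: the class name and the map-backed value store are stand-ins that only illustrate the proposed symmetry, storing the value as a string internally but exposing a short, with 0 meaning 'not set'.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a column-family descriptor that keeps values as strings
// (as HColumnDescriptor does) while exposing a typed DFS replication
// accessor. Names here are illustrative, not the HBase implementation.
public class DfsReplicationSketch {
  static final String DFS_REPLICATION = "DFS_REPLICATION";
  static final short DEFAULT_DFS_REPLICATION = 0;  // 0 == 'not set', use HDFS default

  private final Map<String, String> values = new HashMap<>();

  public DfsReplicationSketch setDFSReplication(short replication) {
    if (replication < DEFAULT_DFS_REPLICATION) {
      throw new IllegalArgumentException("DFS replication factor cannot be less than 0");
    }
    values.put(DFS_REPLICATION, Short.toString(replication));
    return this;
  }

  /** Return the replication factor for the family, or 0 if not set. */
  public short getDFSReplication() {
    String value = values.get(DFS_REPLICATION);
    return value == null ? DEFAULT_DFS_REPLICATION : Short.parseShort(value);
  }

  public static void main(String[] args) {
    DfsReplicationSketch cf = new DfsReplicationSketch();
    System.out.println(cf.getDFSReplication());  // 0 (not set)
    cf.setDFSReplication((short) 1);
    System.out.println(cf.getDFSReplication());  // 1
  }
}
```

Callers can then compare against 0 instead of null-checking, which keeps the getter symmetric with {{setDFSReplication(short)}}.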

> DFS Replication should be configurable at column family level
> -
>
> Key: HBASE-14154
> URL: https://issues.apache.org/jira/browse/HBASE-14154
> Project: HBase
>  Issue Type: New Feature
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Minor
> Fix For: 2.0.0, 0.98.14, 1.3.0
>
> Attachments: HBASE-14154-0.98.patch, HBASE-14154-branch-1.patch, 
> HBASE-14154.patch
>
>
> There are cases where a user wants control over the number of hfile 
> copies he/she can have in the cluster.
> For example, for a test table a user would like to have only one copy instead 
> of three (the default).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14153) Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY

2015-07-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14647011#comment-14647011
 ] 

Hudson commented on HBASE-14153:


SUCCESS: Integrated in HBase-1.2-IT #68 (See 
[https://builds.apache.org/job/HBase-1.2-IT/68/])
HBASE-14153 Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY (Konstantin 
Shvachko) (jerryjch: rev 707fba5e0c5eb250210bb6963de36112c09ea3cd)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedureManager.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/ProcedureManagerHost.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/MasterProcedureManagerHost.java


> Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY
> --
>
> Key: HBASE-14153
> URL: https://issues.apache.org/jira/browse/HBASE-14153
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Trivial
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14153.patch
>
>
> The constant should read {{PROCE _*DU*_ RE}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14153) Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY

2015-07-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14647000#comment-14647000
 ] 

Hudson commented on HBASE-14153:


FAILURE: Integrated in HBase-1.2 #85 (See 
[https://builds.apache.org/job/HBase-1.2/85/])
HBASE-14153 Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY (Konstantin 
Shvachko) (jerryjch: rev 707fba5e0c5eb250210bb6963de36112c09ea3cd)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/ProcedureManagerHost.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/procedure/TestProcedureManager.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/procedure/MasterProcedureManagerHost.java


> Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY
> --
>
> Key: HBASE-14153
> URL: https://issues.apache.org/jira/browse/HBASE-14153
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Trivial
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14153.patch
>
>
> The constant should read {{PROCE _*DU*_ RE}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14164) Display primary region replicas distribution on table.jsp

2015-07-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646999#comment-14646999
 ] 

Hudson commented on HBASE-14164:


FAILURE: Integrated in HBase-1.3 #78 (See 
[https://builds.apache.org/job/HBase-1.3/78/])
HBASE-14164 Display primary region replicas distribution on table.jsp (tedyu: 
rev 93f53e263f1de5b552d84326bf58712f82f57d33)
* hbase-server/src/main/resources/hbase-webapps/master/table.jsp


> Display primary region replicas distribution on table.jsp
> -
>
> Key: HBASE-14164
> URL: https://issues.apache.org/jira/browse/HBASE-14164
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 14164-v1.txt, 14164-v2.txt, 14164-v2.txt, 14164-v3.txt, 
> table-with-primary.png
>
>
> While working on HBASE-14110, I enhanced table.jsp with a display of primary 
> region replicas across region servers.
> This gives users a clear idea of the distribution of primary replicas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14087) ensure correct ASF policy compliant headers on source/docs

2015-07-29 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646900#comment-14646900
 ] 

Andrew Purtell commented on HBASE-14087:


I'll get to them all, boss

> ensure correct ASF policy compliant headers on source/docs
> --
>
> Key: HBASE-14087
> URL: https://issues.apache.org/jira/browse/HBASE-14087
> Project: HBase
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
> Attachments: HBASE-14087.1.patch, HBASE-14087.2.patch, 
> HBASE-14087.2.patch
>
>
> * we have a couple of files that are missing their headers.
> * we have one file using old-style ASF copyrights



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14087) ensure correct ASF policy compliant headers on source/docs

2015-07-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646888#comment-14646888
 ] 

Hudson commented on HBASE-14087:


FAILURE: Integrated in HBase-TRUNK #6685 (See 
[https://builds.apache.org/job/HBase-TRUNK/6685/])
HBASE-14087 Ensure correct ASF headers for docs/code (busbey: rev 
4ce6f486d063553a78ed1d60670e68564d61a483)
* dev-support/test-patch.sh
* hbase-native-client/cmake_modules/FindGTest.cmake
* src/main/site/asciidoc/sponsors.adoc
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseRegionObserver.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/BaseRegionServerObserver.java
* dev-support/jdiffHBasePublicAPI.sh
* conf/log4j.properties
* bin/hbase-config.sh
* hbase-server/src/main/java/org/apache/hadoop/hbase/HealthCheckChore.java
* conf/hbase-env.sh
* hbase-client/src/test/java/org/apache/hadoop/hbase/zookeeper/TestZKUtil.java
* src/main/site/asciidoc/index.adoc
* bin/stop-hbase.sh
* src/main/site/xdoc/old_news.xml
* bin/master-backup.sh
* hbase-rest/src/test/java/org/apache/hadoop/hbase/rest/model/TestModelBase.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/util/AbstractByteRange.java
* src/main/site/xdoc/metrics.xml
* hbase-shell/src/main/ruby/shell/commands/enable_table_replication.rb
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestFuzzyRowAndColumnRangeFilter.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/JarFinder.java
* hbase-examples/src/main/php/DemoClient.php
* src/main/site/asciidoc/bulk-loads.adoc
* pom.xml
* src/main/site/asciidoc/acid-semantics.adoc
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/util/SimpleMutableByteRange.java
* bin/considerAsDead.sh
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/RetriesExhaustedException.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/DroppedSnapshotException.java
* bin/hbase
* hbase-examples/src/main/cpp/Makefile
* dev-support/hbase_jdiff_acrossSingularityTemplate.xml
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/RegionObserver.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestNullComparator.java
* bin/start-hbase.sh
* dev-support/publish_hbase_website.sh
* hbase-native-client/src/sync/CMakeLists.txt
* dev-support/test-util.sh
* src/main/site/asciidoc/resources.adoc
* bin/zookeepers.sh
* hbase-client/src/main/java/org/apache/hadoop/hbase/TableExistsException.java
* dev-support/jenkinsEnv.sh
* hbase-examples/src/main/perl/DemoClient.pl
* hbase-shell/src/main/ruby/shell/commands/disable_table_replication.rb
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationLoad.java
* dev-support/rebase_all_git_branches.sh
* bin/rolling-restart.sh
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/util/SimplePositionedMutableByteRange.java
* hbase-native-client/src/core/CMakeLists.txt
* src/main/site/asciidoc/metrics.adoc
* bin/regionservers.sh
* 
hbase-thrift/src/main/java/org/apache/hadoop/hbase/thrift/HttpAuthenticationException.java
* hbase-native-client/README.md
* 
hbase-server/src/test/resources/org/apache/hadoop/hbase/PerformanceEvaluation_Counter.properties
* src/main/site/asciidoc/cygwin.adoc
* src/main/site/xdoc/sponsors.xml
* src/main/site/xdoc/pseudo-distributed.xml
* src/main/site/xdoc/cygwin.xml
* bin/local-master-backup.sh
* src/main/site/xdoc/export_control.xml
* bin/hbase-daemon.sh
* src/main/site/xdoc/bulk-loads.xml
* src/main/site/xdoc/acid-semantics.xml
* hbase-native-client/src/rpc/CMakeLists.txt
* dev-support/jdiffHBasePublicAPI_common.sh
* src/main/site/asciidoc/old_news.adoc
* dev-support/hbase_docker/README.md
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestBitComparator.java
* bin/graceful_stop.sh
* hbase-native-client/CMakeLists.txt
* hbase-native-client/cmake_modules/FindLibEv.cmake
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/coprocessor/EndpointObserver.java
* bin/local-regionservers.sh
* hbase-native-client/src/async/CMakeLists.txt
* src/main/site/xdoc/replication.xml
* hbase-client/src/main/java/org/apache/hadoop/hbase/Coprocessor.java
* hbase-common/src/main/java/org/apache/hadoop/hbase/io/LimitInputStream.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/HealthChecker.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/util/ProtoUtil.java
* hbase-examples/src/main/cpp/DemoClient.cpp
* conf/hadoop-metrics2-hbase.properties
* hbase-client/src/main/java/org/apache/hadoop/hbase/CoprocessorEnvironment.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestPrefetch.java
* src/main/site/asciidoc/export_control.adoc
* src/main/site/xdoc/resources.xml
* src/main/site/xdoc/index.xml
* 
hbase-client/src/main/resources/META-INF/services/org.apache.hadoop.security.token.TokenIdentifier
* src/main/site/asciidoc/replication.adoc
* bin/hbase-daemons.sh
* src/main/site/asc

[jira] [Commented] (HBASE-14153) Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY

2015-07-29 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646882#comment-14646882
 ] 

Jerry He commented on HBASE-14153:
--

+1

Committed to master, branch-1 and branch-1.2.

> Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY
> --
>
> Key: HBASE-14153
> URL: https://issues.apache.org/jira/browse/HBASE-14153
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Trivial
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14153.patch
>
>
> The constant should read {{PROCE _*DU*_ RE}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14153) Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY

2015-07-29 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-14153:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.3.0
   1.2.0
   2.0.0
   Status: Resolved  (was: Patch Available)

> Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY
> --
>
> Key: HBASE-14153
> URL: https://issues.apache.org/jira/browse/HBASE-14153
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Trivial
> Fix For: 2.0.0, 1.2.0, 1.3.0
>
> Attachments: HBASE-14153.patch
>
>
> The constant should read {{PROCE _*DU*_ RE}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14155) StackOverflowError in reverse scan

2015-07-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646878#comment-14646878
 ] 

Hadoop QA commented on HBASE-14155:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12747847/14155-branch-1.txt
  against branch-1 branch at commit 4ce6f486d063553a78ed1d60670e68564d61a483.
  ATTACHMENT ID: 12747847

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors.

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14928//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14928//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14928//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14928//console

This message is automatically generated.

> StackOverflowError in reverse scan
> --
>
> Key: HBASE-14155
> URL: https://issues.apache.org/jira/browse/HBASE-14155
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Scanners
>Affects Versions: 1.1.0
>Reporter: James Taylor
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
>  Labels: Phoenix
> Attachments: 14155-branch-1.txt, HBASE-14155.patch, 
> ReproReverseScanStackOverflow.java, 
> ReproReverseScanStackOverflowCoprocessor.java
>
>
> A stack overflow may occur when a reverse scan is done. To reproduce (on a 
> Mac), use the following steps:
> - Download the Phoenix 4.5.0 RC here: 
> https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.5.0-HBase-1.1-rc0/bin/
> - Copy the phoenix-4.5.0-HBase-1.1-server.jar into the HBase lib directory 
> (removing any earlier Phoenix version if there was one installed)
> - Stop and restart HBase
> - From the bin directory of the Phoenix binary distribution, start sqlline 
> like this: ./sqlline.py localhost
> - Create a new table and populate it like this:
> {code}
> create table desctest (k varchar primary key desc);
> upsert into desctest values ('a');
> upsert into desctest values ('ab');
> upsert into desctest values ('b');
> {code}
> - Note that the following query works fine at this point:
> {code}
> select * from desctest order by k;
> +------+
> |  K   |
> +------+
> | a    |
> | ab   |
> | b    |
> +------+
> {code}
> - Stop and start HBase
> - Rerun the above query and you'll get a StackOverflowError at 
> StoreFileScanner.seekToPreviousRow()
> {code}
> select * from desctest order by k;
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> DESCTEST,,1437847235264.a74d70e6a8b36e24d1ea1a70edb0cdf7.: null
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.nextRaw(BaseScannerRegionObserver.java:352)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:77)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSR

[jira] [Commented] (HBASE-14153) Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY

2015-07-29 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646877#comment-14646877
 ] 

Jerry He commented on HBASE-14153:
--

Thanks for the patch.

> Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY
> --
>
> Key: HBASE-14153
> URL: https://issues.apache.org/jira/browse/HBASE-14153
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Trivial
> Attachments: HBASE-14153.patch
>
>
> The constant should read {{PROCE _*DU*_ RE}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14166) Per-Region metrics can be stale

2015-07-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646855#comment-14646855
 ] 

Hadoop QA commented on HBASE-14166:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12747864/HBASE-14166.patch
  against master branch at commit 9f4aeca7c84d3e0c0b2067275e04c6c29ace948b.
  ATTACHMENT ID: 12747864

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 7 new 
or modified tests.

{color:red}-1 javac{color}.  The patch appears to cause mvn compile goal to 
fail with Hadoop version 2.4.0.

Compilation error summary: 
[ERROR] COMPILATION ERROR : 
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/DefaultMetricsSystemHelper.java:[8,34]
 cannot find symbol
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.2:compile (default-compile) on 
project hbase-hadoop2-compat: Compilation failure
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/hbase-hadoop2-compat/src/main/java/org/apache/hadoop/metrics2/lib/DefaultMetricsSystemHelper.java:[8,34]
 cannot find symbol
[ERROR] symbol:   method removeObjectName(java.lang.String)
[ERROR] location: variable INSTANCE of type 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hbase-hadoop2-compat


Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14929//console

This message is automatically generated.
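The failure above is a typical older-Hadoop compatibility problem: {{DefaultMetricsSystem.removeObjectName(String)}} does not exist in Hadoop 2.4.0, so a direct call cannot compile against that version. One common way around it (an illustrative sketch, not necessarily the fix committed here) is to look the method up reflectively and degrade gracefully when it is absent:

```java
import java.lang.reflect.Method;

// Sketch: resolve a method that only exists on newer library versions.
// When the lookup fails, callers skip the optional behavior instead of
// failing to compile or throwing at runtime.
public class OptionalMethodSketch {
  static Method lookup(Class<?> cls, String name, Class<?>... params) {
    try {
      Method m = cls.getDeclaredMethod(name, params);
      m.setAccessible(true);
      return m;
    } catch (NoSuchMethodException e) {
      return null;  // older version: feature degrades gracefully
    }
  }

  public static void main(String[] args) {
    // String.isEmpty() exists; String.removeObjectName(String) does not.
    System.out.println(lookup(String.class, "isEmpty") != null);                          // true
    System.out.println(lookup(String.class, "removeObjectName", String.class) != null);   // false
  }
}
```

The resolved {{Method}} can be cached in a static field at class-load time, so the reflective cost is paid once rather than per call.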

> Per-Region metrics can be stale
> ---
>
> Key: HBASE-14166
> URL: https://issues.apache.org/jira/browse/HBASE-14166
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.0.1
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14166.patch
>
>
> We're seeing some machines that report only old region metrics. It seems 
> that at some point the Hadoop metrics system decided which metrics to 
> display and which not to, and from then on that set never changed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14166) Per-Region metrics can be stale

2015-07-29 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14166:
--
Fix Version/s: (was: 1.2.0)
   2.0.0

> Per-Region metrics can be stale
> ---
>
> Key: HBASE-14166
> URL: https://issues.apache.org/jira/browse/HBASE-14166
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.0.1
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14166.patch
>
>
> We're seeing some machines that report only old region metrics. It seems 
> that at some point the Hadoop metrics system decided which metrics to 
> display and which not to, and from then on that set never changed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14166) Per-Region metrics can be stale

2015-07-29 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14166:
--
Fix Version/s: 1.3.0
   1.2.0
Affects Version/s: 1.1.0.1
   Status: Patch Available  (was: Open)

> Per-Region metrics can be stale
> ---
>
> Key: HBASE-14166
> URL: https://issues.apache.org/jira/browse/HBASE-14166
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.0.1
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 1.2.0, 1.3.0
>
> Attachments: HBASE-14166.patch
>
>
> We're seeing some machines that are reporting only old region metrics. It 
> seems like at some point the Hadoop metrics system decided which metrics to 
> display and which not to. From then on it was not changing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14166) Per-Region metrics can be stale

2015-07-29 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-14166:
--
Attachment: HBASE-14166.patch

> Per-Region metrics can be stale
> ---
>
> Key: HBASE-14166
> URL: https://issues.apache.org/jira/browse/HBASE-14166
> Project: HBase
>  Issue Type: Bug
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HBASE-14166.patch
>
>
> We're seeing some machines that are reporting only old region metrics. It 
> seems like at some point the Hadoop metrics system decided which metrics to 
> display and which not to. From then on it was not changing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14164) Display primary region replicas distribution on table.jsp

2015-07-29 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14164:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.3.0
   2.0.0
   Status: Resolved  (was: Patch Available)

> Display primary region replicas distribution on table.jsp
> -
>
> Key: HBASE-14164
> URL: https://issues.apache.org/jira/browse/HBASE-14164
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Fix For: 2.0.0, 1.3.0
>
> Attachments: 14164-v1.txt, 14164-v2.txt, 14164-v2.txt, 14164-v3.txt, 
> table-with-primary.png
>
>
> While working on HBASE-14110, I enhanced table.jsp with display of primary 
> region replicas across region servers.
> This gives user clear idea on the distribution of primary replicas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14153) Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY

2015-07-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646786#comment-14646786
 ] 

Hadoop QA commented on HBASE-14153:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12747468/HBASE-14153.patch
  against master branch at commit 05de2ec5801fbba4577fb363f858a6e6f282c104.
  ATTACHMENT ID: 12747468

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14927//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14927//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14927//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14927//console

This message is automatically generated.

> Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY
> --
>
> Key: HBASE-14153
> URL: https://issues.apache.org/jira/browse/HBASE-14153
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Trivial
> Attachments: HBASE-14153.patch
>
>
> The constant should read {{PROCE _*DU*_ RE}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14164) Display primary region replicas distribution on table.jsp

2015-07-29 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646780#comment-14646780
 ] 

Ted Yu commented on HBASE-14164:


TestWALProcedureStoreOnHDFS has been flaky for a while.

Change to UI is not related to snapshot test failure.

> Display primary region replicas distribution on table.jsp
> -
>
> Key: HBASE-14164
> URL: https://issues.apache.org/jira/browse/HBASE-14164
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 14164-v1.txt, 14164-v2.txt, 14164-v2.txt, 14164-v3.txt, 
> table-with-primary.png
>
>
> While working on HBASE-14110, I enhanced table.jsp with display of primary 
> region replicas across region servers.
> This gives user clear idea on the distribution of primary replicas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14164) Display primary region replicas distribution on table.jsp

2015-07-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646766#comment-14646766
 ] 

Hadoop QA commented on HBASE-14164:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12747825/14164-v3.txt
  against master branch at commit 05de2ec5801fbba4577fb363f858a6e6f282c104.
  ATTACHMENT ID: 12747825

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.master.procedure.TestWALProcedureStoreOnHDFS

 {color:red}-1 core zombie tests{color}.  There are 5 zombie test(s):   
at 
org.apache.hadoop.hbase.snapshot.TestRestoreFlushSnapshotFromClient.testCloneSnapshot(TestRestoreFlushSnapshotFromClient.java:179)
at 
org.apache.hadoop.hbase.snapshot.TestRestoreFlushSnapshotFromClient.testCloneSnapshot(TestRestoreFlushSnapshotFromClient.java:173)
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testExportFileSystemState(TestExportSnapshot.java:287)
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testExportFileSystemState(TestExportSnapshot.java:261)
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testEmptyExportFileSystemState(TestExportSnapshot.java:205)
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testExportFileSystemState(TestExportSnapshot.java:287)
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testExportFileSystemState(TestExportSnapshot.java:261)
at 
org.apache.hadoop.hbase.snapshot.TestExportSnapshot.testEmptyExportFileSystemState(TestExportSnapshot.java:205)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14926//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14926//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14926//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14926//console

This message is automatically generated.

> Display primary region replicas distribution on table.jsp
> -
>
> Key: HBASE-14164
> URL: https://issues.apache.org/jira/browse/HBASE-14164
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 14164-v1.txt, 14164-v2.txt, 14164-v2.txt, 14164-v3.txt, 
> table-with-primary.png
>
>
> While working on HBASE-14110, I enhanced table.jsp with display of primary 
> region replicas across region servers.
> This gives user clear idea on the distribution of primary replicas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14166) Per-Region metrics can be stale

2015-07-29 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-14166:
-

 Summary: Per-Region metrics can be stale
 Key: HBASE-14166
 URL: https://issues.apache.org/jira/browse/HBASE-14166
 Project: HBase
  Issue Type: Bug
Reporter: Elliott Clark
Assignee: Elliott Clark


We're seeing some machines that are reporting only old region metrics. It seems 
like at some point the Hadoop metrics system decided which metrics to display 
and which not to. From then on it was not changing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14087) ensure correct ASF policy compliant headers on source/docs

2015-07-29 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646654#comment-14646654
 ] 

Sean Busbey commented on HBASE-14087:
-

master version pushed. [~apurtell] let me know which, if any, backports you're 
working on and I'll pick these up again once I've finished HBASE-14085 for 
master.

> ensure correct ASF policy compliant headers on source/docs
> --
>
> Key: HBASE-14087
> URL: https://issues.apache.org/jira/browse/HBASE-14087
> Project: HBase
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
> Attachments: HBASE-14087.1.patch, HBASE-14087.2.patch, 
> HBASE-14087.2.patch
>
>
> * we have a couple of files that are missing their headers.
> * we have one file using old-style ASF copyrights



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14155) StackOverflowError in reverse scan

2015-07-29 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14155:
---
Attachment: 14155-branch-1.txt

Patch for branch-1 based on Ram's patch.


> StackOverflowError in reverse scan
> --
>
> Key: HBASE-14155
> URL: https://issues.apache.org/jira/browse/HBASE-14155
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Scanners
>Affects Versions: 1.1.0
>Reporter: James Taylor
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
>  Labels: Phoenix
> Attachments: 14155-branch-1.txt, HBASE-14155.patch, 
> ReproReverseScanStackOverflow.java, 
> ReproReverseScanStackOverflowCoprocessor.java
>
>
> A stack overflow may occur when a reverse scan is done. To reproduce (on a 
> Mac), use the following steps:
> - Download the Phoenix 4.5.0 RC here: 
> https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.5.0-HBase-1.1-rc0/bin/
> - Copy the phoenix-4.5.0-HBase-1.1-server.jar into the HBase lib directory 
> (removing any earlier Phoenix version if there was one installed)
> - Stop and restart HBase
> - From the bin directory of the Phoenix binary distribution, start sqlline 
> like this: ./sqlline.py localhost
> - Create a new table and populate it like this:
> {code}
> create table desctest (k varchar primary key desc);
> upsert into desctest values ('a');
> upsert into desctest values ('ab');
> upsert into desctest values ('b');
> {code}
> - Note that the following query works fine at this point:
> {code}
> select * from desctest order by k;
> +--+
> |K |
> +--+
> | a|
> | ab   |
> | b|
> +--+
> {code}
> - Stop and start HBase
> - Rerun the above query again and you'll get  a StackOverflowError at 
> StoreFileScanner.seekToPreviousRow()
> {code}
> select * from desctest order by k;
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> DESCTEST,,1437847235264.a74d70e6a8b36e24d1ea1a70edb0cdf7.: null
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.nextRaw(BaseScannerRegionObserver.java:352)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:77)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2393)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2112)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.StackOverflowError
>   at 
> org.apache.hadoop.hbase.io.hfile.ChecksumUtil.numChunks(ChecksumUtil.java:201)
>   at 
> org.apache.hadoop.hbase.io.hfile.ChecksumUtil.numBytes(ChecksumUtil.java:189)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.totalChecksumBytes(HFileBlock.java:1826)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.getBufferReadOnly(HFileBlock.java:356)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getEncodedBuffer(HFileReaderV2.java:1211)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getFirstKeyInBlock(HFileReaderV2.java:1307)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:657)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:646)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:425)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:449)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:449)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:449)
> {code}
> I've attempted to reproduce this in a standalone HBase unit test, but have 
> not been able to (but I'll attach my attempt which mimics what Phoeni

[jira] [Updated] (HBASE-14153) Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY

2015-07-29 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-14153:
-
Status: Patch Available  (was: Open)

> Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY
> --
>
> Key: HBASE-14153
> URL: https://issues.apache.org/jira/browse/HBASE-14153
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Trivial
> Attachments: HBASE-14153.patch
>
>
> The constant should read {{PROCE _*DU*_ RE}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14153) Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY

2015-07-29 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-14153:
-
Status: Open  (was: Patch Available)

> Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY
> --
>
> Key: HBASE-14153
> URL: https://issues.apache.org/jira/browse/HBASE-14153
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Trivial
> Attachments: HBASE-14153.patch
>
>
> The constant should read {{PROCE _*DU*_ RE}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12751) Allow RowLock to be reader writer

2015-07-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646636#comment-14646636
 ] 

Hadoop QA commented on HBASE-12751:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12747804/HBASE-12751-v18.patch
  against master branch at commit 05de2ec5801fbba4577fb363f858a6e6f282c104.
  ATTACHMENT ID: 12747804

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 106 
new or modified tests.

{color:red}-1 Anti-pattern{color}.  The patch appears to 
have anti-pattern where BYTES_COMPARATOR was omitted:
 -getRegionInfo(), -1, new TreeMap>());.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:red}-1 javac{color}.  The applied patch generated 27 javac compiler 
warnings (more than the master's current 26 warnings).

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 4 
warning messages.

{color:red}-1 checkstyle{color}.  The applied patch generated 
1872 checkstyle errors (more than the master's current 1864 errors).

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  final long now, List clusterIds, long nonceGroup, long nonce, 
MultiVersionConsistencyControl mvcc) {
+  long logSeqNum, final long now, List clusterIds, long nonceGroup, 
long nonce, MultiVersionConsistencyControl mvcc) {
+  long txid = log.append(htd, hri, new WALKey(hri.getEncodedNameAsBytes(), 
hri.getTable(), now, mvcc),
+new WALKey(info.getEncodedNameAsBytes(), htd.getTableName(), 
System.currentTimeMillis(), mvcc),
+new WALKey(hri.getEncodedNameAsBytes(), htd.getTableName(), 
System.currentTimeMillis(), mvcc),
+final WALKey logkey = new WALKey(hri.getEncodedNameAsBytes(), 
hri.getTable(), now, mvcc);

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.regionserver.TestRegionReplicaFailover
  org.apache.hadoop.hbase.regionserver.TestPerColumnFamilyFlush

 {color:red}-1 core zombie tests{color}.  There are 4 zombie test(s):   
at 
org.apache.hadoop.hbase.TestIOFencing.testFencingAroundCompaction(TestIOFencing.java:229)
at 
org.apache.hadoop.hbase.security.visibility.TestVisibilityLabelsWithDefaultVisLabelService.testAddLabels(TestVisibilityLabelsWithDefaultVisLabelService.java:110)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14925//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14925//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14925//artifact/patchprocess/checkstyle-aggregate.html

Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14925//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14925//console

This message is automatically generated.

> Allow RowLock to be reader writer
> -
>
> Key: HBASE-12751
> URL: https://issues.apache.org/jira/browse/HBASE-12751
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-12751-v1.patch, HBASE-12751-v10.patch, 
> HBASE-12751-v10.patch, HBASE-12751-v11.patch, HBASE-12751-v12.patch, 
> HBASE-12751-v13.patch, HBASE-12751-v14.patch, HBASE-12751-v15.patch, 
> HBASE-12751-v16.patch, HBASE-12751-v17.patch, HBASE-12751-v18.patch, 
> HBASE-12751-v2.patch, HBASE-12751-v3.patch, HBASE-12751-v4.patch, 
> HBASE-12751-v5.patch, HBASE-12751-v6.patch, HBASE-12751-v7.patch, 
> HBASE-12751-v8.patch, HBASE-12751-v9.patch, HBASE-12751.patch
>
>
> Right now every write operation grabs a row lock. This is to prevent values 
> from changing during a read modify write operation (increment or check and 
> put). However it limits parallelism in several different scenarios

[jira] [Commented] (HBASE-12853) distributed write pattern to replace ad hoc 'salting'

2015-07-29 Thread Michael Segel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646617#comment-14646617
 ] 

Michael Segel  commented on HBASE-12853:


@Anoop,

Yes, that is correct. 
It was my misunderstanding on the client/server break. 
(I program to the APIs and don't look at the source code.) 

I believe I did mention this after your last post correcting my mistake.

Again, this is pretty simple... you're overloading scan() so that it first 
checks whether the underlying table is bucketed. A simple way to do this is to 
check the number of buckets: if it's 0, the table is not bucketed and you just 
run the scan as normal; if it's a positive integer, you parallelize the scan.

You would then need to wait until all of the result sets return before you can 
funnel the data in to a single result set to be returned to the user. 

Of course I'm assuming that each result set will start to send back results 
prior to completion of the ensuing scan. 
Note too that these will be range scans. 

One other side effect is that if the scan is a full table scan... things will 
get a bit messy. (Well, maybe not...)
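The check-then-parallelize flow described above can be sketched end to end. The 
following is a minimal, self-contained Java simulation (all names are made up 
for illustration, and it is NOT HBase client API — an in-memory sorted set 
stands in for real region scans) showing the salt-on-write, scan-per-bucket, 
strip-and-merge read path:

```java
import java.util.*;

// Hypothetical sketch of the bucketed-scan idea: writes spread rows across
// N buckets by prefixing the key; reads run one sorted "scan" per bucket,
// strip the prefix, and heap-merge the streams into one sorted result.
public class SaltedScanSketch {

    // Deterministic bucket choice (stand-in for a hash of the key).
    static int bucketFor(String key, int buckets) {
        return Math.floorMod(key.hashCode(), buckets);
    }

    // Write path: salt the key with its bucket number.
    static String salted(String key, int buckets) {
        return bucketFor(key, buckets) + "-" + key;
    }

    // Read path. A real implementation would issue one range scan per
    // bucket prefix; here each "scan" is a filter over an in-memory set.
    static List<String> mergedScan(NavigableSet<String> table, int buckets) {
        PriorityQueue<String> heap = new PriorityQueue<>();
        for (int b = 0; b < buckets; b++) {
            String prefix = b + "-";
            for (String row : table) {                        // per-bucket "scan"
                if (row.startsWith(prefix)) {
                    heap.add(row.substring(prefix.length())); // strip the salt
                }
            }
        }
        List<String> out = new ArrayList<>();
        while (!heap.isEmpty()) {
            out.add(heap.poll());                             // global sort order
        }
        return out;
    }

    public static void main(String[] args) {
        NavigableSet<String> table = new TreeSet<>();
        for (String k : new String[] {"b", "a", "zz", "ab"}) {
            table.add(salted(k, 4));
        }
        System.out.println(mergedScan(table, 4)); // prints [a, ab, b, zz]
    }
}
```

Since each per-bucket stream is already sorted, a client could also merge 
lazily through the heap rather than collecting every result set first.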

> distributed write pattern to replace ad hoc 'salting'
> -
>
> Key: HBASE-12853
> URL: https://issues.apache.org/jira/browse/HBASE-12853
> Project: HBase
>  Issue Type: New Feature
>Reporter: Michael Segel 
> Fix For: 2.0.0
>
>
> In reviewing HBASE-11682 (Description of Hot Spotting), one of the issues is 
> that while 'salting' alleviated  regional hot spotting, it increased the 
> complexity required to utilize the data.  
> Through the use of coprocessors, it should be possible to offer a method 
> which distributes the data on write across the cluster and then manages 
> reading the data returning a sort ordered result set, abstracting the 
> underlying process. 
> On table creation, a flag is set to indicate that this is a parallel table. 
> On insert in to the table, if the flag is set to true then a prefix is added 
> to the key, e.g. <server#>-<key>, where server # is an integer between 1 and 
> the number of region servers defined.  
> On read (scan) for each region server defined, a separate scan is created 
> adding the prefix. Since each scan will be in sort order, its possible to 
> strip the prefix and return the lowest value key from each of the subsets. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14164) Display primary region replicas distribution on table.jsp

2015-07-29 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646603#comment-14646603
 ] 

Jerry He commented on HBASE-14164:
--

Got it. 
Helpful improvement.

+1

> Display primary region replicas distribution on table.jsp
> -
>
> Key: HBASE-14164
> URL: https://issues.apache.org/jira/browse/HBASE-14164
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 14164-v1.txt, 14164-v2.txt, 14164-v2.txt, 14164-v3.txt, 
> table-with-primary.png
>
>
> While working on HBASE-14110, I enhanced table.jsp with display of primary 
> region replicas across region servers.
> This gives user clear idea on the distribution of primary replicas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14164) Display primary region replicas distribution on table.jsp

2015-07-29 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646587#comment-14646587
 ] 

Ted Yu commented on HBASE-14164:


From master UI, you can click on any table.
If the table has REGION_REPLICATION greater than 1, you would see the new 
column.
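To get a table that exercises this view, one can create it with more than one 
replica from the HBase shell (the table and family names below are made up for 
illustration; REGION_REPLICATION is the table attribute the new column keys off):

```
create 't_with_replicas', 'cf', {REGION_REPLICATION => 2}
```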

> Display primary region replicas distribution on table.jsp
> -
>
> Key: HBASE-14164
> URL: https://issues.apache.org/jira/browse/HBASE-14164
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 14164-v1.txt, 14164-v2.txt, 14164-v2.txt, 14164-v3.txt, 
> table-with-primary.png
>
>
> While working on HBASE-14110, I enhanced table.jsp with display of primary 
> region replicas across region servers.
> This gives user clear idea on the distribution of primary replicas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14164) Display primary region replicas distribution on table.jsp

2015-07-29 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646574#comment-14646574
 ] 

Jerry He commented on HBASE-14164:
--

Hi, Ted

How do I navigate to the screen shot from my master-status UI page?

> Display primary region replicas distribution on table.jsp
> -
>
> Key: HBASE-14164
> URL: https://issues.apache.org/jira/browse/HBASE-14164
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 14164-v1.txt, 14164-v2.txt, 14164-v2.txt, 14164-v3.txt, 
> table-with-primary.png
>
>
> While working on HBASE-14110, I enhanced table.jsp with display of primary 
> region replicas across region servers.
> This gives user clear idea on the distribution of primary replicas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14153) Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY

2015-07-29 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HBASE-14153:

Assignee: Konstantin Shvachko
  Status: Patch Available  (was: Open)

> Typo in ProcedureManagerHost.MASTER_PROCEUDRE_CONF_KEY
> --
>
> Key: HBASE-14153
> URL: https://issues.apache.org/jira/browse/HBASE-14153
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.0.1
>Reporter: Konstantin Shvachko
>Assignee: Konstantin Shvachko
>Priority: Trivial
> Attachments: HBASE-14153.patch
>
>
> The constant should read {{PROCE _*DU*_ RE}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14087) ensure correct ASF policy compliant headers on source/docs

2015-07-29 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646548#comment-14646548
 ] 

Andrew Purtell commented on HBASE-14087:


I think this can be committed

> ensure correct ASF policy compliant headers on source/docs
> --
>
> Key: HBASE-14087
> URL: https://issues.apache.org/jira/browse/HBASE-14087
> Project: HBase
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
> Attachments: HBASE-14087.1.patch, HBASE-14087.2.patch, 
> HBASE-14087.2.patch
>
>
> * we have a couple of files that are missing their headers.
> * we have one file using old-style ASF copyrights



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14164) Display primary region replicas distribution on table.jsp

2015-07-29 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-14164:
---
Attachment: 14164-v3.txt

> Display primary region replicas distribution on table.jsp
> -
>
> Key: HBASE-14164
> URL: https://issues.apache.org/jira/browse/HBASE-14164
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Minor
> Attachments: 14164-v1.txt, 14164-v2.txt, 14164-v2.txt, 14164-v3.txt, 
> table-with-primary.png
>
>
> While working on HBASE-14110, I enhanced table.jsp with display of primary 
> region replicas across region servers.
> This gives user clear idea on the distribution of primary replicas.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-12853) distributed write pattern to replace ad hoc 'salting'

2015-07-29 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646411#comment-14646411
 ] 

Anoop Sam John edited comment on HBASE-12853 at 7/29/15 5:45 PM:
-

As per the discussion in the Jira comments, we can not do this as a server side 
feature. This will be a client side thing.  Priority can be marked minor or 
major that is not the main thing IMHO.  What matters is a small doc abt the 
approach and patch. Many of us will be happy to review that when it comes. As 
far as a feature is value added for the team,we all are open for those.   Are 
you going to work on this and give patch?   If not there is no point in keeping 
jira open.  We can see any one else willing to take this up. If none better 
close it as later/wont implement.  


was (Author: anoop.hbase):
As per the discussion in the Jira comments, we can not do this as a server side 
feature. This will be a client side thing.  Priority can be marked minor or 
major that is not the main thing IMHO.  What matters is the a small doc abt the 
approach and patch. Many of us will be happy to review that when it comes. As 
far as some feature are value added for the team,we all are open for those.   
Are you going to work on this?   If not there is no point in keeping it open.  
We can see any one else willing to take this up. If none better close it as 
later/wont implement.  

> distributed write pattern to replace ad hoc 'salting'
> -
>
> Key: HBASE-12853
> URL: https://issues.apache.org/jira/browse/HBASE-12853
> Project: HBase
>  Issue Type: New Feature
>Reporter: Michael Segel 
> Fix For: 2.0.0
>
>
> In reviewing HBASE-11682 (Description of Hot Spotting), one of the issues is 
> that while 'salting' alleviated  regional hot spotting, it increased the 
> complexity required to utilize the data.  
> Through the use of coprocessors, it should be possible to offer a method 
> which distributes the data on write across the cluster and then manages 
> reading the data returning a sort ordered result set, abstracting the 
> underlying process. 
> On table creation, a flag is set to indicate that this is a parallel table. 
> On insert into the table, if the flag is set to true then a prefix is added 
> to the key, where the server # in the prefix is an integer between 1 and the 
> number of region servers defined.  
> On read (scan), for each region server defined a separate scan is created 
> adding the prefix. Since each scan will be in sort order, it's possible to 
> strip the prefix and return the lowest-value key from each of the subsets. 
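The write/read pattern described above can be sketched in miniature. This is an illustrative toy, not an HBase API: the class name PrefixedStore, the round-robin prefix choice, and the in-memory TreeMap "shards" are all assumptions standing in for region servers and their sorted stores.

```java
import java.util.*;
import java.util.stream.*;

// Toy sketch of the proposed pattern: writes get a rotating server-number
// prefix so rows spread across shards; reads scan every shard, strip the
// prefix, and merge the already-sorted sub-results.
public class PrefixedStore {
    static final int NUM_SERVERS = 3;          // assumed number of region servers
    private final List<TreeMap<String, String>> shards = new ArrayList<>();
    private int next = 0;                      // round-robin prefix chooser

    public PrefixedStore() {
        for (int i = 0; i < NUM_SERVERS; i++) shards.add(new TreeMap<>());
    }

    // On write: prepend "serverNum-" so consecutive keys land on different shards.
    public void put(String key, String value) {
        int server = next++ % NUM_SERVERS;
        shards.get(server).put(server + "-" + key, value);
    }

    // On read: one "scan" per shard (each TreeMap is sorted), strip the
    // prefix, and merge into a single sorted view.
    public List<String> scanKeys() {
        return shards.stream()
            .flatMap(m -> m.keySet().stream())
            .map(k -> k.substring(k.indexOf('-') + 1))  // strip "N-" prefix
            .sorted()
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        PrefixedStore s = new PrefixedStore();
        for (String k : new String[]{"c", "a", "d", "b"}) s.put(k, "v");
        System.out.println(s.scanKeys());      // prints [a, b, c, d]
    }
}
```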





[jira] [Commented] (HBASE-14155) StackOverflowError in reverse scan

2015-07-29 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646481#comment-14646481
 ] 

Anoop Sam John commented on HBASE-14155:


+1

> StackOverflowError in reverse scan
> --
>
> Key: HBASE-14155
> URL: https://issues.apache.org/jira/browse/HBASE-14155
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Scanners
>Affects Versions: 1.1.0
>Reporter: James Taylor
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
>  Labels: Phoenix
> Attachments: HBASE-14155.patch, ReproReverseScanStackOverflow.java, 
> ReproReverseScanStackOverflowCoprocessor.java
>
>
> A stack overflow may occur when a reverse scan is done. To reproduce (on a 
> Mac), use the following steps:
> - Download the Phoenix 4.5.0 RC here: 
> https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.5.0-HBase-1.1-rc0/bin/
> - Copy the phoenix-4.5.0-HBase-1.1-server.jar into the HBase lib directory 
> (removing any earlier Phoenix version if there was one installed)
> - Stop and restart HBase
> - From the bin directory of the Phoenix binary distribution, start sqlline 
> like this: ./sqlline.py localhost
> - Create a new table and populate it like this:
> {code}
> create table desctest (k varchar primary key desc);
> upsert into desctest values ('a');
> upsert into desctest values ('ab');
> upsert into desctest values ('b');
> {code}
> - Note that the following query works fine at this point:
> {code}
> select * from desctest order by k;
> +--+
> |K |
> +--+
> | a|
> | ab   |
> | b|
> +--+
> {code}
> - Stop and start HBase
> - Rerun the above query and you'll get a StackOverflowError at 
> StoreFileScanner.seekToPreviousRow()
> {code}
> select * from desctest order by k;
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> DESCTEST,,1437847235264.a74d70e6a8b36e24d1ea1a70edb0cdf7.: null
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.nextRaw(BaseScannerRegionObserver.java:352)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:77)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2393)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2112)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.StackOverflowError
>   at 
> org.apache.hadoop.hbase.io.hfile.ChecksumUtil.numChunks(ChecksumUtil.java:201)
>   at 
> org.apache.hadoop.hbase.io.hfile.ChecksumUtil.numBytes(ChecksumUtil.java:189)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.totalChecksumBytes(HFileBlock.java:1826)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.getBufferReadOnly(HFileBlock.java:356)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getEncodedBuffer(HFileReaderV2.java:1211)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getFirstKeyInBlock(HFileReaderV2.java:1307)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:657)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:646)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:425)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:449)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:449)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:449)
> {code}
> I've attempted to reproduce this in a standalone HBase unit test, but have 
> not been able to (but I'll attach my attempt which mimics what Phoenix is 
> doing).
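For what it's worth, the stack trace above has the classic recursion-vs-iteration shape: StoreFileScanner.seekToPreviousRow re-enters itself (line 449 calling line 449) once per row it has to skip, so stack depth grows with the data. The toy below uses hypothetical names and a NavigableSet, not the HBase code, purely to contrast the two forms:

```java
import java.util.*;

// Toy contrast: a backward seek that recurses once per skipped row
// versus the equivalent loop with constant stack depth.
public class ReverseSeek {
    // Recursive form: one stack frame per step backwards.
    static Integer seekRecursive(NavigableSet<Integer> rows, int key, int acceptBelow) {
        Integer prev = rows.lower(key);
        if (prev == null || prev < acceptBelow) return prev;  // found or exhausted
        return seekRecursive(rows, prev, acceptBelow);        // deep runs overflow
    }

    // Iterative form: same result, constant stack depth.
    static Integer seekIterative(NavigableSet<Integer> rows, int key, int acceptBelow) {
        Integer prev = rows.lower(key);
        while (prev != null && prev >= acceptBelow) {
            prev = rows.lower(prev);
        }
        return prev;
    }

    public static void main(String[] args) {
        NavigableSet<Integer> rows = new TreeSet<>();
        for (int i = 0; i < 200_000; i++) rows.add(i);
        // The loop walks 200k rows back without growing the stack.
        System.out.println(seekIterative(rows, 200_000, 1));  // prints 0
        try {
            seekRecursive(rows, 200_000, 1);                  // ~200k frames deep
        } catch (StackOverflowError e) {
            System.out.println("recursive form overflowed");
        }
    }
}
```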




[jira] [Commented] (HBASE-8778) Region assigments scan table directory making them slow for huge tables

2015-07-29 Thread Lars Francke (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646464#comment-14646464
 ] 

Lars Francke commented on HBASE-8778:
-

I know this has been long closed but it introduced the 
FSTableDescriptorMigrationToSubdir class which handles the migration from the 
old to the new style.

The comment says it "will be removed for the major release after 0.96".

Are you okay with this being removed now?

If so, any suggestions on how to handle this code:
{code}
// Make sure the meta region directory exists!
if (!FSUtils.metaRegionExists(fs, rd)) {
  bootstrap(rd, c);
} else {
  // Migrate table descriptor files if necessary
  org.apache.hadoop.hbase.util.FSTableDescriptorMigrationToSubdir
.migrateFSTableDescriptorsIfNecessary(fs, rd);
}
{code}

I'll create a new JIRA if you think it's time to remove this.

> Region assigments scan table directory making them slow for huge tables
> ---
>
> Key: HBASE-8778
> URL: https://issues.apache.org/jira/browse/HBASE-8778
> Project: HBase
>  Issue Type: Improvement
>Reporter: Dave Latham
>Assignee: Dave Latham
>Priority: Critical
> Fix For: 0.98.0, 0.95.2
>
> Attachments: 8778-dirmodtime.txt, HBASE-8778-0.94.5-v2.patch, 
> HBASE-8778-0.94.5.patch, HBASE-8778-v2.patch, HBASE-8778-v3.patch, 
> HBASE-8778-v4.patch, HBASE-8778-v5.patch, HBASE-8778.patch
>
>
> On a table with 130k regions it takes about 3 seconds for a region server to 
> open a region once it has been assigned.
> Watching the threads for a region server running 0.94.5 that is opening many 
> such regions shows the thread opening the region in code like this:
> {noformat}
> "PRI IPC Server handler 4 on 60020" daemon prio=10 tid=0x2aaac07e9000 
> nid=0x6566 runnable [0x4c46d000]
>java.lang.Thread.State: RUNNABLE
> at java.lang.String.indexOf(String.java:1521)
> at java.net.URI$Parser.scan(URI.java:2912)
> at java.net.URI$Parser.parse(URI.java:3004)
> at java.net.URI.(URI.java:736)
> at org.apache.hadoop.fs.Path.initialize(Path.java:145)
> at org.apache.hadoop.fs.Path.(Path.java:126)
> at org.apache.hadoop.fs.Path.(Path.java:50)
> at 
> org.apache.hadoop.hdfs.protocol.HdfsFileStatus.getFullPath(HdfsFileStatus.java:215)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.makeQualified(DistributedFileSystem.java:252)
> at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:311)
> at 
> org.apache.hadoop.fs.FilterFileSystem.listStatus(FilterFileSystem.java:159)
> at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:842)
> at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:867)
> at org.apache.hadoop.hbase.util.FSUtils.listStatus(FSUtils.java:1168)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:269)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoPath(FSTableDescriptors.java:255)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.getTableInfoModtime(FSTableDescriptors.java:368)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.get(FSTableDescriptors.java:155)
> at 
> org.apache.hadoop.hbase.util.FSTableDescriptors.get(FSTableDescriptors.java:126)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:2834)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.openRegion(HRegionServer.java:2807)
> at sun.reflect.GeneratedMethodAccessor64.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at 
> org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320)
> at 
> org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)
> {noformat}
> To open the region, the region server first loads the latest 
> HTableDescriptor.  Since HBASE-4553 HTableDescriptor's are stored in the file 
> system at "/hbase//.tableinfo.".  The file with the 
> largest sequenceNum is the current descriptor.  This is done so that the 
> current descriptor is updated atomically.  However, since the filename is not 
> known in advance, FSTableDescriptors has to do a FileSystem.listStatus 
> operation which has to list all files in the directory to find it.  The 
> directory also contains all the region directories, so in our case it has to 
> load 130k FileStatus objects.  Even using a globStatus matching function 
> still transfers all the objects to the client be
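The lookup described above can be sketched as follows. The file names and the selection logic here are invented to match the pattern the description gives (".tableinfo." plus a sequence number), not actual HBase paths or code:

```java
import java.util.*;
import java.util.regex.*;

// Sketch of the expensive lookup: the current table descriptor is the
// ".tableinfo" file with the highest sequence number, so without knowing
// that number in advance the code must list the whole directory
// (130k region entries included) just to filter out a handful of names.
public class TableInfoLookup {
    static final Pattern TABLEINFO = Pattern.compile("\\.tableinfo\\.(\\d+)");

    // Scan every directory entry and keep the tableinfo name with the
    // largest sequence number; the scan itself is the costly part.
    static Optional<String> currentTableInfo(List<String> dirEntries) {
        return dirEntries.stream()
            .filter(n -> TABLEINFO.matcher(n).matches())
            .max(Comparator.comparingLong(
                n -> Long.parseLong(n.substring(".tableinfo.".length()))));
    }

    public static void main(String[] args) {
        List<String> entries = new ArrayList<>(
            List.of(".tableinfo.0000000001", ".tableinfo.0000000002"));
        // Simulated region directories that bloat the listing.
        for (int i = 0; i < 130_000; i++) entries.add("region-" + i);
        System.out.println(currentTableInfo(entries).get()); // prints .tableinfo.0000000002
    }
}
```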

[jira] [Commented] (HBASE-12853) distributed write pattern to replace ad hoc 'salting'

2015-07-29 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646411#comment-14646411
 ] 

Anoop Sam John commented on HBASE-12853:


As per the discussion in the Jira comments, we can not do this as a server-side 
feature; it will be a client-side thing.  Whether the priority is marked minor or 
major is not the main thing, IMHO.  What matters is a small doc about the 
approach and a patch. Many of us will be happy to review that when it comes. As 
long as a feature adds value for the team, we are all open to it.   
Are you going to work on this?   If not, there is no point in keeping it open.  
We can see if anyone else is willing to take this up; if no one is, better to close it as 
later/won't implement.  

> distributed write pattern to replace ad hoc 'salting'
> -
>
> Key: HBASE-12853
> URL: https://issues.apache.org/jira/browse/HBASE-12853
> Project: HBase
>  Issue Type: New Feature
>Reporter: Michael Segel 
> Fix For: 2.0.0
>
>
> In reviewing HBASE-11682 (Description of Hot Spotting), one of the issues is 
> that while 'salting' alleviated  regional hot spotting, it increased the 
> complexity required to utilize the data.  
> Through the use of coprocessors, it should be possible to offer a method 
> which distributes the data on write across the cluster and then manages 
> reading the data returning a sort ordered result set, abstracting the 
> underlying process. 
> On table creation, a flag is set to indicate that this is a parallel table. 
> On insert into the table, if the flag is set to true then a prefix is added 
> to the key, where the server # in the prefix is an integer between 1 and the 
> number of region servers defined.  
> On read (scan), for each region server defined a separate scan is created 
> adding the prefix. Since each scan will be in sort order, it's possible to 
> strip the prefix and return the lowest-value key from each of the subsets. 





[jira] [Commented] (HBASE-14098) Allow dropping caches behind compactions

2015-07-29 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646409#comment-14646409
 ] 

Elliott Clark commented on HBASE-14098:
---

Ping? This one solved kernel stalls for us.

> Allow dropping caches behind compactions
> 
>
> Key: HBASE-14098
> URL: https://issues.apache.org/jira/browse/HBASE-14098
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction, hadoop2, HFile
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-14098-v1.patch, HBASE-14098-v2.patch, 
> HBASE-14098-v3.patch, HBASE-14098-v4.patch, HBASE-14098-v5.patch, 
> HBASE-14098.patch
>
>






[jira] [Commented] (HBASE-14155) StackOverflowError in reverse scan

2015-07-29 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646406#comment-14646406
 ] 

Andrew Purtell commented on HBASE-14155:


Sure, I can look at this. Patch lgtm at first glance. Let me check that the new unit 
test reliably reproduces the issue when the fix is not applied, and a couple of 
other things. Assuming that checks out, I will commit this to the affected 
branches.


> StackOverflowError in reverse scan
> --
>
> Key: HBASE-14155
> URL: https://issues.apache.org/jira/browse/HBASE-14155
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Scanners
>Affects Versions: 1.1.0
>Reporter: James Taylor
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
>  Labels: Phoenix
> Attachments: HBASE-14155.patch, ReproReverseScanStackOverflow.java, 
> ReproReverseScanStackOverflowCoprocessor.java
>
>
> A stack overflow may occur when a reverse scan is done. To reproduce (on a 
> Mac), use the following steps:
> - Download the Phoenix 4.5.0 RC here: 
> https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.5.0-HBase-1.1-rc0/bin/
> - Copy the phoenix-4.5.0-HBase-1.1-server.jar into the HBase lib directory 
> (removing any earlier Phoenix version if there was one installed)
> - Stop and restart HBase
> - From the bin directory of the Phoenix binary distribution, start sqlline 
> like this: ./sqlline.py localhost
> - Create a new table and populate it like this:
> {code}
> create table desctest (k varchar primary key desc);
> upsert into desctest values ('a');
> upsert into desctest values ('ab');
> upsert into desctest values ('b');
> {code}
> - Note that the following query works fine at this point:
> {code}
> select * from desctest order by k;
> +--+
> |K |
> +--+
> | a|
> | ab   |
> | b|
> +--+
> {code}
> - Stop and start HBase
> - Rerun the above query and you'll get a StackOverflowError at 
> StoreFileScanner.seekToPreviousRow()
> {code}
> select * from desctest order by k;
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> DESCTEST,,1437847235264.a74d70e6a8b36e24d1ea1a70edb0cdf7.: null
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.nextRaw(BaseScannerRegionObserver.java:352)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:77)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2393)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2112)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.StackOverflowError
>   at 
> org.apache.hadoop.hbase.io.hfile.ChecksumUtil.numChunks(ChecksumUtil.java:201)
>   at 
> org.apache.hadoop.hbase.io.hfile.ChecksumUtil.numBytes(ChecksumUtil.java:189)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.totalChecksumBytes(HFileBlock.java:1826)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.getBufferReadOnly(HFileBlock.java:356)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getEncodedBuffer(HFileReaderV2.java:1211)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getFirstKeyInBlock(HFileReaderV2.java:1307)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:657)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:646)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:425)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:449)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:449)
>   at 
> org.apache.hadoop.hbase.regionserver.

[jira] [Commented] (HBASE-12751) Allow RowLock to be reader writer

2015-07-29 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646354#comment-14646354
 ] 

Elliott Clark commented on HBASE-12751:
---

I'm still working on the last two test failures. Everything seems fine except 
that the regionservers won't go down, which causes the cluster to stay up and the 
test to time out.

> Allow RowLock to be reader writer
> -
>
> Key: HBASE-12751
> URL: https://issues.apache.org/jira/browse/HBASE-12751
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-12751-v1.patch, HBASE-12751-v10.patch, 
> HBASE-12751-v10.patch, HBASE-12751-v11.patch, HBASE-12751-v12.patch, 
> HBASE-12751-v13.patch, HBASE-12751-v14.patch, HBASE-12751-v15.patch, 
> HBASE-12751-v16.patch, HBASE-12751-v17.patch, HBASE-12751-v18.patch, 
> HBASE-12751-v2.patch, HBASE-12751-v3.patch, HBASE-12751-v4.patch, 
> HBASE-12751-v5.patch, HBASE-12751-v6.patch, HBASE-12751-v7.patch, 
> HBASE-12751-v8.patch, HBASE-12751-v9.patch, HBASE-12751.patch
>
>
> Right now every write operation grabs a row lock. This is to prevent values 
> from changing during a read modify write operation (increment or check and 
> put). However it limits parallelism in several different scenarios.
> If there are several puts to the same row but different columns or stores 
> then this is very limiting.
> If there are puts to the same column then mvcc number should ensure a 
> consistent ordering. So locking is not needed.
> However locking for check and put or increment is still needed.





[jira] [Updated] (HBASE-12751) Allow RowLock to be reader writer

2015-07-29 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-12751:
--
Attachment: HBASE-12751-v18.patch

Here's a patch renaming MVCC's methods to show more clearly that they are all 
about read and write points, and less about the memstore.

It also includes some code formatting changes.

> Allow RowLock to be reader writer
> -
>
> Key: HBASE-12751
> URL: https://issues.apache.org/jira/browse/HBASE-12751
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver
>Affects Versions: 2.0.0, 1.3.0
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Fix For: 2.0.0, 1.3.0
>
> Attachments: HBASE-12751-v1.patch, HBASE-12751-v10.patch, 
> HBASE-12751-v10.patch, HBASE-12751-v11.patch, HBASE-12751-v12.patch, 
> HBASE-12751-v13.patch, HBASE-12751-v14.patch, HBASE-12751-v15.patch, 
> HBASE-12751-v16.patch, HBASE-12751-v17.patch, HBASE-12751-v18.patch, 
> HBASE-12751-v2.patch, HBASE-12751-v3.patch, HBASE-12751-v4.patch, 
> HBASE-12751-v5.patch, HBASE-12751-v6.patch, HBASE-12751-v7.patch, 
> HBASE-12751-v8.patch, HBASE-12751-v9.patch, HBASE-12751.patch
>
>
> Right now every write operation grabs a row lock. This is to prevent values 
> from changing during a read modify write operation (increment or check and 
> put). However it limits parallelism in several different scenarios.
> If there are several puts to the same row but different columns or stores 
> then this is very limiting.
> If there are puts to the same column then mvcc number should ensure a 
> consistent ordering. So locking is not needed.
> However locking for check and put or increment is still needed.
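A minimal sketch of the reader-writer row lock idea in the description (illustrative only, not the attached patch): plain puts take the lock in shared mode so they can overlap on the same row, while read-modify-write operations like increment take it exclusively. The class and the in-memory "table" are assumptions for the demo; MVCC ordering of overlapping puts is not modeled.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Toy reader-writer row lock: one ReentrantReadWriteLock per row key.
public class RowLocks {
    private final ConcurrentHashMap<String, ReentrantReadWriteLock> locks =
        new ConcurrentHashMap<>();
    private final ConcurrentHashMap<String, Long> table = new ConcurrentHashMap<>();

    private ReentrantReadWriteLock lockFor(String row) {
        return locks.computeIfAbsent(row, r -> new ReentrantReadWriteLock());
    }

    // Plain put: shared mode, so several puts to the same row may proceed in
    // parallel (MVCC, not shown here, would order their results).
    public void put(String row, long value) {
        ReentrantReadWriteLock l = lockFor(row);
        l.readLock().lock();
        try {
            table.put(row, value);
        } finally {
            l.readLock().unlock();
        }
    }

    // Increment: exclusive mode, because it must read a stable value and
    // write back without interleaving.
    public long increment(String row, long delta) {
        ReentrantReadWriteLock l = lockFor(row);
        l.writeLock().lock();
        try {
            long next = table.getOrDefault(row, 0L) + delta;
            table.put(row, next);
            return next;
        } finally {
            l.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        RowLocks r = new RowLocks();
        r.put("row1", 5);
        System.out.println(r.increment("row1", 2)); // prints 7
    }
}
```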





[jira] [Commented] (HBASE-12853) distributed write pattern to replace ad hoc 'salting'

2015-07-29 Thread Michael Segel (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12853?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646347#comment-14646347
 ] 

Michael Segel  commented on HBASE-12853:


@Sean, 

As I have said before... Apache doesn't indemnify committers (actually it's the 
reverse) and there is no upside for me to offset the risk. 

In a nutshell it would be pointless in having a discussion on why I used the 
term trivial and why I rated this as a low priority. 

BTW, there are 11 watchers... why don't you ask those watchers who are also 
committers and leaders of the HBase project, why they didn't raise the 
priority? 

I don't wish to seem rude, but if you're going to lecture someone, you had 
better realize that some will ignore you, others will mock you... 

To your point, this was the first JIRA that I raised.  I assumed that those who 
volunteer their time would also take the time to assess the value of the 
suggestion.  Clearly not.  That was my mistake. 

To be honest, I lack the patience to suffer fools...  




> distributed write pattern to replace ad hoc 'salting'
> -
>
> Key: HBASE-12853
> URL: https://issues.apache.org/jira/browse/HBASE-12853
> Project: HBase
>  Issue Type: New Feature
>Reporter: Michael Segel 
> Fix For: 2.0.0
>
>
> In reviewing HBASE-11682 (Description of Hot Spotting), one of the issues is 
> that while 'salting' alleviated  regional hot spotting, it increased the 
> complexity required to utilize the data.  
> Through the use of coprocessors, it should be possible to offer a method 
> which distributes the data on write across the cluster and then manages 
> reading the data returning a sort ordered result set, abstracting the 
> underlying process. 
> On table creation, a flag is set to indicate that this is a parallel table. 
> On insert into the table, if the flag is set to true then a prefix is added 
> to the key, where the server # in the prefix is an integer between 1 and the 
> number of region servers defined.  
> On read (scan), for each region server defined a separate scan is created 
> adding the prefix. Since each scan will be in sort order, it's possible to 
> strip the prefix and return the lowest-value key from each of the subsets. 





[jira] [Commented] (HBASE-14163) hbase master stop loops both processes forever

2015-07-29 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646330#comment-14646330
 ] 

Allen Wittenauer commented on HBASE-14163:
--

So, I just set -Djava.net.preferIPv4Stack=true for HBASE_OPTS in hbase-env.sh 
and still see the same behavior, minus trying to use IPv6.

This is on Mac OS X 10.9.5 with JDK 1.7.0_67.

> hbase master stop loops both processes forever
> --
>
> Key: HBASE-14163
> URL: https://issues.apache.org/jira/browse/HBASE-14163
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.0.0
>Reporter: Allen Wittenauer
>
> It would appear that there is an infinite loop in the zk client connection 
> code when performing a master stop when no external zk servers are configured.





[jira] [Commented] (HBASE-14155) StackOverflowError in reverse scan

2015-07-29 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646302#comment-14646302
 ] 

James Taylor commented on HBASE-14155:
--

Excellent work tracking this bug down, [~ram_krish]. [~lhofhansl] is on 
vacation, but maybe [~apurtell] or [~stack] has time to review?

> StackOverflowError in reverse scan
> --
>
> Key: HBASE-14155
> URL: https://issues.apache.org/jira/browse/HBASE-14155
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Scanners
>Affects Versions: 1.1.0
>Reporter: James Taylor
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
>  Labels: Phoenix
> Attachments: HBASE-14155.patch, ReproReverseScanStackOverflow.java, 
> ReproReverseScanStackOverflowCoprocessor.java
>
>
> A stack overflow may occur when a reverse scan is done. To reproduce (on a 
> Mac), use the following steps:
> - Download the Phoenix 4.5.0 RC here: 
> https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.5.0-HBase-1.1-rc0/bin/
> - Copy the phoenix-4.5.0-HBase-1.1-server.jar into the HBase lib directory 
> (removing any earlier Phoenix version if there was one installed)
> - Stop and restart HBase
> - From the bin directory of the Phoenix binary distribution, start sqlline 
> like this: ./sqlline.py localhost
> - Create a new table and populate it like this:
> {code}
> create table desctest (k varchar primary key desc);
> upsert into desctest values ('a');
> upsert into desctest values ('ab');
> upsert into desctest values ('b');
> {code}
> - Note that the following query works fine at this point:
> {code}
> select * from desctest order by k;
> +--+
> |K |
> +--+
> | a|
> | ab   |
> | b|
> +--+
> {code}
> - Stop and start HBase
> - Rerun the above query and you'll get a StackOverflowError at 
> StoreFileScanner.seekToPreviousRow()
> {code}
> select * from desctest order by k;
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> DESCTEST,,1437847235264.a74d70e6a8b36e24d1ea1a70edb0cdf7.: null
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.nextRaw(BaseScannerRegionObserver.java:352)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:77)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2393)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2112)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.StackOverflowError
>   at 
> org.apache.hadoop.hbase.io.hfile.ChecksumUtil.numChunks(ChecksumUtil.java:201)
>   at 
> org.apache.hadoop.hbase.io.hfile.ChecksumUtil.numBytes(ChecksumUtil.java:189)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.totalChecksumBytes(HFileBlock.java:1826)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.getBufferReadOnly(HFileBlock.java:356)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getEncodedBuffer(HFileReaderV2.java:1211)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getFirstKeyInBlock(HFileReaderV2.java:1307)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:657)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:646)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:425)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:449)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:449)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:449)
> {code}
> I've attempted to reproduce this in a sta

[jira] [Commented] (HBASE-14155) StackOverflowError in reverse scan

2015-07-29 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646284#comment-14646284
 ] 

ramkrishna.s.vasudevan commented on HBASE-14155:


Any comments on this patch?

> StackOverflowError in reverse scan
> --
>
> Key: HBASE-14155
> URL: https://issues.apache.org/jira/browse/HBASE-14155
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, Scanners
>Affects Versions: 1.1.0
>Reporter: James Taylor
>Assignee: ramkrishna.s.vasudevan
>Priority: Critical
>  Labels: Phoenix
> Attachments: HBASE-14155.patch, ReproReverseScanStackOverflow.java, 
> ReproReverseScanStackOverflowCoprocessor.java
>
>
> A stack overflow may occur when a reverse scan is done. To reproduce (on a 
> Mac), use the following steps:
> - Download the Phoenix 4.5.0 RC here: 
> https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.5.0-HBase-1.1-rc0/bin/
> - Copy the phoenix-4.5.0-HBase-1.1-server.jar into the HBase lib directory 
> (removing any earlier Phoenix version if there was one installed)
> - Stop and restart HBase
> - From the bin directory of the Phoenix binary distribution, start sqlline 
> like this: ./sqlline.py localhost
> - Create a new table and populate it like this:
> {code}
> create table desctest (k varchar primary key desc);
> upsert into desctest values ('a');
> upsert into desctest values ('ab');
> upsert into desctest values ('b');
> {code}
> - Note that the following query works fine at this point:
> {code}
> select * from desctest order by k;
> +--+
> |K |
> +--+
> | a|
> | ab   |
> | b|
> +--+
> {code}
> - Stop and start HBase
> - Rerun the above query again and you'll get  a StackOverflowError at 
> StoreFileScanner.seekToPreviousRow()
> {code}
> select * from desctest order by k;
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> DESCTEST,,1437847235264.a74d70e6a8b36e24d1ea1a70edb0cdf7.: null
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
>   at 
> org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.nextRaw(BaseScannerRegionObserver.java:352)
>   at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:77)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2393)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2112)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.StackOverflowError
>   at 
> org.apache.hadoop.hbase.io.hfile.ChecksumUtil.numChunks(ChecksumUtil.java:201)
>   at 
> org.apache.hadoop.hbase.io.hfile.ChecksumUtil.numBytes(ChecksumUtil.java:189)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.totalChecksumBytes(HFileBlock.java:1826)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileBlock.getBufferReadOnly(HFileBlock.java:356)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getEncodedBuffer(HFileReaderV2.java:1211)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getFirstKeyInBlock(HFileReaderV2.java:1307)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:657)
>   at 
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:646)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:425)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:449)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:449)
>   at 
> org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:449)
> {code}
> I've attempted to reproduce this in a standalone HBase unit test, but have 
> not been able to (but I'll attach my attempt which m
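[Editor's note] The repeated seekToPreviousRow frames at line 449 in the trace 
above point to unbounded self-recursion: each call seeks before the previous 
row by calling itself again. A hedged illustration of that failure mode (a toy 
model with made-up names, not the actual StoreFileScanner code) shows how 
rewriting the self-call as a loop keeps stack depth constant:

```java
// Toy model of the overflow: "seeking to the previous row" recursively,
// once per row, consumes one stack frame per step and eventually throws
// StackOverflowError for a long enough chain of rows.
public class SeekSketch {
    // Recursive shape, mirroring the repeated frames in the stack trace.
    static int seekRecursive(int key) {
        if (key <= 0) {
            return 0;                   // reached the first row
        }
        return seekRecursive(key - 1);  // "seek before" the previous row
    }

    // Iterative shape with the same result but O(1) stack usage.
    static int seekIterative(int key) {
        while (key > 0) {
            key--;                      // step back one row
        }
        return key;
    }

    public static void main(String[] args) {
        // A million steps is fine iteratively but would overflow recursively.
        System.out.println(seekIterative(1_000_000)); // prints 0
    }
}
```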

[jira] [Commented] (HBASE-14105) Add shell tests for Snapshot

2015-07-29 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646281#comment-14646281
 ] 

Ashish Singhi commented on HBASE-14105:
---

Ping for reviews!

> Add shell tests for Snapshot
> 
>
> Key: HBASE-14105
> URL: https://issues.apache.org/jira/browse/HBASE-14105
> Project: HBase
>  Issue Type: Sub-task
>  Components: test
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
> Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>
> Attachments: HBASE-14105-0.98.patch, HBASE-14105-branch-1.0.patch, 
> HBASE-14105.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14087) ensure correct ASF policy compliant headers on source/docs

2015-07-29 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646145#comment-14646145
 ] 

Sean Busbey commented on HBASE-14087:
-

The failure is not related (sorry to whoever at Apache Lens had their test 
killed :/ )

> ensure correct ASF policy compliant headers on source/docs
> --
>
> Key: HBASE-14087
> URL: https://issues.apache.org/jira/browse/HBASE-14087
> Project: HBase
>  Issue Type: Sub-task
>  Components: build
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Blocker
> Attachments: HBASE-14087.1.patch, HBASE-14087.2.patch, 
> HBASE-14087.2.patch
>
>
> * we have a couple of files that are missing their headers.
> * we have one file using old-style ASF copyrights



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14163) hbase master stop loops both processes forever

2015-07-29 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646153#comment-14646153
 ] 

Allen Wittenauer commented on HBASE-14163:
--

How long did it take for your hbase master to shut down?

> hbase master stop loops both processes forever
> --
>
> Key: HBASE-14163
> URL: https://issues.apache.org/jira/browse/HBASE-14163
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.0.0
>Reporter: Allen Wittenauer
>
> It would appear that there is an infinite loop in the zk client connection 
> code when performing a master stop when no external zk servers are configured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13408) HBase In-Memory Memstore Compaction

2015-07-29 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646130#comment-14646130
 ] 

Anoop Sam John commented on HBASE-13408:


I agree with both of these points, and this is exactly what HBASE-10713 is 
doing. Sorry I left that in between, as I got busy completing the off-heap 
read path project. I will revive that JIRA soon and would appreciate joint 
work if possible.

> HBase In-Memory Memstore Compaction
> ---
>
> Key: HBASE-13408
> URL: https://issues.apache.org/jira/browse/HBASE-13408
> Project: HBase
>  Issue Type: New Feature
>Reporter: Eshcar Hillel
> Attachments: 
> HBaseIn-MemoryMemstoreCompactionDesignDocument-ver02.pdf, 
> HBaseIn-MemoryMemstoreCompactionDesignDocument.pdf, 
> InMemoryMemstoreCompactionEvaluationResults.pdf
>
>
> A store unit holds a column family in a region, where the memstore is its 
> in-memory component. The memstore absorbs all updates to the store; from time 
> to time these updates are flushed to a file on disk, where they are 
> compacted. Unlike disk components, the memstore is not compacted until it is 
> written to the filesystem and optionally to block-cache. This may result in 
> underutilization of the memory due to duplicate entries per row, for example, 
> when hot data is continuously updated. 
> Generally, the faster data accumulates in memory, the more flushes are 
> triggered and the more frequently the data sinks to disk, slowing down 
> retrieval of even very recent data.
> In high-churn workloads, compacting the memstore can help maintain the data 
> in memory, and thereby speed up data retrieval. 
> We suggest a new compacted memstore with the following principles:
> 1. The data is kept in memory for as long as possible
> 2. Memstore data is either compacted or in process of being compacted 
> 3. Allow a panic mode, which may interrupt an in-progress compaction and 
> force a flush of part of the memstore.
> We suggest applying this optimization only to in-memory column families.
> A design document is attached.
> This feature was previously discussed in HBASE-5311.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13408) HBase In-Memory Memstore Compaction

2015-07-29 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13408?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646101#comment-14646101
 ] 

Duo Zhang commented on HBASE-13408:
---

Everything we are talking about here is 'in memory', so I do not think we need 
to modify the WAL...

I think all of this logic could be done in a special memstore implementation. 
For example, you could set the flush size to 128M and introduce a compact size 
of 32M that only considers the active set. When the active set reaches 32M, 
you put it into the pipeline and try to compact the segments in the pipeline 
to reduce memory usage. The upper layer does not care how many segments you 
have; it only cares about the total memstore size. When that reaches 128M a 
flush request comes in, and you flush all data to disk. If there are many 
redundant cells, the total memstore size may never reach 128M, which I think 
is exactly what we want here. This way you do not change the semantics of 
flush, and log truncation should also work as before.

I also think you could use more compact data structures instead of a skip 
list, since the segments in the pipeline are read-only. This may bring some 
benefit even if there are not many redundant cells.

What do you think, [~eshcar]? Sorry for being a bit late. Thanks.
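[Editor's note] The scheme described above (an active set bounded by a compact 
size, a pipeline of read-only segments merged in memory so redundant cells 
collapse, and a flush only when the total reaches the flush size) can be 
sketched as follows. This is a toy model with illustrative names and tiny 
thresholds, not HBase code:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;
import java.util.Map;
import java.util.TreeMap;

// Toy pipeline memstore: cell counts stand in for byte sizes (4 ~ "32M"
// compact size, 16 ~ "128M" flush size).
public class PipelineMemstoreSketch {
    static final int COMPACT_SIZE = 4;
    static final int FLUSH_SIZE = 16;

    Map<String, String> active = new TreeMap<>();
    Deque<Map<String, String>> pipeline = new ArrayDeque<>();

    void put(String row, String value) {
        active.put(row, value);
        if (active.size() >= COMPACT_SIZE) {
            pipeline.addFirst(active);   // active set becomes read-only
            active = new TreeMap<>();
            compactPipeline();
        }
    }

    // Merge all pipeline segments into one; duplicates per row collapse
    // to the newest value, which is what keeps the total size down.
    void compactPipeline() {
        Map<String, String> merged = new TreeMap<>();
        Iterator<Map<String, String>> oldestFirst = pipeline.descendingIterator();
        while (oldestFirst.hasNext()) {
            merged.putAll(oldestFirst.next()); // newer segments overwrite older
        }
        pipeline.clear();
        pipeline.addFirst(merged);
    }

    int totalSize() {
        int n = active.size();
        for (Map<String, String> s : pipeline) n += s.size();
        return n;
    }

    boolean needsFlush() { return totalSize() >= FLUSH_SIZE; }

    public static void main(String[] args) {
        PipelineMemstoreSketch m = new PipelineMemstoreSketch();
        // A high-churn workload: 100 updates over only 6 distinct rows.
        for (int i = 0; i < 100; i++) m.put("row" + (i % 6), "v" + i);
        System.out.println("total=" + m.totalSize() + " flush=" + m.needsFlush());
    }
}
```

With many redundant cells the compacted total never reaches the flush size, so 
no flush is triggered, matching the behavior described in the comment.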

> HBase In-Memory Memstore Compaction
> ---
>
> Key: HBASE-13408
> URL: https://issues.apache.org/jira/browse/HBASE-13408
> Project: HBase
>  Issue Type: New Feature
>Reporter: Eshcar Hillel
> Attachments: 
> HBaseIn-MemoryMemstoreCompactionDesignDocument-ver02.pdf, 
> HBaseIn-MemoryMemstoreCompactionDesignDocument.pdf, 
> InMemoryMemstoreCompactionEvaluationResults.pdf
>
>
> A store unit holds a column family in a region, where the memstore is its 
> in-memory component. The memstore absorbs all updates to the store; from time 
> to time these updates are flushed to a file on disk, where they are 
> compacted. Unlike disk components, the memstore is not compacted until it is 
> written to the filesystem and optionally to block-cache. This may result in 
> underutilization of the memory due to duplicate entries per row, for example, 
> when hot data is continuously updated. 
> Generally, the faster data accumulates in memory, the more flushes are 
> triggered and the more frequently the data sinks to disk, slowing down 
> retrieval of even very recent data.
> In high-churn workloads, compacting the memstore can help maintain the data 
> in memory, and thereby speed up data retrieval. 
> We suggest a new compacted memstore with the following principles:
> 1. The data is kept in memory for as long as possible
> 2. Memstore data is either compacted or in process of being compacted 
> 3. Allow a panic mode, which may interrupt an in-progress compaction and 
> force a flush of part of the memstore.
> We suggest applying this optimization only to in-memory column families.
> A design document is attached.
> This feature was previously discussed in HBASE-5311.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14165) The initial size of RWQueueRpcExecutor.queues should be (numWriteQueues + numReadQueues + numScanQueues)

2015-07-29 Thread cuijianwei (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

cuijianwei updated HBASE-14165:
---
Attachment: HBASE-14165-trunk.patch

> The initial size of RWQueueRpcExecutor.queues should be (numWriteQueues + 
> numReadQueues + numScanQueues) 
> -
>
> Key: HBASE-14165
> URL: https://issues.apache.org/jira/browse/HBASE-14165
> Project: HBase
>  Issue Type: Improvement
>  Components: rpc
>Affects Versions: 0.99.2
>Reporter: cuijianwei
>Priority: Minor
> Attachments: HBASE-14165-trunk.patch
>
>
> The RWQueueRpcExecutor.queues will be initialized as: 
> {code}
> queues = new ArrayList<BlockingQueue<CallRunner>>(writeHandlersCount + 
> readHandlersCount);
> {code}
> It seems this could be improved as:
> {code}
> queues = new ArrayList<BlockingQueue<CallRunner>>(numWriteQueues + 
> numReadQueues + numScanQueues);
> {code}
> Suggestions are welcome.
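[Editor's note] For context, the ArrayList constructor argument is only an 
initial capacity hint, so the current code is functionally correct but may 
trigger an internal array resize once the scan queues are appended. A hedged 
sketch of the proposed sizing (illustrative element type, not the actual 
RWQueueRpcExecutor):

```java
import java.util.ArrayList;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Size the backing array to the number of queues actually added
// (write + read + scan) so no resize occurs during construction.
public class QueueSizingSketch {
    public static ArrayList<BlockingQueue<Runnable>> buildQueues(
            int numWriteQueues, int numReadQueues, int numScanQueues) {
        int total = numWriteQueues + numReadQueues + numScanQueues;
        ArrayList<BlockingQueue<Runnable>> queues = new ArrayList<>(total);
        for (int i = 0; i < total; i++) {
            queues.add(new LinkedBlockingQueue<>());
        }
        return queues;
    }

    public static void main(String[] args) {
        System.out.println(buildQueues(2, 3, 1).size()); // prints 6
    }
}
```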



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-14165) The initial size of RWQueueRpcExecutor.queues should be (numWriteQueues + numReadQueues + numScanQueues)

2015-07-29 Thread cuijianwei (JIRA)
cuijianwei created HBASE-14165:
--

 Summary: The initial size of RWQueueRpcExecutor.queues should be 
(numWriteQueues + numReadQueues + numScanQueues) 
 Key: HBASE-14165
 URL: https://issues.apache.org/jira/browse/HBASE-14165
 Project: HBase
  Issue Type: Improvement
  Components: rpc
Affects Versions: 0.99.2
Reporter: cuijianwei
Priority: Minor


The RWQueueRpcExecutor.queues will be initialized as: 
{code}
queues = new ArrayList<BlockingQueue<CallRunner>>(writeHandlersCount + 
readHandlersCount);
{code}
It seems this could be improved as:
{code}
queues = new ArrayList<BlockingQueue<CallRunner>>(numWriteQueues + 
numReadQueues + numScanQueues);
{code}
Suggestions are welcome.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14163) hbase master stop loops both processes forever

2015-07-29 Thread Samir Ahmic (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645962#comment-14645962
 ] 

Samir Ahmic commented on HBASE-14163:
-

[~aw], I was unable to reproduce this issue on my laptop (Fedora 22), but 
looking at the log you provided, this line looks suspicious:
{code}
2015-07-28 13:16:12,603 INFO  
[10.248.3.81:53113.activeMasterManager-SendThread(localhost:2181)] 
zookeeper.ClientCnxn: Opening socket connection to server 
localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL 
(unknown error)
{code}

It looks like localhost is resolving to an IPv6 address. I'm not sure where we 
stand on IPv6 support in HBase.
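[Editor's note] A small diagnostic along these lines (a generic JDK sketch, 
not HBase code) lists every address localhost resolves to, which shows whether 
an IPv6 address such as ::1 comes first and is what the ZooKeeper client would 
try to connect to:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Print all addresses a hostname resolves to, in resolver order.
public class LocalhostCheck {
    public static String[] resolve(String host) throws UnknownHostException {
        InetAddress[] addrs = InetAddress.getAllByName(host);
        String[] out = new String[addrs.length];
        for (int i = 0; i < addrs.length; i++) {
            out[i] = addrs[i].getHostAddress();
        }
        return out;
    }

    public static void main(String[] args) throws UnknownHostException {
        // On a host preferring IPv6 this may print 0:0:0:0:0:0:0:1 first.
        for (String a : resolve("localhost")) {
            System.out.println(a);
        }
    }
}
```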

> hbase master stop loops both processes forever
> --
>
> Key: HBASE-14163
> URL: https://issues.apache.org/jira/browse/HBASE-14163
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Affects Versions: 2.0.0
>Reporter: Allen Wittenauer
>
> It would appear that there is an infinite loop in the zk client connection 
> code when performing a master stop when no external zk servers are configured.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14154) DFS Replication should be configurable at column family level

2015-07-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645703#comment-14645703
 ] 

Hadoop QA commented on HBASE-14154:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12747715/HBASE-14154-0.98.patch
  against 0.98 branch at commit 05de2ec5801fbba4577fb363f858a6e6f282c104.
  ATTACHMENT ID: 12747715

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 16 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
22 warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14924//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14924//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14924//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14924//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14924//console

This message is automatically generated.

> DFS Replication should be configurable at column family level
> -
>
> Key: HBASE-14154
> URL: https://issues.apache.org/jira/browse/HBASE-14154
> Project: HBase
>  Issue Type: New Feature
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Minor
> Fix For: 2.0.0, 0.98.14, 1.3.0
>
> Attachments: HBASE-14154-0.98.patch, HBASE-14154-branch-1.patch, 
> HBASE-14154.patch
>
>
> There are cases where a user wants control over the number of HFile copies 
> he/she can have in the cluster.
> For example, for a test table the user may want only one copy instead of the 
> default three.
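[Editor's note] A minimal sketch of how such a setting might look from the 
HBase shell, assuming the patch exposes a per-column-family DFS_REPLICATION 
attribute (the attribute name is inferred from the feature description and 
patch title, not confirmed here):

```
# Hypothetical: create a test table whose family 'f1' keeps a single
# HDFS replica instead of the cluster default of three.
create 'testtable', {NAME => 'f1', DFS_REPLICATION => 1}
```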



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14154) DFS Replication should be configurable at column family level

2015-07-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14645678#comment-14645678
 ] 

Hadoop QA commented on HBASE-14154:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12747713/HBASE-14154-branch-1.patch
  against branch-1 branch at commit 05de2ec5801fbba4577fb363f858a6e6f282c104.
  ATTACHMENT ID: 12747713

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 20 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn post-site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14923//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14923//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14923//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/14923//console

This message is automatically generated.

> DFS Replication should be configurable at column family level
> -
>
> Key: HBASE-14154
> URL: https://issues.apache.org/jira/browse/HBASE-14154
> Project: HBase
>  Issue Type: New Feature
>Reporter: Ashish Singhi
>Assignee: Ashish Singhi
>Priority: Minor
> Fix For: 2.0.0, 0.98.14, 1.3.0
>
> Attachments: HBASE-14154-0.98.patch, HBASE-14154-branch-1.patch, 
> HBASE-14154.patch
>
>
> There are cases where a user wants control over the number of HFile copies 
> he/she can have in the cluster.
> For example, for a test table the user may want only one copy instead of the 
> default three.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)