[ https://issues.apache.org/jira/browse/HBASE-14155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14646878#comment-14646878 ]
Hadoop QA commented on HBASE-14155:
-----------------------------------
{color:green}+1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12747847/14155-branch-1.txt
against branch-1 branch at commit 4ce6f486d063553a78ed1d60670e68564d61a483.
ATTACHMENT ID: 12747847
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+1 tests included{color}. The patch appears to include 5 new
or modified tests.
{color:green}+1 hadoop versions{color}. The patch compiles with all
supported hadoop versions (2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.0 2.7.0).
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:green}+1 protoc{color}. The applied patch does not increase the
total number of protoc compiler warnings.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any
warning messages.
{color:green}+1 checkstyle{color}. The applied patch does not increase the
total number of checkstyle errors.
{color:green}+1 findbugs{color}. The patch does not introduce any new
Findbugs (version 2.0.3) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:green}+1 lineLengths{color}. The patch does not introduce lines
longer than 100 characters.
{color:green}+1 site{color}. The mvn post-site goal succeeds with this patch.
{color:green}+1 core tests{color}. The patch passed unit tests in .
Test results:
https://builds.apache.org/job/PreCommit-HBASE-Build/14928//testReport/
Release Findbugs (version 2.0.3) warnings:
https://builds.apache.org/job/PreCommit-HBASE-Build/14928//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors:
https://builds.apache.org/job/PreCommit-HBASE-Build/14928//artifact/patchprocess/checkstyle-aggregate.html
Console output:
https://builds.apache.org/job/PreCommit-HBASE-Build/14928//console
This message is automatically generated.
> StackOverflowError in reverse scan
> ----------------------------------
>
> Key: HBASE-14155
> URL: https://issues.apache.org/jira/browse/HBASE-14155
> Project: HBase
> Issue Type: Bug
> Components: regionserver, Scanners
> Affects Versions: 1.1.0
> Reporter: James Taylor
> Assignee: ramkrishna.s.vasudevan
> Priority: Critical
> Labels: Phoenix
> Attachments: 14155-branch-1.txt, HBASE-14155.patch,
> ReproReverseScanStackOverflow.java,
> ReproReverseScanStackOverflowCoprocessor.java
>
>
> A stack overflow may occur when a reverse scan is done. To reproduce (on a
> Mac), use the following steps:
> - Download the Phoenix 4.5.0 RC here:
> https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.5.0-HBase-1.1-rc0/bin/
> - Copy the phoenix-4.5.0-HBase-1.1-server.jar into the HBase lib directory
> (removing any earlier Phoenix server jar if one is installed)
> - Stop and restart HBase
> - From the bin directory of the Phoenix binary distribution, start sqlline
> like this: ./sqlline.py localhost
> - Create a new table and populate it like this:
> {code}
> create table desctest (k varchar primary key desc);
> upsert into desctest values ('a');
> upsert into desctest values ('ab');
> upsert into desctest values ('b');
> {code}
> - Note that the following query works fine at this point:
> {code}
> select * from desctest order by k;
> +------------------------------------------+
> | K |
> +------------------------------------------+
> | a |
> | ab |
> | b |
> +------------------------------------------+
> {code}
> - Stop and restart HBase
> - Rerun the above query and you'll get a StackOverflowError at
> StoreFileScanner.seekToPreviousRow():
> {code}
> select * from desctest order by k;
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: DESCTEST,,1437847235264.a74d70e6a8b36e24d1ea1a70edb0cdf7.: null
>     at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
>     at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
>     at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$2.nextRaw(BaseScannerRegionObserver.java:352)
>     at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:77)
>     at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2393)
>     at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
>     at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2112)
>     at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>     at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
>     at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
>     at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.StackOverflowError
>     at org.apache.hadoop.hbase.io.hfile.ChecksumUtil.numChunks(ChecksumUtil.java:201)
>     at org.apache.hadoop.hbase.io.hfile.ChecksumUtil.numBytes(ChecksumUtil.java:189)
>     at org.apache.hadoop.hbase.io.hfile.HFileBlock.totalChecksumBytes(HFileBlock.java:1826)
>     at org.apache.hadoop.hbase.io.hfile.HFileBlock.getBufferReadOnly(HFileBlock.java:356)
>     at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getEncodedBuffer(HFileReaderV2.java:1211)
>     at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$EncodedScannerV2.getFirstKeyInBlock(HFileReaderV2.java:1307)
>     at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:657)
>     at org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.seekBefore(HFileReaderV2.java:646)
>     at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:425)
>     at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:449)
>     at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:449)
>     at org.apache.hadoop.hbase.regionserver.StoreFileScanner.seekToPreviousRow(StoreFileScanner.java:449)
> {code}
> I've attempted to reproduce this in a standalone HBase unit test but have not
> been able to; I'll attach my attempt, which mimics what Phoenix is doing.
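> As a rough sketch only (this is not the attached ReproReverseScanStackOverflow.java, and it assumes the DESCTEST table created in the steps above), the ORDER BY over a DESC primary key makes Phoenix issue a reversed HBase scan; an equivalent plain-HBase reversed scan is what drives StoreFileScanner.seekToPreviousRow() on the region server:
> {code}
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.ConnectionFactory;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.client.ResultScanner;
> import org.apache.hadoop.hbase.client.Scan;
> import org.apache.hadoop.hbase.client.Table;
> import org.apache.hadoop.hbase.util.Bytes;
>
> public class ReverseScanSketch {
>   public static void main(String[] args) throws Exception {
>     try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
>          Table table = conn.getTable(TableName.valueOf("DESCTEST"))) {
>       Scan scan = new Scan();
>       // A reversed scan walks rows backwards, which the region server services
>       // via StoreFileScanner.seekToPreviousRow().
>       scan.setReversed(true);
>       try (ResultScanner scanner = table.getScanner(scan)) {
>         for (Result r : scanner) {
>           System.out.println(Bytes.toStringBinary(r.getRow()));
>         }
>       }
>     }
>   }
> }
> {code}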
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)