[ https://issues.apache.org/jira/browse/PHOENIX-4692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16480275#comment-16480275 ]

Maryann Xue commented on PHOENIX-4692:
--------------------------------------

So I guess the "dynamic filter" kicked in here (as part of an optimization), and 
that's why the same scan within the same StatementContext got compiled more 
than once. I'll try fixing it from the hash-join side and see if I can make it 
work.

> ArrayIndexOutOfBoundsException in ScanRanges.intersectScan
> ----------------------------------------------------------
>
>                 Key: PHOENIX-4692
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-4692
>             Project: Phoenix
>          Issue Type: Bug
>    Affects Versions: 4.14.0
>            Reporter: Sergey Soldatov
>            Assignee: James Taylor
>            Priority: Major
>             Fix For: 4.14.0, 5.0.0
>
>         Attachments: PHOENIX-4692-IT.patch, PHOENIX-4692_v1.patch
>
>
> ScanRanges.intersectScan may fail with AIOOBE if a salted table is used.
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 1
>       at org.apache.phoenix.util.ScanUtil.getKey(ScanUtil.java:333)
>       at org.apache.phoenix.util.ScanUtil.getMinKey(ScanUtil.java:317)
>       at org.apache.phoenix.compile.ScanRanges.intersectScan(ScanRanges.java:371)
>       at org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:1074)
>       at org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:631)
>       at org.apache.phoenix.iterate.BaseResultIterators.<init>(BaseResultIterators.java:501)
>       at org.apache.phoenix.iterate.ParallelIterators.<init>(ParallelIterators.java:62)
>       at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:274)
>       at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:364)
>       at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:234)
>       at org.apache.phoenix.execute.DelegateQueryPlan.iterator(DelegateQueryPlan.java:144)
>       at org.apache.phoenix.execute.DelegateQueryPlan.iterator(DelegateQueryPlan.java:139)
>       at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:314)
>       at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:293)
>       at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>       at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:292)
>       at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:285)
>       at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:1798)
> {noformat}
> Script to reproduce:
> {noformat}
> CREATE TABLE TEST (PK1 INTEGER NOT NULL, PK2 INTEGER NOT NULL, ID1 INTEGER, ID2 INTEGER CONSTRAINT PK PRIMARY KEY(PK1, PK2)) SALT_BUCKETS = 4;
> upsert into test values (1,1,1,1);
> upsert into test values (2,2,2,2);
> upsert into test values (2,3,1,2);
> create view TEST_VIEW as select * from TEST where PK1 in (1,2);
> CREATE INDEX IDX_VIEW ON TEST_VIEW (ID1);
> select /*+ INDEX(TEST_VIEW IDX_VIEW) */ * from TEST_VIEW where ID1 = 1 ORDER BY ID2 LIMIT 500 OFFSET 0;
> {noformat}
> That happens because we have a point lookup optimization which reduces the 
> RowKeySchema to a single field, while we have more than one slot due to salting. 
> [~jamestaylor] can you please take a look? I'm not sure whether it should be 
> fixed at the ScanUtil level or whether we just should not use point lookup in 
> such cases.
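
To make the slot/field mismatch described above concrete, here is a minimal,
hypothetical Java sketch (plain arrays and class names of my own, not Phoenix's
actual RowKeySchema/ScanUtil code): the scan ranges of a salted table still
carry one slot per row-key part (salt byte plus PK), while the point-lookup
optimization has collapsed the schema to a single field, so walking the slots
past index 0 fails the same way the stack trace shows at ScanUtil.getKey.

{noformat}
import java.io.ByteArrayOutputStream;
import java.io.IOException;

// Hypothetical, simplified stand-in -- NOT Phoenix's real code.
public class SaltedPointLookupSketch {

    // Pairs each scan-range slot with a schema field to build a key,
    // roughly the shape of what ScanUtil.getKey does against a RowKeySchema.
    static byte[] getKey(byte[][] slotValues, String[] schemaFields) throws IOException {
        ByteArrayOutputStream key = new ByteArrayOutputStream();
        for (int slot = 0; slot < slotValues.length; slot++) {
            // The schema is expected to describe every slot; once the point
            // lookup has reduced it to one field, slot 1 is out of bounds.
            String field = schemaFields[slot];   // AIOOBE here when slot == 1
            System.out.println("slot " + slot + " -> " + field);
            key.write(slotValues[slot]);
        }
        return key.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Two slots, as for the salted table in the repro: [salt byte][PK1]
        byte[][] slots = { new byte[] { 0x01 }, new byte[] { 0, 0, 0, 1 } };

        getKey(slots, new String[] { "SALT", "PK1" });  // full schema: fine

        // Schema collapsed to a single field by the point lookup:
        // throws java.lang.ArrayIndexOutOfBoundsException: 1
        getKey(slots, new String[] { "PK1" });
    }
}
{noformat}

Whichever of the two directions from the description is taken (fixing it at the
ScanUtil level or not using point lookup for salted tables here), the salt-byte
slot has to stay accounted for when the key is rebuilt.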



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
