[
https://issues.apache.org/jira/browse/PHOENIX-7253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17824567#comment-17824567
]
ASF GitHub Bot commented on PHOENIX-7253:
-----------------------------------------
virajjasani commented on code in PR #1848:
URL: https://github.com/apache/phoenix/pull/1848#discussion_r1516955054
##########
phoenix-core-client/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java:
##########
@@ -1005,6 +1005,43 @@ private ScansWithRegionLocations getParallelScans(byte[] startKey, byte[] stopKe
         int regionIndex = 0;
         int startRegionIndex = 0;
+
+        List<HRegionLocation> regionLocations;
+        if (isSalted && !isLocalIndex) {
+            // key prefix = salt num + view index id + tenant id
+            // If salting is used with tenant or view index id, scan start and end
+            // rowkeys will not be empty. We need to generate region locations for
+            // all the scan range such that we cover (each salt bucket num) + (prefix starting from
+            // index position 1 to cover view index and/or tenant id and/or remaining prefix).
+            if (scan.getStartRow().length > 0 && scan.getStopRow().length > 0) {
+                regionLocations = new ArrayList<>();
+                for (int i = 0; i < getTable().getBucketNum(); i++) {
+                    byte[] saltStartRegionKey = new byte[scan.getStartRow().length];
+                    saltStartRegionKey[0] = (byte) i;
+                    System.arraycopy(scan.getStartRow(), 1, saltStartRegionKey, 1,
+                            scan.getStartRow().length - 1);
+
+                    byte[] saltStopRegionKey = new byte[scan.getStopRow().length];
+                    saltStopRegionKey[0] = (byte) i;
+                    System.arraycopy(scan.getStopRow(), 1, saltStopRegionKey, 1,
+                            scan.getStopRow().length - 1);
+
+                    regionLocations.addAll(
+                            getRegionBoundaries(scanGrouper, saltStartRegionKey, saltStopRegionKey));
+                }
Review Comment:
`traverseAllRegions` is only used here, to apply the scan boundaries:
```
        } else if (!traverseAllRegions) {
            byte[] scanStartRow = scan.getStartRow();
            if (scanStartRow.length != 0 && Bytes.compareTo(scanStartRow, startKey) > 0) {
                startRegionBoundaryKey = startKey = scanStartRow;
            }
            byte[] scanStopRow = scan.getStopRow();
            if (stopKey.length == 0
                    || (scanStopRow.length != 0 && Bytes.compareTo(scanStopRow, stopKey) < 0)) {
                stopRegionBoundaryKey = stopKey = scanStopRow;
            }
        }
```
With this work, we will be able to restrict a range scan on a salted table to only the relevant regions, which matters because a single salt bucket can span many regions as the table grows. Moreover, for a salted table with a tenant id, the restriction becomes even narrower on a large table. Hence, this work covers the optimization for large salted tables.
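The per-bucket key rewrite in the patch above can be sketched as a standalone method. This is a minimal illustration, not Phoenix code: the class and method names are hypothetical, and the actual `getRegionBoundaries(...)` lookup is replaced by simply returning the raw key pairs.

```java
import java.util.ArrayList;
import java.util.List;

// Standalone sketch of the per-salt-bucket key rewrite: for each salt bucket,
// byte 0 of the scan start/stop row is overwritten with the bucket number while
// the rest of the prefix (view index id, tenant id, ...) is copied unchanged.
// Class and method names are illustrative only.
public class SaltBucketKeys {

    /** Returns one {start, stop} key pair per salt bucket. */
    public static List<byte[][]> saltedScanRanges(byte[] startRow, byte[] stopRow,
                                                  int bucketNum) {
        List<byte[][]> ranges = new ArrayList<>();
        for (int i = 0; i < bucketNum; i++) {
            byte[] saltStart = new byte[startRow.length];
            saltStart[0] = (byte) i; // overwrite the salt byte with this bucket
            System.arraycopy(startRow, 1, saltStart, 1, startRow.length - 1);

            byte[] saltStop = new byte[stopRow.length];
            saltStop[0] = (byte) i;
            System.arraycopy(stopRow, 1, saltStop, 1, stopRow.length - 1);

            ranges.add(new byte[][] { saltStart, saltStop });
        }
        return ranges;
    }
}
```

In the patch, each resulting pair is passed to `getRegionBoundaries(scanGrouper, ...)`, so only the regions intersecting those narrow per-bucket ranges are fetched from meta rather than all regions of the table.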
> Perf improvement for non-full scan queries on large table
> ---------------------------------------------------------
>
> Key: PHOENIX-7253
> URL: https://issues.apache.org/jira/browse/PHOENIX-7253
> Project: Phoenix
> Issue Type: Improvement
> Affects Versions: 5.2.0, 5.1.3
> Reporter: Viraj Jasani
> Assignee: Viraj Jasani
> Priority: Critical
> Fix For: 5.2.0, 5.1.4
>
>
> Any considerably large table (e.g. more than 100k regions) can exhibit
> problematic performance if we fetch all of the table's region locations from
> meta before generating parallel or sequential scans for a query. The perf
> impact can be especially severe for range scan queries.
> Consider a table with hundreds of thousands of tenant views. Unless the query
> is a strict point lookup, any query on any tenant view ends up retrieving the
> region locations of all regions of the base table. If the HBase client throws
> an IOException during a region location lookup in meta, we currently perform
> only a single retry.
> Proposal:
> # All non-point-lookup queries should retrieve only the region locations that
> cover the scan boundaries, rather than fetching all region locations of the
> base table.
> # Make the number of retries configurable, with a higher default value.
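Proposal item 2 amounts to wrapping the meta lookup in a retry loop whose attempt count comes from configuration instead of being hardcoded to a single retry. The sketch below is hypothetical: the interface and method names are illustrative, not actual Phoenix APIs.

```java
import java.io.IOException;

// Illustrative retry wrapper: retries a failing lookup up to a configurable
// number of times instead of the previous single retry. Names are hypothetical.
public class RetryingLookup {

    @FunctionalInterface
    public interface Lookup<T> {
        T get() throws IOException;
    }

    /** Runs the lookup, retrying up to maxRetries additional times on IOException. */
    public static <T> T withRetries(Lookup<T> lookup, int maxRetries) throws IOException {
        IOException last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return lookup.get();
            } catch (IOException e) {
                last = e; // remember the failure and try again
            }
        }
        throw last; // all attempts exhausted; surface the last failure
    }
}
```

A higher default for `maxRetries` makes transient meta-lookup failures (such as the InterruptedIOException in the stack trace below) far less likely to fail the whole query.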
>
> Sample stacktrace from the multiple failures observed:
> {code:java}
> java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table regions.Stack trace: java.sql.SQLException: ERROR 1102 (XCL02): Cannot get all table regions.
>     at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:620)
>     at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:229)
>     at org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:781)
>     at org.apache.phoenix.query.DelegateConnectionQueryServices.getAllTableRegions(DelegateConnectionQueryServices.java:87)
>     at org.apache.phoenix.query.DelegateConnectionQueryServices.getAllTableRegions(DelegateConnectionQueryServices.java:87)
>     at org.apache.phoenix.iterate.DefaultParallelScanGrouper.getRegionBoundaries(DefaultParallelScanGrouper.java:74)
>     at org.apache.phoenix.iterate.BaseResultIterators.getRegionBoundaries(BaseResultIterators.java:587)
>     at org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:936)
>     at org.apache.phoenix.iterate.BaseResultIterators.getParallelScans(BaseResultIterators.java:669)
>     at org.apache.phoenix.iterate.BaseResultIterators.<init>(BaseResultIterators.java:555)
>     at org.apache.phoenix.iterate.SerialIterators.<init>(SerialIterators.java:69)
>     at org.apache.phoenix.execute.ScanPlan.newIterator(ScanPlan.java:278)
>     at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:374)
>     at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:222)
>     at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:217)
>     at org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:212)
>     at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:370)
>     at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:328)
>     at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>     at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:328)
>     at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:320)
>     at org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:188)
>     ...
>     ...
> Caused by: java.io.InterruptedIOException: Origin: InterruptedException
>     at org.apache.hadoop.hbase.util.ExceptionUtil.asInterrupt(ExceptionUtil.java:72)
>     at org.apache.hadoop.hbase.client.ConnectionImplementation.takeUserRegionLock(ConnectionImplementation.java:1129)
>     at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:994)
>     at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:895)
>     at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:881)
>     at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:851)
>     at org.apache.hadoop.hbase.client.ConnectionImplementation.getRegionLocation(ConnectionImplementation.java:730)
>     at org.apache.phoenix.query.ConnectionQueryServicesImpl.getAllTableRegions(ConnectionQueryServicesImpl.java:766)
>     ... 254 more
> Caused by: java.lang.InterruptedException
>     at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireNanos(AbstractQueuedSynchronizer.java:982)
>     at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireNanos(AbstractQueuedSynchronizer.java:1288)
>     at java.base/java.util.concurrent.locks.ReentrantLock.tryLock(ReentrantLock.java:424)
>     at org.apache.hadoop.hbase.client.ConnectionImplementation.takeUserRegionLock(ConnectionImplementation.java:1117)
>     ... 264 more {code}
--
This message was sent by Atlassian Jira
(v8.20.10#820010)