[ https://issues.apache.org/jira/browse/PHOENIX-4010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16159878#comment-16159878 ]

Hudson commented on PHOENIX-4010:
---------------------------------

SUCCESS: Integrated in Jenkins build Phoenix-master #1785 (See [https://builds.apache.org/job/Phoenix-master/1785/])
PHOENIX-4010 Hash Join cache may not be sent to all regionservers when we have stale HBase meta cache (ankitsinghal59: rev 7865a59b0bbc4ea5e8ab775b920f5f608115707b)
* (edit) phoenix-core/src/it/java/org/apache/phoenix/iterate/DelayedTableResultIteratorFactory.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/coprocessor/HashJoinRegionScanner.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/execute/ScanPlan.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/iterate/SerialIterators.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
* (add) phoenix-core/src/it/java/org/apache/phoenix/end2end/HashJoinCacheIT.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/query/QueryConstants.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/iterate/DefaultTableResultIteratorFactory.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/iterate/TableResultIteratorFactory.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseJoinIT.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/iterate/ParallelIterators.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/HashJoinIT.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/iterate/TableResultIterator.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/join/HashCacheClient.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/util/ByteUtil.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/execute/BaseQueryPlan.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/util/ServerUtil.java
* (edit) phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/execute/AggregatePlan.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/execute/HashJoinPlan.java
* (edit) phoenix-core/src/test/java/org/apache/phoenix/util/TestUtil.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/execute/LiteralResultIterationPlan.java
* (edit) phoenix-core/src/test/java/org/apache/phoenix/query/ParallelIteratorsSplitTest.java
* (add) phoenix-core/src/main/java/org/apache/phoenix/coprocessor/HashJoinCacheNotFoundException.java
* (edit) phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java


> Hash Join cache may not be sent to all regionservers when we have stale HBase meta cache
> ----------------------------------------------------------------------------------------
>
>                 Key: PHOENIX-4010
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-4010
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: Ankit Singhal
>            Assignee: Ankit Singhal
>             Fix For: 4.12.0
>
>         Attachments: PHOENIX-4010.patch, PHOENIX-4010_v1.patch, 
> PHOENIX-4010_v2.patch, PHOENIX-4010_v2_rebased_1.patch, 
> PHOENIX-4010_v2_rebased.patch
>
>
> If the region locations have changed but our HBase meta cache has not been updated,
> we might not send the hash join cache to all of the region servers hosting the table's regions.
> ConnectionQueryServicesImpl#getAllTableRegions
> {code}
> boolean reload = false;
> while (true) {
>     try {
>         // We could surface the package projected HConnectionImplementation.getNumberOfCachedRegionLocations
>         // to get the sizing info we need, but this would require a new class in the same package and a cast
>         // to this implementation class, so it's probably not worth it.
>         List<HRegionLocation> locations = Lists.newArrayList();
>         byte[] currentKey = HConstants.EMPTY_START_ROW;
>         do {
>             HRegionLocation regionLocation = connection.getRegionLocation(
>                     TableName.valueOf(tableName), currentKey, reload);
>             locations.add(regionLocation);
>             currentKey = regionLocation.getRegionInfo().getEndKey();
>         } while (!Bytes.equals(currentKey, HConstants.EMPTY_END_ROW));
>         return locations;
> {code}
> Skipping duplicate servers in ServerCacheClient#addServerCache
> {code}
> List<HRegionLocation> locations = services.getAllTableRegions(cacheUsingTable.getPhysicalName().getBytes());
> int nRegions = locations.size();
> .....
> if (!servers.contains(entry) &&
>         keyRanges.intersectRegion(regionStartKey, regionEndKey,
>                 cacheUsingTable.getIndexType() == IndexType.LOCAL)) {
>     // Call RPC once per server
>     servers.add(entry);
> {code}
> For example: table 'T' has two regions, R1 and R2, originally both hosted on regionserver RS1.
> While the Phoenix/HBase connection is still active, R2 is moved to RS2, but the stale meta cache
> still reports the old locations, i.e. both R1 and R2 on RS1. When we start copying the hash table,
> we copy it for R1 and skip R2 because they appear to be hosted on the same regionserver. As a
> result, the query on the table fails because it is unable to find the hash join cache on RS2 when
> processing region R2.
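To make the failure mode concrete, here is a minimal, self-contained sketch (not Phoenix code; all class and variable names are hypothetical) of how deduplicating the cache RPC by the *cached* server location skips the server that actually hosts a moved region:

{code}
// Hypothetical, self-contained sketch (not Phoenix code) of the failure mode:
// the hash join cache RPC is deduplicated per *cached* server location, so a
// region that has silently moved to another server never receives the cache.
import java.util.*;

public class StaleMetaCacheSketch {
    public static void main(String[] args) {
        // Stale client-side view: both regions still appear to live on RS1.
        Map<String, String> cachedRegionToServer = new LinkedHashMap<>();
        cachedRegionToServer.put("R1", "RS1");
        cachedRegionToServer.put("R2", "RS1"); // actually moved to RS2

        // Actual cluster state after R2 transitioned to RS2.
        Map<String, String> actualRegionToServer = new LinkedHashMap<>();
        actualRegionToServer.put("R1", "RS1");
        actualRegionToServer.put("R2", "RS2");

        // "Call RPC once per server", but driven by the stale view.
        Set<String> serversSentTo = new HashSet<>();
        for (String server : cachedRegionToServer.values()) {
            if (serversSentTo.add(server)) {
                System.out.println("Sending hash join cache to " + server);
            }
        }

        // The server that really hosts R2 never received the cache.
        for (Map.Entry<String, String> e : actualRegionToServer.entrySet()) {
            if (!serversSentTo.contains(e.getValue())) {
                System.out.println("Region " + e.getKey() + " on " + e.getValue()
                        + " has no hash join cache, so the join there fails");
            }
        }
    }
}
{code}

Note also that in the getAllTableRegions excerpt above the reload flag is false, so getRegionLocation answers from the client-side meta cache rather than re-reading hbase:meta; that cached view is where the stale locations come from.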



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
