[GitHub] phoenix pull request #298: PHOENIX-4666 Persistent subquery cache for hash j...

2018-06-14 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/298#discussion_r195421295
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java ---
@@ -1305,14 +1305,25 @@ public ScanWrapper(Scan scan) {
 Scan oldScan = scanPair.getFirst();
 byte[] startKey = oldScan.getAttribute(SCAN_ACTUAL_START_ROW);
 if(e2 instanceof HashJoinCacheNotFoundException){
+System.out.println("Handling HashJoinCacheNotFoundException");
--- End diff --

Minor nit: change this to a log message or remove the System.out.println calls.
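
For illustration, a minimal sketch of the first option, replacing the temporary println with a debug-level log call; the slf4j logger field shown here is an assumption for the sketch, not code that already exists in the class:

    // Hypothetical sketch only: swap the temporary println for a debug-level log call.
    private static final org.slf4j.Logger LOG =
            org.slf4j.LoggerFactory.getLogger(BaseResultIterators.class);

    // ...inside the retry handler quoted above:
    if (e2 instanceof HashJoinCacheNotFoundException) {
        LOG.debug("Handling HashJoinCacheNotFoundException");
        // existing recovery logic continues unchanged
    }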


---


[GitHub] phoenix pull request #298: PHOENIX-4666 Persistent subquery cache for hash j...

2018-05-16 Thread ortutay
Github user ortutay commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/298#discussion_r188783687
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/cache/TenantCacheImpl.java ---
@@ -77,57 +153,164 @@ public MemoryManager getMemoryManager() {
 return memoryManager;
 }
 
-private Cache getServerCaches() {
+private Cache getServerCaches() {
 /* Delay creation of this map until it's needed */
 if (serverCaches == null) {
 synchronized(this) {
 if (serverCaches == null) {
-serverCaches = CacheBuilder.newBuilder()
-.expireAfterAccess(maxTimeToLiveMs, TimeUnit.MILLISECONDS)
-.ticker(getTicker())
-.removalListener(new RemovalListener(){
-@Override
-public void onRemoval(RemovalNotification notification) {
-Closeables.closeAllQuietly(Collections.singletonList(notification.getValue()));
-}
-})
-.build();
+serverCaches = buildCache(maxTimeToLiveMs, false);
 }
 }
 }
 return serverCaches;
 }
+
+private Cache getPersistentServerCaches() {
+/* Delay creation of this map until it's needed */
+if (persistentServerCaches == null) {
+synchronized(this) {
+if (persistentServerCaches == null) {
+persistentServerCaches = buildCache(maxPersistenceTimeToLiveMs, true);
+}
+}
+}
+return persistentServerCaches;
+}
+
+private Cache buildCache(final int ttl, final boolean isPersistent) {
+return CacheBuilder.newBuilder()
+.expireAfterAccess(ttl, TimeUnit.MILLISECONDS)
+.ticker(getTicker())
+.removalListener(new RemovalListener(){
+@Override
+public void onRemoval(RemovalNotification notification) {
+if (isPersistent || !notification.getValue().getUsePersistentCache()) {
+Closeables.closeAllQuietly(Collections.singletonList(notification.getValue()));
--- End diff --

Fixed it here. FWIW some files mix tabs / spaces


---


[GitHub] phoenix pull request #298: PHOENIX-4666 Persistent subquery cache for hash j...

2018-05-16 Thread ortutay
Github user ortutay commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/298#discussion_r188783472
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/compile/QueryCompiler.java ---
@@ -383,7 +384,7 @@ protected QueryPlan compileJoinQuery(JoinCompiler.Strategy strategy, StatementCo
 new PTable[]{lhsTable}, new int[]{fieldPosition}, postJoinFilterExpression, QueryUtil.getOffsetLimit(limit, offset));
 Pair keyRangeExpressions = new Pair(null, null);
 getKeyExpressionCombinations(keyRangeExpressions, context, joinTable.getStatement(), rhsTableRef, type, joinExpressions, hashExpressions);
-return HashJoinPlan.create(joinTable.getStatement(), rhsPlan, joinInfo, new HashSubPlan[]{new HashSubPlan(0, lhsPlan, hashExpressions, false, keyRangeExpressions.getFirst(), keyRangeExpressions.getSecond())});
+return HashJoinPlan.create(joinTable.getStatement(), rhsPlan, joinInfo, new HashSubPlan[]{new HashSubPlan(0, lhsPlan, hashExpressions, false, false, keyRangeExpressions.getFirst(), keyRangeExpressions.getSecond())});
--- End diff --

@maryannxue I've changed the code here to use the `usePersistentCache` 
flag, but I'm not sure how to force a `HASH_BUILD_LEFT` strategy. The test I've 
put in the integration test file does not actually use that strategy. Do you 
know of an easy way to force a left table hash join?


---


[GitHub] phoenix pull request #298: PHOENIX-4666 Persistent subquery cache for hash j...

2018-05-16 Thread ortutay
Github user ortutay commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/298#discussion_r188778970
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/memory/GlobalMemoryManager.java ---
@@ -159,6 +161,10 @@ protected void finalize() throws Throwable {
 }
 
 private void freeMemory() {
+// System.out.println("Free memory! " + size);
--- End diff --

Done


---


[GitHub] phoenix pull request #298: PHOENIX-4666 Persistent subquery cache for hash j...

2018-05-16 Thread ortutay
Github user ortutay commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/298#discussion_r188778824
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/compile/QueryCompiler.java ---
@@ -314,7 +314,8 @@ protected QueryPlan compileJoinQuery(JoinCompiler.Strategy strategy, StatementCo
 if (i < count - 1) {
 fieldPositions[i + 1] = fieldPositions[i] + (tables[i] == null ? 0 : (tables[i].getColumns().size() - tables[i].getPKColumns().size()));
 }
-hashPlans[i] = new HashSubPlan(i, subPlans[i], optimized ? null : hashExpressions, joinSpec.isSingleValueOnly(), keyRangeLhsExpression, keyRangeRhsExpression);
+boolean usePersistentCache = joinTable.getStatement().getHint().hasHint(Hint.USE_PERSISTENT_CACHE);
--- End diff --

Done


---


[GitHub] phoenix pull request #298: PHOENIX-4666 Persistent subquery cache for hash j...

2018-05-16 Thread ortutay
Github user ortutay commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/298#discussion_r188777276
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/cache/TenantCacheImpl.java ---
@@ -77,57 +153,164 @@ public MemoryManager getMemoryManager() {
 return memoryManager;
 }
 
-private Cache getServerCaches() {
+private Cache getServerCaches() {
 /* Delay creation of this map until it's needed */
 if (serverCaches == null) {
 synchronized(this) {
 if (serverCaches == null) {
-serverCaches = CacheBuilder.newBuilder()
-.expireAfterAccess(maxTimeToLiveMs, TimeUnit.MILLISECONDS)
-.ticker(getTicker())
-.removalListener(new RemovalListener(){
-@Override
-public void onRemoval(RemovalNotification notification) {
-Closeables.closeAllQuietly(Collections.singletonList(notification.getValue()));
-}
-})
-.build();
+serverCaches = buildCache(maxTimeToLiveMs, false);
 }
 }
 }
 return serverCaches;
 }
+
+private Cache getPersistentServerCaches() {
+/* Delay creation of this map until it's needed */
+if (persistentServerCaches == null) {
+synchronized(this) {
+if (persistentServerCaches == null) {
+persistentServerCaches = buildCache(maxPersistenceTimeToLiveMs, true);
+}
+}
+}
+return persistentServerCaches;
+}
+
+private Cache buildCache(final int ttl, final boolean isPersistent) {
+return CacheBuilder.newBuilder()
+.expireAfterAccess(ttl, TimeUnit.MILLISECONDS)
+.ticker(getTicker())
+.removalListener(new RemovalListener(){
+@Override
+public void onRemoval(RemovalNotification notification) {
+if (isPersistent || !notification.getValue().getUsePersistentCache()) {
+Closeables.closeAllQuietly(Collections.singletonList(notification.getValue()));
+}
+}
+})
+.build();
+}
 
-@Override
+private void evictInactiveEntries(long bytesNeeded) {
+CacheEntry[] entries = getPersistentServerCaches().asMap().values().toArray(new CacheEntry[]{});
+Arrays.sort(entries);
+long available = this.getMemoryManager().getAvailableMemory();
+for (int i = 0; i < entries.length && available < bytesNeeded; i++) {
+CacheEntry entry = entries[i];
+if (!entry.isLive()) {
+getServerCaches().invalidate(entry.getCacheId());
+getPersistentServerCaches().invalidate(entry.getCacheId());
+available = this.getMemoryManager().getAvailableMemory();
+}
+}
+}
+
+private CacheEntry maybeGet(ImmutableBytesPtr cacheId) {
+maybePromote(cacheId);
+CacheEntry entry = getServerCaches().getIfPresent(cacheId);
+return entry;
+}
+
+private void maybePromote(ImmutableBytesPtr cacheId) {
+CacheEntry entry = getPersistentServerCaches().getIfPresent(cacheId);
+if (entry == null) {
+return;
+}
+getServerCaches().put(cacheId, entry);
+}
+
+private void maybeDemote(ImmutableBytesPtr cacheId) {
+CacheEntry entry = getServerCaches().getIfPresent(cacheId);
+if (entry == null) {
+return;
+}
+entry.decrementLiveQueryCount();
+if (!entry.isLive()) {
+getServerCaches().invalidate(cacheId);
+}
+}
+
+public void debugPrintCaches() {
+System.out.println("Live cache:" + getServerCaches());
+for (ImmutableBytesPtr key : getServerCaches().asMap().keySet()) {
+System.out.println("- " + Hex.encodeHexString(key.get()) +
+" -> " + getServerCaches().getIfPresent(key).size +
+" lq:" +

[GitHub] phoenix pull request #298: PHOENIX-4666 Persistent subquery cache for hash j...

2018-05-16 Thread ortutay
Github user ortutay commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/298#discussion_r188777161
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/cache/TenantCacheImpl.java ---
@@ -77,57 +153,164 @@ public MemoryManager getMemoryManager() {
 return memoryManager;
 }
 
-private Cache getServerCaches() {
+private Cache getServerCaches() {
 /* Delay creation of this map until it's needed */
 if (serverCaches == null) {
 synchronized(this) {
 if (serverCaches == null) {
-serverCaches = CacheBuilder.newBuilder()
-.expireAfterAccess(maxTimeToLiveMs, TimeUnit.MILLISECONDS)
-.ticker(getTicker())
-.removalListener(new RemovalListener(){
-@Override
-public void onRemoval(RemovalNotification notification) {
-Closeables.closeAllQuietly(Collections.singletonList(notification.getValue()));
-}
-})
-.build();
+serverCaches = buildCache(maxTimeToLiveMs, false);
 }
 }
 }
 return serverCaches;
 }
+
+private Cache getPersistentServerCaches() {
+/* Delay creation of this map until it's needed */
+if (persistentServerCaches == null) {
+synchronized(this) {
+if (persistentServerCaches == null) {
+persistentServerCaches = buildCache(maxPersistenceTimeToLiveMs, true);
+}
+}
+}
+return persistentServerCaches;
+}
+
+private Cache buildCache(final int ttl, final boolean isPersistent) {
+return CacheBuilder.newBuilder()
+.expireAfterAccess(ttl, TimeUnit.MILLISECONDS)
+.ticker(getTicker())
+.removalListener(new RemovalListener(){
+@Override
+public void onRemoval(RemovalNotification notification) {
+if (isPersistent || !notification.getValue().getUsePersistentCache()) {
+Closeables.closeAllQuietly(Collections.singletonList(notification.getValue()));
+}
+}
+})
+.build();
+}
 
-@Override
+private void evictInactiveEntries(long bytesNeeded) {
+CacheEntry[] entries = getPersistentServerCaches().asMap().values().toArray(new CacheEntry[]{});
+Arrays.sort(entries);
+long available = this.getMemoryManager().getAvailableMemory();
+for (int i = 0; i < entries.length && available < bytesNeeded; i++) {
+CacheEntry entry = entries[i];
+if (!entry.isLive()) {
+getServerCaches().invalidate(entry.getCacheId());
+getPersistentServerCaches().invalidate(entry.getCacheId());
+available = this.getMemoryManager().getAvailableMemory();
+}
+}
+}
+
+private CacheEntry maybeGet(ImmutableBytesPtr cacheId) {
--- End diff --

Done, also removed `maybeDemote`


---


[GitHub] phoenix pull request #298: PHOENIX-4666 Persistent subquery cache for hash j...

2018-05-15 Thread ortutay
Github user ortutay commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/298#discussion_r188461941
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java ---
@@ -216,22 +234,146 @@ public void close() throws SQLException {
 }
 }
 }
-
+}
+
+public ServerCache checkServerCache(final byte[] cacheId, ScanRanges keyRanges, final TableRef cacheUsingTableRef,
--- End diff --

Yeah, that makes sense. For this, I'm thinking of modifying the retry logic in BaseResultIterators to disable persistent caching on retries. I'm thinking I can add a getter/setter pair to the "context" that can be used for this.
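
For illustration, a minimal sketch of the kind of getter/setter pair meant here; the flag name and its placement on the context are hypothetical, not the actual Phoenix API:

    // Hypothetical sketch: a flag the retry path in BaseResultIterators could set
    // so that persistent caching is skipped on retries.
    public class StatementContextSketch {
        private boolean retryingPersistentCache = false;

        public boolean isRetryingPersistentCache() {
            return retryingPersistentCache;
        }

        public void setRetryingPersistentCache(boolean retryingPersistentCache) {
            this.retryingPersistentCache = retryingPersistentCache;
        }
    }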


---


[GitHub] phoenix pull request #298: PHOENIX-4666 Persistent subquery cache for hash j...

2018-05-10 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/298#discussion_r187393698
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java ---
@@ -216,22 +234,146 @@ public void close() throws SQLException {
 }
 }
 }
-
+}
+
+public ServerCache checkServerCache(final byte[] cacheId, ScanRanges keyRanges, final TableRef cacheUsingTableRef,
--- End diff --

FYI, we already have a mechanism for this. If the server throws a 
HashJoinCacheNotFoundException, then the client will react by sending the hash 
cache to the region servers that don't have it.


---


[GitHub] phoenix pull request #298: PHOENIX-4666 Persistent subquery cache for hash j...

2018-04-20 Thread maryannxue
Github user maryannxue commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/298#discussion_r183133375
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/compile/QueryCompiler.java ---
@@ -314,7 +314,8 @@ protected QueryPlan compileJoinQuery(JoinCompiler.Strategy strategy, StatementCo
 if (i < count - 1) {
 fieldPositions[i + 1] = fieldPositions[i] + (tables[i] == null ? 0 : (tables[i].getColumns().size() - tables[i].getPKColumns().size()));
 }
-hashPlans[i] = new HashSubPlan(i, subPlans[i], optimized ? null : hashExpressions, joinSpec.isSingleValueOnly(), keyRangeLhsExpression, keyRangeRhsExpression);
+boolean usePersistentCache = joinTable.getStatement().getHint().hasHint(Hint.USE_PERSISTENT_CACHE);
--- End diff --

We can make "usePersistentCache" a member of QueryCompiler and initialize 
it in the beginning just like "noChildParentJoinOptimization".
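
For illustration, a simplified, self-contained sketch of that suggestion; the real QueryCompiler constructor and HintNode types differ, so treat the names here as placeholders:

    // Hypothetical sketch: resolve the hint once in the constructor and reuse the
    // flag wherever HashSubPlans are built, mirroring noChildParentJoinOptimization.
    class QueryCompilerSketch {
        enum Hint { NO_CHILD_PARENT_JOIN_OPTIMIZATION, USE_PERSISTENT_CACHE }
        interface HintNode { boolean hasHint(Hint hint); }

        private final boolean noChildParentJoinOptimization;
        private final boolean usePersistentCache;

        QueryCompilerSketch(HintNode hint) {
            this.noChildParentJoinOptimization = hint.hasHint(Hint.NO_CHILD_PARENT_JOIN_OPTIMIZATION);
            this.usePersistentCache = hint.hasHint(Hint.USE_PERSISTENT_CACHE);
        }
    }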


---


[GitHub] phoenix pull request #298: PHOENIX-4666 Persistent subquery cache for hash j...

2018-04-20 Thread maryannxue
Github user maryannxue commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/298#discussion_r183127442
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java ---
@@ -216,22 +234,146 @@ public void close() throws SQLException {
 }
 }
 }
-
+}
+
+public ServerCache checkServerCache(final byte[] cacheId, ScanRanges keyRanges, final TableRef cacheUsingTableRef,
--- End diff --

I am wondering whether we can do this differently here.
Instead of making a "checkServerCache" RPC call for the first and every subsequent query, we make no calls at all, neither "check" nor "add", when the persistent-cache hint is present; the client instead catches the {{PersistentCacheNotFoundException}} on the first attempt (or on a later attempt, if the cache has somehow been evicted) and then adds the cache all over again. I think that will be more efficient in general.
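
As an illustration of the proposed flow (control flow only; the helper names and the exception name follow this comment and are not necessarily what the PR ends up using):

    // Hypothetical sketch of the "no check up front, catch and re-add" flow.
    ResultIterator runWithPersistentCache(HashJoinPlan plan) throws SQLException {
        try {
            // Optimistic path: assume the persistent cache is already on the region
            // servers, so no checkServerCache/addServerCache RPCs are issued up front.
            return runJoinQuery(plan);
        } catch (PersistentCacheNotFoundException e) {
            // First execution, or the cache was evicted: build and send the hash
            // cache now, then re-run the query once.
            buildAndSendHashCache(plan);
            return runJoinQuery(plan);
        }
    }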


---


[GitHub] phoenix pull request #298: PHOENIX-4666 Persistent subquery cache for hash j...

2018-04-20 Thread maryannxue
Github user maryannxue commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/298#discussion_r183130270
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/cache/TenantCacheImpl.java ---
@@ -77,57 +153,164 @@ public MemoryManager getMemoryManager() {
 return memoryManager;
 }
 
-private Cache getServerCaches() {
+private Cache getServerCaches() {
 /* Delay creation of this map until it's needed */
 if (serverCaches == null) {
 synchronized(this) {
 if (serverCaches == null) {
-serverCaches = CacheBuilder.newBuilder()
-.expireAfterAccess(maxTimeToLiveMs, 
TimeUnit.MILLISECONDS)
-.ticker(getTicker())
-.removalListener(new 
RemovalListener(){
-@Override
-public void 
onRemoval(RemovalNotification notification) {
-
Closeables.closeAllQuietly(Collections.singletonList(notification.getValue()));
-}
-})
-.build();
+serverCaches = buildCache(maxTimeToLiveMs, false);
 }
 }
 }
 return serverCaches;
 }
+
+private Cache 
getPersistentServerCaches() {
+/* Delay creation of this map until it's needed */
+if (persistentServerCaches == null) {
+synchronized(this) {
+if (persistentServerCaches == null) {
+persistentServerCaches = 
buildCache(maxPersistenceTimeToLiveMs, true);
+}
+}
+}
+return persistentServerCaches;
+}
+
+private Cache buildCache(final int ttl, 
final boolean isPersistent) {
+return CacheBuilder.newBuilder()
+.expireAfterAccess(ttl, TimeUnit.MILLISECONDS)
+.ticker(getTicker())
+.removalListener(new RemovalListener(){
+@Override
+public void 
onRemoval(RemovalNotification notification) {
+   if (isPersistent || 
!notification.getValue().getUsePersistentCache()) {
+
Closeables.closeAllQuietly(Collections.singletonList(notification.getValue()));
+   }
+}
+})
+.build();
+}
 
-@Override
+private void evictInactiveEntries(long bytesNeeded) {
+CacheEntry[] entries = 
getPersistentServerCaches().asMap().values().toArray(new CacheEntry[]{});
+Arrays.sort(entries);
+long available = this.getMemoryManager().getAvailableMemory();
+for (int i = 0; i < entries.length && available < bytesNeeded; 
i++) {
+CacheEntry entry = entries[i];
+if (!entry.isLive()) {
+   getServerCaches().invalidate(entry.getCacheId());
+   
getPersistentServerCaches().invalidate(entry.getCacheId());
+available = this.getMemoryManager().getAvailableMemory();
+}
+}
+}
+
+private CacheEntry maybeGet(ImmutableBytesPtr cacheId) {
+maybePromote(cacheId);
+CacheEntry entry = getServerCaches().getIfPresent(cacheId);
+return entry;
+}
+
+private void maybePromote(ImmutableBytesPtr cacheId) {
+CacheEntry entry = 
getPersistentServerCaches().getIfPresent(cacheId);
+if (entry == null) {
+return;
+}
+getServerCaches().put(cacheId, entry);
+}
+
+private void maybeDemote(ImmutableBytesPtr cacheId) {
+CacheEntry entry = getServerCaches().getIfPresent(cacheId);
+if (entry == null) {
+return;
+}
+entry.decrementLiveQueryCount();
+if (!entry.isLive()) {
+getServerCaches().invalidate(cacheId);
+}
+}
+
+public void debugPrintCaches() {
+   System.out.println("Live cache:" + getServerCaches());
+   for (ImmutableBytesPtr key : 
getServerCaches().asMap().keySet()) {
+   System.out.println("- " + 
Hex.encodeHexString(key.get()) +
+   " -> " + 
getServerCaches().getIfPresent(key).size +
+   " lq:" + 

[GitHub] phoenix pull request #298: PHOENIX-4666 Persistent subquery cache for hash j...

2018-04-20 Thread maryannxue
Github user maryannxue commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/298#discussion_r183133702
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/compile/QueryCompiler.java ---
@@ -383,7 +384,7 @@ protected QueryPlan compileJoinQuery(JoinCompiler.Strategy strategy, StatementCo
 new PTable[]{lhsTable}, new int[]{fieldPosition}, postJoinFilterExpression, QueryUtil.getOffsetLimit(limit, offset));
 Pair keyRangeExpressions = new Pair(null, null);
 getKeyExpressionCombinations(keyRangeExpressions, context, joinTable.getStatement(), rhsTableRef, type, joinExpressions, hashExpressions);
-return HashJoinPlan.create(joinTable.getStatement(), rhsPlan, joinInfo, new HashSubPlan[]{new HashSubPlan(0, lhsPlan, hashExpressions, false, keyRangeExpressions.getFirst(), keyRangeExpressions.getSecond())});
+return HashJoinPlan.create(joinTable.getStatement(), rhsPlan, joinInfo, new HashSubPlan[]{new HashSubPlan(0, lhsPlan, hashExpressions, false, false, keyRangeExpressions.getFirst(), keyRangeExpressions.getSecond())});
--- End diff --

This is also a hash join, but with the left table as the build side. I think we should be able to use the persistent cache here as well. And could you also add another test to cover this point?


---


[GitHub] phoenix pull request #298: PHOENIX-4666 Persistent subquery cache for hash j...

2018-04-20 Thread maryannxue
Github user maryannxue commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/298#discussion_r183136068
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/memory/GlobalMemoryManager.java ---
@@ -159,6 +161,10 @@ protected void finalize() throws Throwable {
 }
 
 private void freeMemory() {
+// System.out.println("Free memory! " + size);
--- End diff --

I think this is a mistake, missed in your code clean-up.


---


[GitHub] phoenix pull request #298: PHOENIX-4666 Persistent subquery cache for hash j...

2018-04-20 Thread maryannxue
Github user maryannxue commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/298#discussion_r183128249
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/cache/TenantCacheImpl.java ---
@@ -77,57 +153,164 @@ public MemoryManager getMemoryManager() {
 return memoryManager;
 }
 
-private Cache getServerCaches() {
+private Cache getServerCaches() {
 /* Delay creation of this map until it's needed */
 if (serverCaches == null) {
 synchronized(this) {
 if (serverCaches == null) {
-serverCaches = CacheBuilder.newBuilder()
-.expireAfterAccess(maxTimeToLiveMs, TimeUnit.MILLISECONDS)
-.ticker(getTicker())
-.removalListener(new RemovalListener(){
-@Override
-public void onRemoval(RemovalNotification notification) {
-Closeables.closeAllQuietly(Collections.singletonList(notification.getValue()));
-}
-})
-.build();
+serverCaches = buildCache(maxTimeToLiveMs, false);
 }
 }
 }
 return serverCaches;
 }
+
+private Cache getPersistentServerCaches() {
+/* Delay creation of this map until it's needed */
+if (persistentServerCaches == null) {
+synchronized(this) {
+if (persistentServerCaches == null) {
+persistentServerCaches = buildCache(maxPersistenceTimeToLiveMs, true);
+}
+}
+}
+return persistentServerCaches;
+}
+
+private Cache buildCache(final int ttl, final boolean isPersistent) {
+return CacheBuilder.newBuilder()
+.expireAfterAccess(ttl, TimeUnit.MILLISECONDS)
+.ticker(getTicker())
+.removalListener(new RemovalListener(){
+@Override
+public void onRemoval(RemovalNotification notification) {
+if (isPersistent || !notification.getValue().getUsePersistentCache()) {
+Closeables.closeAllQuietly(Collections.singletonList(notification.getValue()));
--- End diff --

Looks like the indentation is not right here. Can you perform a check 
through the entire PR?


---


[GitHub] phoenix pull request #298: PHOENIX-4666 Persistent subquery cache for hash j...

2018-04-20 Thread maryannxue
Github user maryannxue commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/298#discussion_r183135634
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/execute/HashJoinPlan.java ---
@@ -488,11 +502,20 @@ public ServerCache execute(HashJoinPlan parent) throws SQLException {
 if (hashExpressions != null) {
 ResultIterator iterator = plan.iterator();
 try {
-cache =
-parent.hashClient.addHashCache(ranges, iterator,
+final byte[] cacheId;
+if (usePersistentCache) {
--- End diff --

Like I said in an earlier comment, if we're using the persistent cache we don't need to call "check" or "add" cache here; instead we catch a specific exception and call addServerCache at that point. Still, we might want to return "null" if this is a persistent cache, so that later on the cache's "close" method won't be called by the HashJoinPlan clean-up routine.
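
A rough sketch of the second point, i.e. returning null for the persistent-cache case so that HashJoinPlan's clean-up does not close a cache that is meant to outlive the query (the surrounding method shape is simplified and hypothetical):

    // Hypothetical sketch: at the end of the sub-plan's execute() method.
    if (usePersistentCache) {
        // Persistent caches are evicted by TTL / memory pressure on the server side,
        // not by the per-query clean-up, so don't hand back a closeable handle.
        return null;
    }
    return cache;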


---


[GitHub] phoenix pull request #298: PHOENIX-4666 Persistent subquery cache for hash j...

2018-04-20 Thread maryannxue
Github user maryannxue commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/298#discussion_r183128592
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/cache/TenantCacheImpl.java ---
@@ -77,57 +153,164 @@ public MemoryManager getMemoryManager() {
 return memoryManager;
 }
 
-private Cache getServerCaches() {
+private Cache getServerCaches() {
 /* Delay creation of this map until it's needed */
 if (serverCaches == null) {
 synchronized(this) {
 if (serverCaches == null) {
-serverCaches = CacheBuilder.newBuilder()
-.expireAfterAccess(maxTimeToLiveMs, TimeUnit.MILLISECONDS)
-.ticker(getTicker())
-.removalListener(new RemovalListener(){
-@Override
-public void onRemoval(RemovalNotification notification) {
-Closeables.closeAllQuietly(Collections.singletonList(notification.getValue()));
-}
-})
-.build();
+serverCaches = buildCache(maxTimeToLiveMs, false);
 }
 }
 }
 return serverCaches;
 }
+
+private Cache getPersistentServerCaches() {
+/* Delay creation of this map until it's needed */
+if (persistentServerCaches == null) {
+synchronized(this) {
+if (persistentServerCaches == null) {
+persistentServerCaches = buildCache(maxPersistenceTimeToLiveMs, true);
+}
+}
+}
+return persistentServerCaches;
+}
+
+private Cache buildCache(final int ttl, final boolean isPersistent) {
+return CacheBuilder.newBuilder()
+.expireAfterAccess(ttl, TimeUnit.MILLISECONDS)
+.ticker(getTicker())
+.removalListener(new RemovalListener(){
+@Override
+public void onRemoval(RemovalNotification notification) {
+if (isPersistent || !notification.getValue().getUsePersistentCache()) {
+Closeables.closeAllQuietly(Collections.singletonList(notification.getValue()));
+}
+}
+})
+.build();
+}
 
-@Override
+private void evictInactiveEntries(long bytesNeeded) {
+CacheEntry[] entries = getPersistentServerCaches().asMap().values().toArray(new CacheEntry[]{});
+Arrays.sort(entries);
+long available = this.getMemoryManager().getAvailableMemory();
+for (int i = 0; i < entries.length && available < bytesNeeded; i++) {
+CacheEntry entry = entries[i];
+if (!entry.isLive()) {
+getServerCaches().invalidate(entry.getCacheId());
+getPersistentServerCaches().invalidate(entry.getCacheId());
+available = this.getMemoryManager().getAvailableMemory();
+}
+}
+}
+
+private CacheEntry maybeGet(ImmutableBytesPtr cacheId) {
--- End diff --

Think we can just call it "get" or "getIfPresent" and merge 
{{maybePromote}} into this call.
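
For illustration, a sketch of what the merged lookup could look like, reusing the getter names from the quoted diff (type parameters omitted, as in the quote; not the final code):

    // Hypothetical sketch: fold maybePromote into a single getIfPresent-style lookup.
    private CacheEntry getIfPresent(ImmutableBytesPtr cacheId) {
        // Promote a persistent entry into the live cache if one exists...
        CacheEntry persistentEntry = getPersistentServerCaches().getIfPresent(cacheId);
        if (persistentEntry != null) {
            getServerCaches().put(cacheId, persistentEntry);
        }
        // ...then answer from the live cache.
        return getServerCaches().getIfPresent(cacheId);
    }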


---


[GitHub] phoenix pull request #298: PHOENIX-4666 Persistent subquery cache for hash j...

2018-04-20 Thread maryannxue
Github user maryannxue commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/298#discussion_r183126183
  
--- Diff: phoenix-core/src/it/java/org/apache/phoenix/end2end/join/HashJoinPersistentCacheIT.java ---
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end.join;
+
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.util.Properties;
+
+import org.apache.phoenix.end2end.join.HashJoinCacheIT.InvalidateHashCache;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.TestUtil;
+import org.junit.Test;
+
+public class HashJoinPersistentCacheIT extends BaseJoinIT {
+
+@Override
+protected String getTableName(Connection conn, String virtualName) throws Exception {
+String realName = super.getTableName(conn, virtualName);
+TestUtil.addCoprocessor(conn, SchemaUtil.normalizeFullTableName(realName), InvalidateHashCache.class);
+return realName;
+}
+
+@Test
+public void testPersistentCache() throws Exception {
+  Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+  Connection conn = DriverManager.getConnection(getUrl(), props);
+
+  createTestTable(getUrl(), "CREATE TABLE IF NOT EXISTS states (state CHAR(2) NOT NULL, name VARCHAR NOT NULL CONSTRAINT my_pk PRIMARY KEY (state, name))");
+  createTestTable(getUrl(), "CREATE TABLE IF NOT EXISTS cities (state CHAR(2) NOT NULL, city VARCHAR NOT NULL, population BIGINT CONSTRAINT my_pk PRIMARY KEY (state, city))");
+
+  conn.prepareStatement("UPSERT INTO states VALUES ('CA', 'California')").executeUpdate();
+  conn.prepareStatement("UPSERT INTO states VALUES ('AZ', 'Arizona')").executeUpdate();
+  conn.prepareStatement("UPSERT INTO cities VALUES ('CA', 'San Francisco', 50000)").executeUpdate();
+  conn.prepareStatement("UPSERT INTO cities VALUES ('CA', 'Sacramento', 3000)").executeUpdate();
+  conn.prepareStatement("UPSERT INTO cities VALUES ('AZ', 'Phoenix', 20000)").executeUpdate();
+  conn.commit();
+
+  /* First, run query without using the persistent cache. This should return
+   * different results after an UPSERT takes place.
+   */
+  ResultSet rs = conn.prepareStatement("SELECT SUM(population) FROM states s JOIN cities c ON c.state = s.state").executeQuery();
+  rs.next();
+  int population1 = rs.getInt(1);
+
+  conn.prepareStatement("UPSERT INTO cities VALUES ('CA', 'Mt View', 1500)").executeUpdate();
+  conn.commit();
+  rs = conn.prepareStatement("SELECT SUM(population) FROM states s JOIN cities c ON c.state = s.state").executeQuery();
+  rs.next();
+  int population2 = rs.getInt(1);
+
+  assertEquals(73000, population1);
+  assertEquals(74500, population2);
+
+  /* Second, run query using the persistent cache. This should return the
+   * same results after an UPSERT takes place.
+   */
+  rs = conn.prepareStatement("SELECT /*+ USE_PERSISTENT_CACHE */ SUM(population) FROM states s JOIN cities c ON c.state = s.state").executeQuery();
+  rs.next();
+  int population3 = rs.getInt(1);
+
+  conn.prepareStatement("UPSERT INTO cities VALUES ('CA', 'Palo Alto', 2000)").executeUpdate();
+  conn.commit();
+  rs = conn.prepareStatement("SELECT /*+ USE_PERSISTENT_CACHE */ SUM(population) FROM states s JOIN cities c ON c.state = s.state").executeQuery();
+  rs.next();
+  int population4 = rs.getInt(1);
+  rs = conn.prepareStatement("SELECT SUM(population) FROM states s JOIN cities c ON c.state = s.state").executeQuery();

[GitHub] phoenix pull request #298: PHOENIX-4666 Persistent subquery cache for hash j...

2018-04-17 Thread ortutay
GitHub user ortutay opened a pull request:

https://github.com/apache/phoenix/pull/298

PHOENIX-4666 Persistent subquery cache for hash joins



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ortutay/phoenix PHOENIX-4666-subquery-cache

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/298.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #298


commit 77460c37697b2a112cf6ed345356a16da08dd51c
Author: Marcell Ortutay 
Date:   2018-03-29T19:59:03Z

PHOENIX-4666 Persistent subquery cache for hash joins

commit d1fc310e3d0df772c0aeb1673d2b64d01f495d27
Author: Marcell Ortutay 
Date:   2018-04-17T20:51:00Z

PHOENIX-4666 Add tests for TenantCacheTest




---