Build failed in Jenkins: Phoenix | Master #1472

2016-11-04 Thread Apache Jenkins Server
See 

Changes:

[jamestaylor] PHOENIX-3449 Ignore hanging IndexExtendedIT tests until they can be

--
[...truncated 790234 lines...]
2016-11-04 09:58:17,981 INFO  [B.defaultRpcServer.handler=3,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:18,030 INFO  [B.defaultRpcServer.handler=0,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:18,042 INFO  [B.defaultRpcServer.handler=4,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:18,050 INFO  [B.defaultRpcServer.handler=1,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:18,142 INFO  [B.defaultRpcServer.handler=2,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:18,706 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:18,897 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:21,706 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:21,897 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:24,706 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:24,897 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:25,426 INFO  [B.defaultRpcServer.handler=1,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:58:25,495 INFO  [B.defaultRpcServer.handler=3,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:58:25,495 INFO  [B.defaultRpcServer.handler=2,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:58:25,495 INFO  [B.defaultRpcServer.handler=4,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:58:25,506 INFO  [B.defaultRpcServer.handler=0,queue=0,port=33218] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #1363, waiting for 1  actions to finish
2016-11-04 09:58:27,707 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@4c030235] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:27,898 DEBUG [org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@3b4f6a6e] org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* neededReplications = 0 pendingReplications = 0
2016-11-04 09:58:27,991 INFO  [B.defaultRpcServer.handler=3,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:28,041 INFO  [B.defaultRpcServer.handler=0,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:28,052 INFO  [B.defaultRpcServer.handler=4,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:28,064 INFO  [B.defaultRpcServer.handler=1,queue=0,port=60578] org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #631, waiting for 1  actions to finish
2016-11-04 09:58:28,153 INFO  [B.defaultRpcServer.handler=2,queue=0,port=60578] org.apache.had

Apache Phoenix - Timeout crawler - Build https://builds.apache.org/job/Phoenix-calcite/35/

2016-11-04 Thread Apache Jenkins Server
[...truncated 28 lines...]
Looking at the log, list of test(s) that timed-out:

Build:
https://builds.apache.org/job/Phoenix-calcite/35/


Affected test class(es):
Set(['org.apache.phoenix.end2end.ViewIT'])


Build step 'Execute shell' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


phoenix git commit: PHOENIX-3199 ServerCacheClient sends cache to all regions unnecessarily (chenglei)

2016-11-04 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/master 51918bb81 -> e4e1570b8


PHOENIX-3199 ServerCacheClient sends cache to all regions unnecessarily 
(chenglei)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/e4e1570b
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/e4e1570b
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/e4e1570b

Branch: refs/heads/master
Commit: e4e1570b83ca0141fc19421a0bd5217ebb37f512
Parents: 51918bb
Author: James Taylor 
Authored: Fri Nov 4 09:15:19 2016 -0700
Committer: James Taylor 
Committed: Fri Nov 4 09:15:19 2016 -0700

--
 .../org/apache/phoenix/cache/ServerCacheClient.java   | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/e4e1570b/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java b/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
index 67fc410..0383251 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
@@ -37,6 +37,7 @@ import java.util.concurrent.TimeUnit;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HRegionLocation;
 import org.apache.hadoop.hbase.client.HTable;
 import org.apache.hadoop.hbase.client.HTableInterface;
@@ -81,6 +82,7 @@ import com.google.common.collect.ImmutableSet;
  */
 public class ServerCacheClient {
 public static final int UUID_LENGTH = Bytes.SIZEOF_LONG;
+public static final byte[] KEY_IN_FIRST_REGION = new byte[]{0};
 private static final Log LOG = LogFactory.getLog(ServerCacheClient.class);
 private static final Random RANDOM = new Random();
 private final PhoenixConnection connection;
@@ -177,7 +179,7 @@ public class ServerCacheClient {
 // Call RPC once per server
 servers.add(entry);
 if (LOG.isDebugEnabled()) {LOG.debug(addCustomAnnotations("Adding cache entry to be sent for " + entry, connection));}
-final byte[] key = entry.getRegionInfo().getStartKey();
+final byte[] key = getKeyInRegion(entry.getRegionInfo().getStartKey());
 final HTableInterface htable = services.getTable(cacheUsingTableRef.getTable().getPhysicalName().getBytes());
 closeables.add(htable);
 futures.add(executor.submit(new JobCallable() {
@@ -319,7 +321,7 @@ public class ServerCacheClient {
 for (HRegionLocation entry : locations) {
 if (remainingOnServers.contains(entry)) {  // Call once per server
 try {
-byte[] key = entry.getRegionInfo().getStartKey();
+byte[] key = getKeyInRegion(entry.getRegionInfo().getStartKey());
 iterateOverTable.coprocessorService(ServerCachingService.class, key, key, new Batch.Call() {
 @Override
@@ -382,4 +384,12 @@ public class ServerCacheClient {
 assert(uuid.length == Bytes.SIZEOF_LONG);
 return Long.toString(Bytes.toLong(uuid));
 }
+
+private static byte[] getKeyInRegion(byte[] regionStartKey) {
+assert (regionStartKey != null);
+if (Bytes.equals(regionStartKey, HConstants.EMPTY_START_ROW)) {
+return KEY_IN_FIRST_REGION;
+}
+return regionStartKey;
+}
 }
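
Why this works: HBase's Table.coprocessorService(service, startKey, endKey, callable) invokes the endpoint on every region whose key range intersects [startKey, endKey], and an empty byte[] end key means "through the last region of the table". The first region's start key is exactly that empty array (HConstants.EMPTY_START_ROW), so the old coprocessorService(ServerCachingService.class, key, key, ...) call degenerated into an all-regions broadcast whenever key came from the first region. Mapping the empty key to the single byte {0}, which still sorts inside the first region, keeps the range bounded to that one region. A minimal standalone sketch of the idea (the class name and sample row key below are hypothetical, not Phoenix source):

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.util.Bytes;

public class KeyInRegionSketch {

    // Mirrors the patch's constant: a single zero byte sorts at or before
    // every non-empty row key, so (assuming no region split key sorts at or
    // below {0}) it can only land in the table's first region.
    private static final byte[] KEY_IN_FIRST_REGION = new byte[] { 0 };

    // The empty byte[] is ambiguous in HBase range APIs: as a start key it
    // means "start of table", but as an end key it means "end of table".
    // Swapping it for {0} keeps coprocessorService(service, key, key, ...)
    // bounded to one region instead of fanning out to all of them.
    static byte[] keyInRegion(byte[] regionStartKey) {
        if (Bytes.equals(regionStartKey, HConstants.EMPTY_START_ROW)) {
            return KEY_IN_FIRST_REGION;
        }
        return regionStartKey;
    }

    public static void main(String[] args) {
        byte[] firstRegionStart = HConstants.EMPTY_START_ROW;  // zero-length
        byte[] otherRegionStart = Bytes.toBytes("row-0500");   // hypothetical split key
        System.out.println(Bytes.toStringBinary(keyInRegion(firstRegionStart))); // \x00
        System.out.println(Bytes.toStringBinary(keyInRegion(otherRegionStart))); // row-0500
    }
}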



phoenix git commit: PHOENIX-3199 ServerCacheClient sends cache to all regions unnecessarily (chenglei)

2016-11-04 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 cc2781d7d -> 3de5f8027


PHOENIX-3199 ServerCacheClient sends cache to all regions unnecessarily 
(chenglei)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/3de5f802
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/3de5f802
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/3de5f802

Branch: refs/heads/4.x-HBase-1.1
Commit: 3de5f8027bff96ea29f0e26d09a583649d6cf44e
Parents: cc2781d
Author: James Taylor 
Authored: Fri Nov 4 09:15:19 2016 -0700
Committer: James Taylor 
Committed: Fri Nov 4 09:16:59 2016 -0700

--
 .../org/apache/phoenix/cache/ServerCacheClient.java   | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/3de5f802/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java b/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
index 67fc410..0383251 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
@@ -37,6 +37,7 @@ import java.util.concurrent.TimeUnit;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HRegionLocation;
 import org.apache.hadoop.hbase.client.HTable;
 import org.apache.hadoop.hbase.client.HTableInterface;
@@ -81,6 +82,7 @@ import com.google.common.collect.ImmutableSet;
  */
 public class ServerCacheClient {
 public static final int UUID_LENGTH = Bytes.SIZEOF_LONG;
+public static final byte[] KEY_IN_FIRST_REGION = new byte[]{0};
 private static final Log LOG = LogFactory.getLog(ServerCacheClient.class);
 private static final Random RANDOM = new Random();
 private final PhoenixConnection connection;
@@ -177,7 +179,7 @@ public class ServerCacheClient {
 // Call RPC once per server
 servers.add(entry);
 if (LOG.isDebugEnabled()) {LOG.debug(addCustomAnnotations("Adding cache entry to be sent for " + entry, connection));}
-final byte[] key = entry.getRegionInfo().getStartKey();
+final byte[] key = getKeyInRegion(entry.getRegionInfo().getStartKey());
 final HTableInterface htable = services.getTable(cacheUsingTableRef.getTable().getPhysicalName().getBytes());
 closeables.add(htable);
 futures.add(executor.submit(new JobCallable() {
@@ -319,7 +321,7 @@ public class ServerCacheClient {
 for (HRegionLocation entry : locations) {
 if (remainingOnServers.contains(entry)) {  // Call once per server
 try {
-byte[] key = entry.getRegionInfo().getStartKey();
+byte[] key = getKeyInRegion(entry.getRegionInfo().getStartKey());
 iterateOverTable.coprocessorService(ServerCachingService.class, key, key, new Batch.Call() {
 @Override
@@ -382,4 +384,12 @@ public class ServerCacheClient {
 assert(uuid.length == Bytes.SIZEOF_LONG);
 return Long.toString(Bytes.toLong(uuid));
 }
+
+private static byte[] getKeyInRegion(byte[] regionStartKey) {
+assert (regionStartKey != null);
+if (Bytes.equals(regionStartKey, HConstants.EMPTY_START_ROW)) {
+return KEY_IN_FIRST_REGION;
+}
+return regionStartKey;
+}
 }



phoenix git commit: PHOENIX-3199 ServerCacheClient sends cache to all regions unnecessarily (chenglei)

2016-11-04 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 83ed28f4e -> 87421ede3


PHOENIX-3199 ServerCacheClient sends cache to all regions unnecessarily 
(chenglei)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/87421ede
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/87421ede
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/87421ede

Branch: refs/heads/4.x-HBase-0.98
Commit: 87421ede3e9c22f9e567950c6a0acf735437f3a4
Parents: 83ed28f
Author: James Taylor 
Authored: Fri Nov 4 09:15:19 2016 -0700
Committer: James Taylor 
Committed: Fri Nov 4 09:18:53 2016 -0700

--
 .../org/apache/phoenix/cache/ServerCacheClient.java   | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/87421ede/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java b/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
index 67fc410..0383251 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
@@ -37,6 +37,7 @@ import java.util.concurrent.TimeUnit;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HRegionLocation;
 import org.apache.hadoop.hbase.client.HTable;
 import org.apache.hadoop.hbase.client.HTableInterface;
@@ -81,6 +82,7 @@ import com.google.common.collect.ImmutableSet;
  */
 public class ServerCacheClient {
 public static final int UUID_LENGTH = Bytes.SIZEOF_LONG;
+public static final byte[] KEY_IN_FIRST_REGION = new byte[]{0};
 private static final Log LOG = LogFactory.getLog(ServerCacheClient.class);
 private static final Random RANDOM = new Random();
 private final PhoenixConnection connection;
@@ -177,7 +179,7 @@ public class ServerCacheClient {
 // Call RPC once per server
 servers.add(entry);
 if (LOG.isDebugEnabled()) {LOG.debug(addCustomAnnotations("Adding cache entry to be sent for " + entry, connection));}
-final byte[] key = entry.getRegionInfo().getStartKey();
+final byte[] key = getKeyInRegion(entry.getRegionInfo().getStartKey());
 final HTableInterface htable = services.getTable(cacheUsingTableRef.getTable().getPhysicalName().getBytes());
 closeables.add(htable);
 futures.add(executor.submit(new JobCallable() {
@@ -319,7 +321,7 @@ public class ServerCacheClient {
 for (HRegionLocation entry : locations) {
 if (remainingOnServers.contains(entry)) {  // Call once per server
 try {
-byte[] key = entry.getRegionInfo().getStartKey();
+byte[] key = getKeyInRegion(entry.getRegionInfo().getStartKey());
 iterateOverTable.coprocessorService(ServerCachingService.class, key, key, new Batch.Call() {
 @Override
@@ -382,4 +384,12 @@ public class ServerCacheClient {
 assert(uuid.length == Bytes.SIZEOF_LONG);
 return Long.toString(Bytes.toLong(uuid));
 }
+
+private static byte[] getKeyInRegion(byte[] regionStartKey) {
+assert (regionStartKey != null);
+if (Bytes.equals(regionStartKey, HConstants.EMPTY_START_ROW)) {
+return KEY_IN_FIRST_REGION;
+}
+return regionStartKey;
+}
 }



[2/2] phoenix git commit: PHOENIX-3199 ServerCacheClient sends cache to all regions unnecessarily (chenglei)

2016-11-04 Thread jamestaylor
PHOENIX-3199 ServerCacheClient sends cache to all regions unnecessarily 
(chenglei)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/30e6673d
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/30e6673d
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/30e6673d

Branch: refs/heads/4.8-HBase-1.2
Commit: 30e6673d95b95d7b17c2ae73aa8be7ab08725d5b
Parents: eb871be
Author: James Taylor 
Authored: Fri Nov 4 09:15:19 2016 -0700
Committer: James Taylor 
Committed: Fri Nov 4 09:24:15 2016 -0700

--
 .../org/apache/phoenix/cache/ServerCacheClient.java   | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/30e6673d/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java b/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
index 67fc410..0383251 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
@@ -37,6 +37,7 @@ import java.util.concurrent.TimeUnit;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HRegionLocation;
 import org.apache.hadoop.hbase.client.HTable;
 import org.apache.hadoop.hbase.client.HTableInterface;
@@ -81,6 +82,7 @@ import com.google.common.collect.ImmutableSet;
  */
 public class ServerCacheClient {
 public static final int UUID_LENGTH = Bytes.SIZEOF_LONG;
+public static final byte[] KEY_IN_FIRST_REGION = new byte[]{0};
 private static final Log LOG = LogFactory.getLog(ServerCacheClient.class);
 private static final Random RANDOM = new Random();
 private final PhoenixConnection connection;
@@ -177,7 +179,7 @@ public class ServerCacheClient {
 // Call RPC once per server
 servers.add(entry);
 if (LOG.isDebugEnabled()) {LOG.debug(addCustomAnnotations("Adding cache entry to be sent for " + entry, connection));}
-final byte[] key = entry.getRegionInfo().getStartKey();
+final byte[] key = getKeyInRegion(entry.getRegionInfo().getStartKey());
 final HTableInterface htable = services.getTable(cacheUsingTableRef.getTable().getPhysicalName().getBytes());
 closeables.add(htable);
 futures.add(executor.submit(new JobCallable() {
@@ -319,7 +321,7 @@ public class ServerCacheClient {
 for (HRegionLocation entry : locations) {
 if (remainingOnServers.contains(entry)) {  // Call once per server
 try {
-byte[] key = entry.getRegionInfo().getStartKey();
+byte[] key = getKeyInRegion(entry.getRegionInfo().getStartKey());
 iterateOverTable.coprocessorService(ServerCachingService.class, key, key, new Batch.Call() {
 @Override
@@ -382,4 +384,12 @@ public class ServerCacheClient {
 assert(uuid.length == Bytes.SIZEOF_LONG);
 return Long.toString(Bytes.toLong(uuid));
 }
+
+private static byte[] getKeyInRegion(byte[] regionStartKey) {
+assert (regionStartKey != null);
+if (Bytes.equals(regionStartKey, HConstants.EMPTY_START_ROW)) {
+return KEY_IN_FIRST_REGION;
+}
+return regionStartKey;
+}
 }



[1/2] phoenix git commit: PHOENIX-3439 Query using an RVC based on the base table PK is incorrectly using an index and doing a full scan instead of a point query

2016-11-04 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.8-HBase-1.2 8c6842a4b -> 30e6673d9


PHOENIX-3439 Query using an RVC based on the base table PK is incorrectly using 
an index and doing a full scan instead of a point query


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/eb871bee
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/eb871bee
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/eb871bee

Branch: refs/heads/4.8-HBase-1.2
Commit: eb871beefef375dad2059f4ba553bdd2407835f8
Parents: 8c6842a
Author: James Taylor 
Authored: Thu Nov 3 16:45:22 2016 -0700
Committer: James Taylor 
Committed: Fri Nov 4 09:23:56 2016 -0700

--
 .../main/java/org/apache/phoenix/compile/ScanRanges.java  | 10 +++++++++-
 .../org/apache/phoenix/compile/QueryOptimizerTest.java    | 10 ++++++++++
 2 files changed, 19 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/eb871bee/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
index 95eee60..19a4692 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
@@ -567,9 +567,17 @@ public class ScanRanges {
 }
 
 public int getBoundPkColumnCount() {
-return this.useSkipScanFilter ? ScanUtil.getRowKeyPosition(slotSpan, ranges.size()) : getBoundPkSpan(ranges, slotSpan);
+return this.useSkipScanFilter ? ScanUtil.getRowKeyPosition(slotSpan, ranges.size()) : Math.max(getBoundPkSpan(ranges, slotSpan), getBoundMinMaxSlotCount());
 }
 
+public int getBoundMinMaxSlotCount() {
+if (minMaxRange == KeyRange.EMPTY_RANGE || minMaxRange == KeyRange.EVERYTHING_RANGE) {
+return 0;
+}
+// The minMaxRange is always a single key
+return 1 + slotSpan[0];
+}
+
 public int getBoundSlotCount() {
 int count = 0;
 boolean hasUnbound = false;

http://git-wip-us.apache.org/repos/asf/phoenix/blob/eb871bee/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryOptimizerTest.java
--
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryOptimizerTest.java b/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryOptimizerTest.java
index 5f452f1..c50a932 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryOptimizerTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryOptimizerTest.java
@@ -637,6 +637,16 @@ public class QueryOptimizerTest extends BaseConnectionlessQueryTest {
 assertEquals("IDX", plan.getTableRef().getTable().getTableName().getString());
 }
 
+@Test
+public void testTableUsedWithQueryMore() throws Exception {
+Connection conn = DriverManager.getConnection(getUrl());
+conn.createStatement().execute("CREATE TABLE t (k1 CHAR(3) NOT NULL, k2 CHAR(15) NOT NULL, k3 DATE NOT NULL, k4 CHAR(15) NOT NULL, CONSTRAINT pk PRIMARY KEY (k1,k2,k3,k4))");
+conn.createStatement().execute("CREATE INDEX idx ON t(k1,k3,k2,k4)");
+PhoenixStatement stmt = conn.createStatement().unwrap(PhoenixStatement.class);
+QueryPlan plan = stmt.optimizeQuery("SELECT * FROM t WHERE (k1,k2,k3,k4) > ('001','001xx03DHml',to_date('2015-10-21 09:50:55.0'),'017xx022FuI')");
+assertEquals("T", plan.getTableRef().getTable().getTableName().getString());
+}
+
 private void assertPlanDetails(PreparedStatement stmt, String expectedPkCols, String expectedPkColsDataTypes, boolean expectedHasOrderBy, int expectedLimit) throws SQLException {
 Connection conn = stmt.getConnection();
 QueryPlan plan = PhoenixRuntime.getOptimizedQueryPlan(stmt);


[1/2] phoenix git commit: PHOENIX-3439 Query using an RVC based on the base table PK is incorrectly using an index and doing a full scan instead of a point query

2016-11-04 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.8-HBase-1.1 0c878fa01 -> e5eda96b3


PHOENIX-3439 Query using an RVC based on the base table PK is incorrectly using 
an index and doing a full scan instead of a point query


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/72468988
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/72468988
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/72468988

Branch: refs/heads/4.8-HBase-1.1
Commit: 7246898826b3fc2fa8774635368615560c4d41c9
Parents: 0c878fa
Author: James Taylor 
Authored: Thu Nov 3 16:45:22 2016 -0700
Committer: James Taylor 
Committed: Fri Nov 4 09:25:33 2016 -0700

--
 .../main/java/org/apache/phoenix/compile/ScanRanges.java  | 10 +++++++++-
 .../org/apache/phoenix/compile/QueryOptimizerTest.java    | 10 ++++++++++
 2 files changed, 19 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/72468988/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
index 95eee60..19a4692 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
@@ -567,9 +567,17 @@ public class ScanRanges {
 }
 
 public int getBoundPkColumnCount() {
-return this.useSkipScanFilter ? ScanUtil.getRowKeyPosition(slotSpan, ranges.size()) : getBoundPkSpan(ranges, slotSpan);
+return this.useSkipScanFilter ? ScanUtil.getRowKeyPosition(slotSpan, ranges.size()) : Math.max(getBoundPkSpan(ranges, slotSpan), getBoundMinMaxSlotCount());
 }
 
+public int getBoundMinMaxSlotCount() {
+if (minMaxRange == KeyRange.EMPTY_RANGE || minMaxRange == KeyRange.EVERYTHING_RANGE) {
+return 0;
+}
+// The minMaxRange is always a single key
+return 1 + slotSpan[0];
+}
+
 public int getBoundSlotCount() {
 int count = 0;
 boolean hasUnbound = false;

http://git-wip-us.apache.org/repos/asf/phoenix/blob/72468988/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryOptimizerTest.java
--
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryOptimizerTest.java b/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryOptimizerTest.java
index 5f452f1..c50a932 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryOptimizerTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryOptimizerTest.java
@@ -637,6 +637,16 @@ public class QueryOptimizerTest extends BaseConnectionlessQueryTest {
 assertEquals("IDX", plan.getTableRef().getTable().getTableName().getString());
 }
 
+@Test
+public void testTableUsedWithQueryMore() throws Exception {
+Connection conn = DriverManager.getConnection(getUrl());
+conn.createStatement().execute("CREATE TABLE t (k1 CHAR(3) NOT NULL, k2 CHAR(15) NOT NULL, k3 DATE NOT NULL, k4 CHAR(15) NOT NULL, CONSTRAINT pk PRIMARY KEY (k1,k2,k3,k4))");
+conn.createStatement().execute("CREATE INDEX idx ON t(k1,k3,k2,k4)");
+PhoenixStatement stmt = conn.createStatement().unwrap(PhoenixStatement.class);
+QueryPlan plan = stmt.optimizeQuery("SELECT * FROM t WHERE (k1,k2,k3,k4) > ('001','001xx03DHml',to_date('2015-10-21 09:50:55.0'),'017xx022FuI')");
+assertEquals("T", plan.getTableRef().getTable().getTableName().getString());
+}
+
 private void assertPlanDetails(PreparedStatement stmt, String expectedPkCols, String expectedPkColsDataTypes, boolean expectedHasOrderBy, int expectedLimit) throws SQLException {
 Connection conn = stmt.getConnection();
 QueryPlan plan = PhoenixRuntime.getOptimizedQueryPlan(stmt);



[2/2] phoenix git commit: PHOENIX-3199 ServerCacheClient sends cache to all regions unnecessarily (chenglei)

2016-11-04 Thread jamestaylor
PHOENIX-3199 ServerCacheClient sends cache to all regions unnecessarily 
(chenglei)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/e5eda96b
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/e5eda96b
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/e5eda96b

Branch: refs/heads/4.8-HBase-1.1
Commit: e5eda96b3163831a3bec0822a7101627789c6cfd
Parents: 7246898
Author: James Taylor 
Authored: Fri Nov 4 09:15:19 2016 -0700
Committer: James Taylor 
Committed: Fri Nov 4 09:25:38 2016 -0700

--
 .../org/apache/phoenix/cache/ServerCacheClient.java   | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/e5eda96b/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java b/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
index 67fc410..0383251 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
@@ -37,6 +37,7 @@ import java.util.concurrent.TimeUnit;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HRegionLocation;
 import org.apache.hadoop.hbase.client.HTable;
 import org.apache.hadoop.hbase.client.HTableInterface;
@@ -81,6 +82,7 @@ import com.google.common.collect.ImmutableSet;
  */
 public class ServerCacheClient {
 public static final int UUID_LENGTH = Bytes.SIZEOF_LONG;
+public static final byte[] KEY_IN_FIRST_REGION = new byte[]{0};
 private static final Log LOG = LogFactory.getLog(ServerCacheClient.class);
 private static final Random RANDOM = new Random();
 private final PhoenixConnection connection;
@@ -177,7 +179,7 @@ public class ServerCacheClient {
 // Call RPC once per server
 servers.add(entry);
 if (LOG.isDebugEnabled()) {LOG.debug(addCustomAnnotations("Adding cache entry to be sent for " + entry, connection));}
-final byte[] key = entry.getRegionInfo().getStartKey();
+final byte[] key = getKeyInRegion(entry.getRegionInfo().getStartKey());
 final HTableInterface htable = services.getTable(cacheUsingTableRef.getTable().getPhysicalName().getBytes());
 closeables.add(htable);
 futures.add(executor.submit(new JobCallable() {
@@ -319,7 +321,7 @@ public class ServerCacheClient {
 for (HRegionLocation entry : locations) {
 if (remainingOnServers.contains(entry)) {  // Call once per server
 try {
-byte[] key = entry.getRegionInfo().getStartKey();
+byte[] key = getKeyInRegion(entry.getRegionInfo().getStartKey());
 iterateOverTable.coprocessorService(ServerCachingService.class, key, key, new Batch.Call() {
 @Override
@@ -382,4 +384,12 @@ public class ServerCacheClient {
 assert(uuid.length == Bytes.SIZEOF_LONG);
 return Long.toString(Bytes.toLong(uuid));
 }
+
+private static byte[] getKeyInRegion(byte[] regionStartKey) {
+assert (regionStartKey != null);
+if (Bytes.equals(regionStartKey, HConstants.EMPTY_START_ROW)) {
+return KEY_IN_FIRST_REGION;
+}
+return regionStartKey;
+}
 }



[2/2] phoenix git commit: PHOENIX-3199 ServerCacheClient sends cache to all regions unnecessarily (chenglei)

2016-11-04 Thread jamestaylor
PHOENIX-3199 ServerCacheClient sends cache to all regions unnecessarily 
(chenglei)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/d56e06c9
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/d56e06c9
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/d56e06c9

Branch: refs/heads/4.8-HBase-1.0
Commit: d56e06c9ff4243d96774c28454c306490e5e41ea
Parents: 158e582
Author: James Taylor 
Authored: Fri Nov 4 09:15:19 2016 -0700
Committer: James Taylor 
Committed: Fri Nov 4 09:27:40 2016 -0700

--
 .../org/apache/phoenix/cache/ServerCacheClient.java   | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/d56e06c9/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java b/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
index 67fc410..0383251 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
@@ -37,6 +37,7 @@ import java.util.concurrent.TimeUnit;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HRegionLocation;
 import org.apache.hadoop.hbase.client.HTable;
 import org.apache.hadoop.hbase.client.HTableInterface;
@@ -81,6 +82,7 @@ import com.google.common.collect.ImmutableSet;
  */
 public class ServerCacheClient {
 public static final int UUID_LENGTH = Bytes.SIZEOF_LONG;
+public static final byte[] KEY_IN_FIRST_REGION = new byte[]{0};
 private static final Log LOG = LogFactory.getLog(ServerCacheClient.class);
 private static final Random RANDOM = new Random();
 private final PhoenixConnection connection;
@@ -177,7 +179,7 @@ public class ServerCacheClient {
 // Call RPC once per server
 servers.add(entry);
 if (LOG.isDebugEnabled()) {LOG.debug(addCustomAnnotations("Adding cache entry to be sent for " + entry, connection));}
-final byte[] key = entry.getRegionInfo().getStartKey();
+final byte[] key = getKeyInRegion(entry.getRegionInfo().getStartKey());
 final HTableInterface htable = services.getTable(cacheUsingTableRef.getTable().getPhysicalName().getBytes());
 closeables.add(htable);
 futures.add(executor.submit(new JobCallable() {
@@ -319,7 +321,7 @@ public class ServerCacheClient {
 for (HRegionLocation entry : locations) {
 if (remainingOnServers.contains(entry)) {  // Call once per server
 try {
-byte[] key = entry.getRegionInfo().getStartKey();
+byte[] key = getKeyInRegion(entry.getRegionInfo().getStartKey());
 iterateOverTable.coprocessorService(ServerCachingService.class, key, key, new Batch.Call() {
 @Override
@@ -382,4 +384,12 @@ public class ServerCacheClient {
 assert(uuid.length == Bytes.SIZEOF_LONG);
 return Long.toString(Bytes.toLong(uuid));
 }
+
+private static byte[] getKeyInRegion(byte[] regionStartKey) {
+assert (regionStartKey != null);
+if (Bytes.equals(regionStartKey, HConstants.EMPTY_START_ROW)) {
+return KEY_IN_FIRST_REGION;
+}
+return regionStartKey;
+}
 }



[1/2] phoenix git commit: PHOENIX-3439 Query using an RVC based on the base table PK is incorrectly using an index and doing a full scan instead of a point query

2016-11-04 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.8-HBase-1.0 679ad470c -> d56e06c9f


PHOENIX-3439 Query using an RVC based on the base table PK is incorrectly using 
an index and doing a full scan instead of a point query


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/158e5829
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/158e5829
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/158e5829

Branch: refs/heads/4.8-HBase-1.0
Commit: 158e5829c7c8185a3152826a20a16692630e3604
Parents: 679ad47
Author: James Taylor 
Authored: Thu Nov 3 16:45:22 2016 -0700
Committer: James Taylor 
Committed: Fri Nov 4 09:27:36 2016 -0700

--
 .../main/java/org/apache/phoenix/compile/ScanRanges.java  | 10 +++++++++-
 .../org/apache/phoenix/compile/QueryOptimizerTest.java    | 10 ++++++++++
 2 files changed, 19 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/158e5829/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
index 95eee60..19a4692 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
@@ -567,9 +567,17 @@ public class ScanRanges {
 }
 
 public int getBoundPkColumnCount() {
-return this.useSkipScanFilter ? ScanUtil.getRowKeyPosition(slotSpan, ranges.size()) : getBoundPkSpan(ranges, slotSpan);
+return this.useSkipScanFilter ? ScanUtil.getRowKeyPosition(slotSpan, ranges.size()) : Math.max(getBoundPkSpan(ranges, slotSpan), getBoundMinMaxSlotCount());
 }
 
+public int getBoundMinMaxSlotCount() {
+if (minMaxRange == KeyRange.EMPTY_RANGE || minMaxRange == KeyRange.EVERYTHING_RANGE) {
+return 0;
+}
+// The minMaxRange is always a single key
+return 1 + slotSpan[0];
+}
+
 public int getBoundSlotCount() {
 int count = 0;
 boolean hasUnbound = false;

http://git-wip-us.apache.org/repos/asf/phoenix/blob/158e5829/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryOptimizerTest.java
--
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryOptimizerTest.java b/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryOptimizerTest.java
index 5f452f1..c50a932 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryOptimizerTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryOptimizerTest.java
@@ -637,6 +637,16 @@ public class QueryOptimizerTest extends BaseConnectionlessQueryTest {
 assertEquals("IDX", plan.getTableRef().getTable().getTableName().getString());
 }
 
+@Test
+public void testTableUsedWithQueryMore() throws Exception {
+Connection conn = DriverManager.getConnection(getUrl());
+conn.createStatement().execute("CREATE TABLE t (k1 CHAR(3) NOT NULL, k2 CHAR(15) NOT NULL, k3 DATE NOT NULL, k4 CHAR(15) NOT NULL, CONSTRAINT pk PRIMARY KEY (k1,k2,k3,k4))");
+conn.createStatement().execute("CREATE INDEX idx ON t(k1,k3,k2,k4)");
+PhoenixStatement stmt = conn.createStatement().unwrap(PhoenixStatement.class);
+QueryPlan plan = stmt.optimizeQuery("SELECT * FROM t WHERE (k1,k2,k3,k4) > ('001','001xx03DHml',to_date('2015-10-21 09:50:55.0'),'017xx022FuI')");
+assertEquals("T", plan.getTableRef().getTable().getTableName().getString());
+}
+
 private void assertPlanDetails(PreparedStatement stmt, String expectedPkCols, String expectedPkColsDataTypes, boolean expectedHasOrderBy, int expectedLimit) throws SQLException {
 Connection conn = stmt.getConnection();
 QueryPlan plan = PhoenixRuntime.getOptimizedQueryPlan(stmt);



[2/2] phoenix git commit: PHOENIX-3199 ServerCacheClient sends cache to all regions unnecessarily (chenglei)

2016-11-04 Thread jamestaylor
PHOENIX-3199 ServerCacheClient sends cache to all regions unnecessarily 
(chenglei)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/b59e65c6
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/b59e65c6
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/b59e65c6

Branch: refs/heads/4.8-HBase-0.98
Commit: b59e65c6dff232c8466c792e97c05c30486186df
Parents: 2c7f6c8
Author: James Taylor 
Authored: Fri Nov 4 09:15:19 2016 -0700
Committer: James Taylor 
Committed: Fri Nov 4 09:29:15 2016 -0700

--
 .../org/apache/phoenix/cache/ServerCacheClient.java   | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/b59e65c6/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java b/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
index 67fc410..0383251 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
@@ -37,6 +37,7 @@ import java.util.concurrent.TimeUnit;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HRegionLocation;
 import org.apache.hadoop.hbase.client.HTable;
 import org.apache.hadoop.hbase.client.HTableInterface;
@@ -81,6 +82,7 @@ import com.google.common.collect.ImmutableSet;
  */
 public class ServerCacheClient {
 public static final int UUID_LENGTH = Bytes.SIZEOF_LONG;
+public static final byte[] KEY_IN_FIRST_REGION = new byte[]{0};
 private static final Log LOG = LogFactory.getLog(ServerCacheClient.class);
 private static final Random RANDOM = new Random();
 private final PhoenixConnection connection;
@@ -177,7 +179,7 @@ public class ServerCacheClient {
 // Call RPC once per server
 servers.add(entry);
 if (LOG.isDebugEnabled()) {LOG.debug(addCustomAnnotations("Adding cache entry to be sent for " + entry, connection));}
-final byte[] key = entry.getRegionInfo().getStartKey();
+final byte[] key = getKeyInRegion(entry.getRegionInfo().getStartKey());
 final HTableInterface htable = services.getTable(cacheUsingTableRef.getTable().getPhysicalName().getBytes());
 closeables.add(htable);
 futures.add(executor.submit(new JobCallable() {
@@ -319,7 +321,7 @@ public class ServerCacheClient {
 for (HRegionLocation entry : locations) {
 if (remainingOnServers.contains(entry)) {  // Call once per server
 try {
-byte[] key = entry.getRegionInfo().getStartKey();
+byte[] key = getKeyInRegion(entry.getRegionInfo().getStartKey());
 iterateOverTable.coprocessorService(ServerCachingService.class, key, key, new Batch.Call() {
 @Override
@@ -382,4 +384,12 @@ public class ServerCacheClient {
 assert(uuid.length == Bytes.SIZEOF_LONG);
 return Long.toString(Bytes.toLong(uuid));
 }
+
+private static byte[] getKeyInRegion(byte[] regionStartKey) {
+assert (regionStartKey != null);
+if (Bytes.equals(regionStartKey, HConstants.EMPTY_START_ROW)) {
+return KEY_IN_FIRST_REGION;
+}
+return regionStartKey;
+}
 }



[1/2] phoenix git commit: PHOENIX-3439 Query using an RVC based on the base table PK is incorrectly using an index and doing a full scan instead of a point query

2016-11-04 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.8-HBase-0.98 0b56aa9f6 -> b59e65c6d


PHOENIX-3439 Query using an RVC based on the base table PK is incorrectly using 
an index and doing a full scan instead of a point query


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/2c7f6c81
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/2c7f6c81
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/2c7f6c81

Branch: refs/heads/4.8-HBase-0.98
Commit: 2c7f6c816fd299f04f869a96b32d31228237836a
Parents: 0b56aa9
Author: James Taylor 
Authored: Thu Nov 3 16:45:22 2016 -0700
Committer: James Taylor 
Committed: Fri Nov 4 09:29:11 2016 -0700

--
 .../main/java/org/apache/phoenix/compile/ScanRanges.java  | 10 +++++++++-
 .../org/apache/phoenix/compile/QueryOptimizerTest.java    | 10 ++++++++++
 2 files changed, 19 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/2c7f6c81/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
index 95eee60..19a4692 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
@@ -567,9 +567,17 @@ public class ScanRanges {
 }
 
 public int getBoundPkColumnCount() {
-return this.useSkipScanFilter ? ScanUtil.getRowKeyPosition(slotSpan, ranges.size()) : getBoundPkSpan(ranges, slotSpan);
+return this.useSkipScanFilter ? ScanUtil.getRowKeyPosition(slotSpan, ranges.size()) : Math.max(getBoundPkSpan(ranges, slotSpan), getBoundMinMaxSlotCount());
 }
 
+public int getBoundMinMaxSlotCount() {
+if (minMaxRange == KeyRange.EMPTY_RANGE || minMaxRange == KeyRange.EVERYTHING_RANGE) {
+return 0;
+}
+// The minMaxRange is always a single key
+return 1 + slotSpan[0];
+}
+
 public int getBoundSlotCount() {
 int count = 0;
 boolean hasUnbound = false;

http://git-wip-us.apache.org/repos/asf/phoenix/blob/2c7f6c81/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryOptimizerTest.java
--
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryOptimizerTest.java b/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryOptimizerTest.java
index 5f452f1..c50a932 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryOptimizerTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryOptimizerTest.java
@@ -637,6 +637,16 @@ public class QueryOptimizerTest extends BaseConnectionlessQueryTest {
 assertEquals("IDX", plan.getTableRef().getTable().getTableName().getString());
 }
 
+@Test
+public void testTableUsedWithQueryMore() throws Exception {
+Connection conn = DriverManager.getConnection(getUrl());
+conn.createStatement().execute("CREATE TABLE t (k1 CHAR(3) NOT NULL, k2 CHAR(15) NOT NULL, k3 DATE NOT NULL, k4 CHAR(15) NOT NULL, CONSTRAINT pk PRIMARY KEY (k1,k2,k3,k4))");
+conn.createStatement().execute("CREATE INDEX idx ON t(k1,k3,k2,k4)");
+PhoenixStatement stmt = conn.createStatement().unwrap(PhoenixStatement.class);
+QueryPlan plan = stmt.optimizeQuery("SELECT * FROM t WHERE (k1,k2,k3,k4) > ('001','001xx03DHml',to_date('2015-10-21 09:50:55.0'),'017xx022FuI')");
+assertEquals("T", plan.getTableRef().getTable().getTableName().getString());
+}
+
 private void assertPlanDetails(PreparedStatement stmt, String expectedPkCols, String expectedPkColsDataTypes, boolean expectedHasOrderBy, int expectedLimit) throws SQLException {
 Connection conn = stmt.getConnection();
 QueryPlan plan = PhoenixRuntime.getOptimizedQueryPlan(stmt);



Build failed in Jenkins: Phoenix-4.8-HBase-1.2 #46

2016-11-04 Thread Apache Jenkins Server
See 

Changes:

[jamestaylor] PHOENIX-3439 Query using an RVC based on the base table PK is

[jamestaylor] PHOENIX-3199 ServerCacheClient sends cache to all regions unnecessarily

--
[...truncated 289 lines...]
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.159 sec - in org.apache.phoenix.hbase.index.covered.example.TestCoveredColumnIndexCodec
Running org.apache.phoenix.hbase.index.covered.example.TestColumnTracker
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.011 sec - in org.apache.phoenix.hbase.index.covered.example.TestColumnTracker
Running org.apache.phoenix.hbase.index.covered.example.TestCoveredIndexSpecifierBuilder
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.015 sec - in org.apache.phoenix.hbase.index.covered.example.TestCoveredIndexSpecifierBuilder
Running org.apache.phoenix.hbase.index.covered.data.TestIndexMemStore
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.035 sec - in org.apache.phoenix.hbase.index.covered.data.TestIndexMemStore
Running org.apache.phoenix.hbase.index.covered.update.TestIndexUpdateManager
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.009 sec - in org.apache.phoenix.hbase.index.covered.update.TestIndexUpdateManager
Running org.apache.phoenix.hbase.index.covered.TestLocalTableState
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.227 sec - in org.apache.phoenix.hbase.index.covered.TestLocalTableState
Running org.apache.phoenix.hbase.index.util.TestIndexManagementUtil
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.053 sec - in org.apache.phoenix.hbase.index.util.TestIndexManagementUtil
Running org.apache.phoenix.hbase.index.write.recovery.TestPerRegionIndexWriteCache
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.914 sec - in org.apache.phoenix.util.json.JsonUpsertExecutorTest
Running org.apache.phoenix.hbase.index.write.TestParalleIndexWriter
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.527 sec - in org.apache.phoenix.hbase.index.write.TestParalleIndexWriter
Running org.apache.phoenix.hbase.index.write.TestCachingHTableFactory
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.113 sec - in org.apache.phoenix.hbase.index.write.TestCachingHTableFactory
Running org.apache.phoenix.hbase.index.write.TestIndexWriter
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0007e230, 291504128, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 291504128 bytes for committing reserved memory.
# An error report file with more information is saved as:
# 

Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.607 sec - in org.apache.phoenix.hbase.index.write.recovery.TestPerRegionIndexWriteCache
Running org.apache.phoenix.hbase.index.write.TestWALRecoveryCaching
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.004 sec - in org.apache.phoenix.hbase.index.write.TestWALRecoveryCaching
Running org.apache.phoenix.hbase.index.write.TestParalleWriterIndexCommitter
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.189 sec - in org.apache.phoenix.hbase.index.write.TestParalleWriterIndexCommitter
Running org.apache.phoenix.hbase.index.parallel.TestThreadPoolManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.022 sec - in org.apache.phoenix.hbase.index.parallel.TestThreadPoolManager
Running org.apache.phoenix.hbase.index.parallel.TestThreadPoolBuilder
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.002 sec - in org.apache.phoenix.hbase.index.parallel.TestThreadPoolBuilder
Running org.apache.phoenix.filter.DistinctPrefixFilterTest
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.042 sec - in org.apache.phoenix.filter.DistinctPrefixFilterTest
Running org.apache.phoenix.filter.SkipScanFilterTest
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.118 sec - in org.apache.phoenix.filter.SkipScanFilterTest
Running org.apache.phoenix.filter.SkipScanFilterIntersectTest
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.046 sec - in org.apache.phoenix.filter.SkipScanFilterIntersectTest
Running org.apache.phoenix.filter.SkipScanBigFilterTest
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.911 sec - in org.apache.phoenix.jdbc.SecureUserConnectionsTest
Running org.apache.phoenix.cache.JodaTimezoneCacheTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.482 sec - in org.apache.phoenix.filter.SkipScanBigFilterTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.022 sec - 

[phoenix] Git Push Summary

2016-11-04 Thread larsh
Repository: phoenix
Updated Tags:  refs/tags/v4.8.1-HBase-0.98 [created] 9fd36c759
  refs/tags/v4.8.2-HBase-0.98-rc0 [created] b59e65c6d


[phoenix] Git Push Summary

2016-11-04 Thread larsh
Repository: phoenix
Updated Tags:  refs/tags/v4.8.1-HBase-1.0 [created] 0156b018a
  refs/tags/v4.8.2-HBase-1.0-rc0 [created] d56e06c9f


phoenix git commit: Prepare 4.8.2

2016-11-04 Thread larsh
Repository: phoenix
Updated Branches:
  refs/heads/4.8-HBase-0.98 b59e65c6d -> a9a3416a4


Prepare 4.8.2


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/a9a3416a
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/a9a3416a
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/a9a3416a

Branch: refs/heads/4.8-HBase-0.98
Commit: a9a3416a445a0a3e940504159190932ed2c53dfa
Parents: b59e65c
Author: Lars Hofhansl 
Authored: Fri Nov 4 10:26:00 2016 -0700
Committer: Lars Hofhansl 
Committed: Fri Nov 4 10:26:00 2016 -0700

--
 phoenix-assembly/pom.xml   | 2 +-
 phoenix-client/pom.xml | 2 +-
 phoenix-core/pom.xml   | 2 +-
 phoenix-flume/pom.xml  | 2 +-
 phoenix-hive/pom.xml   | 2 +-
 phoenix-pherf/pom.xml  | 2 +-
 phoenix-pig/pom.xml| 2 +-
 phoenix-queryserver-client/pom.xml | 2 +-
 phoenix-queryserver/pom.xml| 2 +-
 phoenix-server/pom.xml | 2 +-
 phoenix-spark/pom.xml  | 2 +-
 phoenix-tracing-webapp/pom.xml | 2 +-
 pom.xml| 2 +-
 13 files changed, 13 insertions(+), 13 deletions(-)
--
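
A lock-step bump of thirteen module poms like the one summarized above is normally generated rather than hand-edited. One way to produce it (an assumption about tooling, not a record of what was actually run for this release) is the versions-maven-plugin:

mvn versions:set -DnewVersion=4.8.2-HBase-0.98
mvn versions:commit

versions:set rewrites the version in the parent pom and in each child module's <parent> block; versions:commit removes the pom.xml.versionsBackup files it leaves behind.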


http://git-wip-us.apache.org/repos/asf/phoenix/blob/a9a3416a/phoenix-assembly/pom.xml
--
diff --git a/phoenix-assembly/pom.xml b/phoenix-assembly/pom.xml
index 5e79cd0..5f4c2db 100644
--- a/phoenix-assembly/pom.xml
+++ b/phoenix-assembly/pom.xml
@@ -27,7 +27,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-0.98</version>
+    <version>4.8.2-HBase-0.98</version>
   </parent>
   <artifactId>phoenix-assembly</artifactId>
   <name>Phoenix Assembly</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/a9a3416a/phoenix-client/pom.xml
--
diff --git a/phoenix-client/pom.xml b/phoenix-client/pom.xml
index 79f2313..3f61071 100644
--- a/phoenix-client/pom.xml
+++ b/phoenix-client/pom.xml
@@ -27,7 +27,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-0.98</version>
+    <version>4.8.2-HBase-0.98</version>
   </parent>
   <artifactId>phoenix-client</artifactId>
   <name>Phoenix Client</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/a9a3416a/phoenix-core/pom.xml
--
diff --git a/phoenix-core/pom.xml b/phoenix-core/pom.xml
index 6257fe8..7e6e3f7 100644
--- a/phoenix-core/pom.xml
+++ b/phoenix-core/pom.xml
@@ -4,7 +4,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-0.98</version>
+    <version>4.8.2-HBase-0.98</version>
   </parent>
   <artifactId>phoenix-core</artifactId>
   <name>Phoenix Core</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/a9a3416a/phoenix-flume/pom.xml
--
diff --git a/phoenix-flume/pom.xml b/phoenix-flume/pom.xml
index 0cc1ef9..80ed0a9 100644
--- a/phoenix-flume/pom.xml
+++ b/phoenix-flume/pom.xml
@@ -26,7 +26,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-0.98</version>
+    <version>4.8.2-HBase-0.98</version>
   </parent>
   <artifactId>phoenix-flume</artifactId>
   <name>Phoenix - Flume</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/a9a3416a/phoenix-hive/pom.xml
--
diff --git a/phoenix-hive/pom.xml b/phoenix-hive/pom.xml
index 1829e14..fb10196 100644
--- a/phoenix-hive/pom.xml
+++ b/phoenix-hive/pom.xml
@@ -27,7 +27,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-0.98</version>
+    <version>4.8.2-HBase-0.98</version>
   </parent>
   <artifactId>phoenix-hive</artifactId>
   <name>Phoenix - Hive</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/a9a3416a/phoenix-pherf/pom.xml
--
diff --git a/phoenix-pherf/pom.xml b/phoenix-pherf/pom.xml
index bb419e9..56501ca 100644
--- a/phoenix-pherf/pom.xml
+++ b/phoenix-pherf/pom.xml
@@ -15,7 +15,7 @@
 	<parent>
 		<groupId>org.apache.phoenix</groupId>
 		<artifactId>phoenix</artifactId>
-		<version>4.8.1-HBase-0.98</version>
+		<version>4.8.2-HBase-0.98</version>
 	</parent>
 
 	<artifactId>phoenix-pherf</artifactId>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/a9a3416a/phoenix-pig/pom.xml
--
diff --git a/phoenix-pig/pom.xml b/phoenix-pig/pom.xml
index 2aeedc9..55fd32f 100644
--- a/phoenix-pig/pom.xml
+++ b/phoenix-pig/pom.xml
@@ -26,7 +26,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-0.98</version>
+    <version>4.8.2-HBase-0.98</version>
   </parent>
   <artifactId>phoenix-pig</artifactId>
   <name>Phoenix - Pig</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/a9a3416a/phoenix-queryserver-client/pom.xml
--
diff --git a/phoenix-queryserver-client/pom.xml b/phoenix-queryserver-client/pom.xml
index 2cea4eb..c844c33 100644
--- a/phoenix-queryserver-client/pom.xml
+++ b/phoenix-queryserver-client/pom.xml
@@ -27,7 +27,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-0.98</version>
+    <version>4.8.2-HBase-0.98</version>
   </parent>
   <artifactId>phoenix-queryserver-client</artifactId>


[phoenix] Git Push Summary

2016-11-04 Thread larsh
Repository: phoenix
Updated Tags:  refs/tags/v4.8.2-HBase-0.98-rc0 b59e65c6d -> a9a3416a4


[phoenix] Git Push Summary

2016-11-04 Thread larsh
Repository: phoenix
Updated Tags:  refs/tags/v4.8.2-HBase-1.0-rc0 d56e06c9f -> 051fb52b2


phoenix git commit: Prepare 4.8.2

2016-11-04 Thread larsh
Repository: phoenix
Updated Branches:
  refs/heads/4.8-HBase-1.0 d56e06c9f -> 051fb52b2


Prepare 4.8.2


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/051fb52b
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/051fb52b
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/051fb52b

Branch: refs/heads/4.8-HBase-1.0
Commit: 051fb52b26ac2a87eaad33579f3cd967b40c0c17
Parents: d56e06c
Author: Lars Hofhansl 
Authored: Fri Nov 4 10:28:50 2016 -0700
Committer: Lars Hofhansl 
Committed: Fri Nov 4 10:28:50 2016 -0700

--
 phoenix-assembly/pom.xml   | 2 +-
 phoenix-client/pom.xml | 2 +-
 phoenix-core/pom.xml   | 2 +-
 phoenix-flume/pom.xml  | 2 +-
 phoenix-hive/pom.xml   | 2 +-
 phoenix-pherf/pom.xml  | 2 +-
 phoenix-pig/pom.xml| 2 +-
 phoenix-queryserver-client/pom.xml | 2 +-
 phoenix-queryserver/pom.xml| 2 +-
 phoenix-server/pom.xml | 2 +-
 phoenix-spark/pom.xml  | 2 +-
 phoenix-tracing-webapp/pom.xml | 2 +-
 pom.xml| 2 +-
 13 files changed, 13 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/051fb52b/phoenix-assembly/pom.xml
--
diff --git a/phoenix-assembly/pom.xml b/phoenix-assembly/pom.xml
index 07bf9fe..5175c80 100644
--- a/phoenix-assembly/pom.xml
+++ b/phoenix-assembly/pom.xml
@@ -27,7 +27,7 @@
   
 org.apache.phoenix
 phoenix
-4.8.1-HBase-1.0
+4.8.2-HBase-1.0
   
   phoenix-assembly
   Phoenix Assembly

http://git-wip-us.apache.org/repos/asf/phoenix/blob/051fb52b/phoenix-client/pom.xml
--
diff --git a/phoenix-client/pom.xml b/phoenix-client/pom.xml
index 06723ed..4fac801 100644
--- a/phoenix-client/pom.xml
+++ b/phoenix-client/pom.xml
@@ -27,7 +27,7 @@
   
 org.apache.phoenix
 phoenix
-4.8.1-HBase-1.0
+4.8.2-HBase-1.0
   
   phoenix-client
   Phoenix Client

http://git-wip-us.apache.org/repos/asf/phoenix/blob/051fb52b/phoenix-core/pom.xml
--
diff --git a/phoenix-core/pom.xml b/phoenix-core/pom.xml
index 9af3668..ba725db 100644
--- a/phoenix-core/pom.xml
+++ b/phoenix-core/pom.xml
@@ -4,7 +4,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-1.0</version>
+    <version>4.8.2-HBase-1.0</version>
   </parent>
   <artifactId>phoenix-core</artifactId>
   <name>Phoenix Core</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/051fb52b/phoenix-flume/pom.xml
--
diff --git a/phoenix-flume/pom.xml b/phoenix-flume/pom.xml
index 2daf13d..e054fc0 100644
--- a/phoenix-flume/pom.xml
+++ b/phoenix-flume/pom.xml
@@ -26,7 +26,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-1.0</version>
+    <version>4.8.2-HBase-1.0</version>
   </parent>
   <artifactId>phoenix-flume</artifactId>
   <name>Phoenix - Flume</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/051fb52b/phoenix-hive/pom.xml
--
diff --git a/phoenix-hive/pom.xml b/phoenix-hive/pom.xml
index de666a4..a7e84d0 100644
--- a/phoenix-hive/pom.xml
+++ b/phoenix-hive/pom.xml
@@ -27,7 +27,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-1.0</version>
+    <version>4.8.2-HBase-1.0</version>
  </parent>
   <artifactId>phoenix-hive</artifactId>
   <name>Phoenix - Hive</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/051fb52b/phoenix-pherf/pom.xml
--
diff --git a/phoenix-pherf/pom.xml b/phoenix-pherf/pom.xml
index 9913cf0..b1b8afa 100644
--- a/phoenix-pherf/pom.xml
+++ b/phoenix-pherf/pom.xml
@@ -15,7 +15,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-1.0</version>
+    <version>4.8.2-HBase-1.0</version>
   </parent>
 
   <artifactId>phoenix-pherf</artifactId>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/051fb52b/phoenix-pig/pom.xml
--
diff --git a/phoenix-pig/pom.xml b/phoenix-pig/pom.xml
index 9dd3d63..855c730 100644
--- a/phoenix-pig/pom.xml
+++ b/phoenix-pig/pom.xml
@@ -26,7 +26,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-1.0</version>
+    <version>4.8.2-HBase-1.0</version>
   </parent>
   <artifactId>phoenix-pig</artifactId>
   <name>Phoenix - Pig</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/051fb52b/phoenix-queryserver-client/pom.xml
--
diff --git a/phoenix-queryserver-client/pom.xml b/phoenix-queryserver-client/pom.xml
index 2aa2d4d..a66061c 100644
--- a/phoenix-queryserver-client/pom.xml
+++ b/phoenix-queryserver-client/pom.xml
@@ -27,7 +27,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-1.0</version>
+    <version>4.8.2-HBase-1.0</version>
   </parent>
   <artifactId>phoenix-queryserver-client</artifactId>
   <name>Phoenix Query Se

phoenix git commit: Prepare 4.8.2

2016-11-04 Thread larsh
Repository: phoenix
Updated Branches:
  refs/heads/4.8-HBase-1.1 e5eda96b3 -> 5f6929edc


Prepare 4.8.2


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/5f6929ed
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/5f6929ed
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/5f6929ed

Branch: refs/heads/4.8-HBase-1.1
Commit: 5f6929edca9c0bfe4908b2070ccad5805135a2b6
Parents: e5eda96
Author: Lars Hofhansl 
Authored: Fri Nov 4 10:30:49 2016 -0700
Committer: Lars Hofhansl 
Committed: Fri Nov 4 10:30:49 2016 -0700

--
 phoenix-assembly/pom.xml   | 2 +-
 phoenix-client/pom.xml | 2 +-
 phoenix-core/pom.xml   | 2 +-
 phoenix-flume/pom.xml  | 2 +-
 phoenix-hive/pom.xml   | 2 +-
 phoenix-pherf/pom.xml  | 2 +-
 phoenix-pig/pom.xml| 2 +-
 phoenix-queryserver-client/pom.xml | 2 +-
 phoenix-queryserver/pom.xml| 2 +-
 phoenix-server/pom.xml | 2 +-
 phoenix-spark/pom.xml  | 2 +-
 phoenix-tracing-webapp/pom.xml | 2 +-
 pom.xml| 2 +-
 13 files changed, 13 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/5f6929ed/phoenix-assembly/pom.xml
--
diff --git a/phoenix-assembly/pom.xml b/phoenix-assembly/pom.xml
index c31cf40..f3ae34f 100644
--- a/phoenix-assembly/pom.xml
+++ b/phoenix-assembly/pom.xml
@@ -27,7 +27,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-1.1</version>
+    <version>4.8.2-HBase-1.1</version>
   </parent>
   <artifactId>phoenix-assembly</artifactId>
   <name>Phoenix Assembly</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/5f6929ed/phoenix-client/pom.xml
--
diff --git a/phoenix-client/pom.xml b/phoenix-client/pom.xml
index f1fc626..1d73210 100644
--- a/phoenix-client/pom.xml
+++ b/phoenix-client/pom.xml
@@ -27,7 +27,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-1.1</version>
+    <version>4.8.2-HBase-1.1</version>
   </parent>
   <artifactId>phoenix-client</artifactId>
   <name>Phoenix Client</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/5f6929ed/phoenix-core/pom.xml
--
diff --git a/phoenix-core/pom.xml b/phoenix-core/pom.xml
index 2f99f9d..d36769c 100644
--- a/phoenix-core/pom.xml
+++ b/phoenix-core/pom.xml
@@ -4,7 +4,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-1.1</version>
+    <version>4.8.2-HBase-1.1</version>
   </parent>
   <artifactId>phoenix-core</artifactId>
   <name>Phoenix Core</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/5f6929ed/phoenix-flume/pom.xml
--
diff --git a/phoenix-flume/pom.xml b/phoenix-flume/pom.xml
index cb41426..f42ec51 100644
--- a/phoenix-flume/pom.xml
+++ b/phoenix-flume/pom.xml
@@ -26,7 +26,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-1.1</version>
+    <version>4.8.2-HBase-1.1</version>
   </parent>
   <artifactId>phoenix-flume</artifactId>
   <name>Phoenix - Flume</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/5f6929ed/phoenix-hive/pom.xml
--
diff --git a/phoenix-hive/pom.xml b/phoenix-hive/pom.xml
index 8c8ea48..7a9be95 100644
--- a/phoenix-hive/pom.xml
+++ b/phoenix-hive/pom.xml
@@ -27,7 +27,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-1.1</version>
+    <version>4.8.2-HBase-1.1</version>
   </parent>
   <artifactId>phoenix-hive</artifactId>
   <name>Phoenix - Hive</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/5f6929ed/phoenix-pherf/pom.xml
--
diff --git a/phoenix-pherf/pom.xml b/phoenix-pherf/pom.xml
index 68db78b..89182f4 100644
--- a/phoenix-pherf/pom.xml
+++ b/phoenix-pherf/pom.xml
@@ -15,7 +15,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-1.1</version>
+    <version>4.8.2-HBase-1.1</version>
   </parent>
 
   <artifactId>phoenix-pherf</artifactId>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/5f6929ed/phoenix-pig/pom.xml
--
diff --git a/phoenix-pig/pom.xml b/phoenix-pig/pom.xml
index 53526d3..e4dd929 100644
--- a/phoenix-pig/pom.xml
+++ b/phoenix-pig/pom.xml
@@ -26,7 +26,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-1.1</version>
+    <version>4.8.2-HBase-1.1</version>
   </parent>
   <artifactId>phoenix-pig</artifactId>
   <name>Phoenix - Pig</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/5f6929ed/phoenix-queryserver-client/pom.xml
--
diff --git a/phoenix-queryserver-client/pom.xml b/phoenix-queryserver-client/pom.xml
index c53a93f..039d469 100644
--- a/phoenix-queryserver-client/pom.xml
+++ b/phoenix-queryserver-client/pom.xml
@@ -27,7 +27,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-1.1</version>
+    <version>4.8.2-HBase-1.1</version>
   </parent>
   <artifactId>phoenix-queryserver-client</artifactId>
   <name>Phoenix Query Se

[phoenix] Git Push Summary

2016-11-04 Thread larsh
Repository: phoenix
Updated Tags:  refs/tags/v4.8.1-HBase-1.1 [created] 83b108188
  refs/tags/v4.8.2-HBase-1.1-rc0 [created] 5f6929edc


[phoenix] Git Push Summary

2016-11-04 Thread larsh
Repository: phoenix
Updated Tags:  refs/tags/v4.8.1-HBase-1.1 83b108188 -> 3fc8f617b


[phoenix] Git Push Summary

2016-11-04 Thread larsh
Repository: phoenix
Updated Tags:  refs/tags/v4.8.1-HBase-1.0 0156b018a -> acdd0b8e7


[phoenix] Git Push Summary

2016-11-04 Thread larsh
Repository: phoenix
Updated Tags:  refs/tags/v4.8.1-HBase-0.98 9fd36c759 -> aeadd2384


phoenix git commit: Prepare 4.8.2

2016-11-04 Thread larsh
Repository: phoenix
Updated Branches:
  refs/heads/4.8-HBase-1.2 30e6673d9 -> 192de60cf


Prepare 4.8.2


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/192de60c
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/192de60c
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/192de60c

Branch: refs/heads/4.8-HBase-1.2
Commit: 192de60cfbcb8ca8074ee05906086a685798c76d
Parents: 30e6673
Author: Lars Hofhansl 
Authored: Fri Nov 4 10:37:07 2016 -0700
Committer: Lars Hofhansl 
Committed: Fri Nov 4 10:37:07 2016 -0700

--
 phoenix-assembly/pom.xml   | 2 +-
 phoenix-client/pom.xml | 2 +-
 phoenix-core/pom.xml   | 2 +-
 phoenix-flume/pom.xml  | 2 +-
 phoenix-hive/pom.xml   | 2 +-
 phoenix-pherf/pom.xml  | 2 +-
 phoenix-pig/pom.xml| 2 +-
 phoenix-queryserver-client/pom.xml | 2 +-
 phoenix-queryserver/pom.xml| 2 +-
 phoenix-server/pom.xml | 2 +-
 phoenix-spark/pom.xml  | 2 +-
 phoenix-tracing-webapp/pom.xml | 2 +-
 pom.xml| 2 +-
 13 files changed, 13 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/192de60c/phoenix-assembly/pom.xml
--
diff --git a/phoenix-assembly/pom.xml b/phoenix-assembly/pom.xml
index df4b9fc..ede80c9 100644
--- a/phoenix-assembly/pom.xml
+++ b/phoenix-assembly/pom.xml
@@ -27,7 +27,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-1.2</version>
+    <version>4.8.2-HBase-1.2</version>
   </parent>
   <artifactId>phoenix-assembly</artifactId>
   <name>Phoenix Assembly</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/192de60c/phoenix-client/pom.xml
--
diff --git a/phoenix-client/pom.xml b/phoenix-client/pom.xml
index 54af6b9..ad75a07 100644
--- a/phoenix-client/pom.xml
+++ b/phoenix-client/pom.xml
@@ -27,7 +27,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-1.2</version>
+    <version>4.8.2-HBase-1.2</version>
   </parent>
   <artifactId>phoenix-client</artifactId>
   <name>Phoenix Client</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/192de60c/phoenix-core/pom.xml
--
diff --git a/phoenix-core/pom.xml b/phoenix-core/pom.xml
index 542a263..cfb01fb 100644
--- a/phoenix-core/pom.xml
+++ b/phoenix-core/pom.xml
@@ -4,7 +4,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-1.2</version>
+    <version>4.8.2-HBase-1.2</version>
   </parent>
   <artifactId>phoenix-core</artifactId>
   <name>Phoenix Core</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/192de60c/phoenix-flume/pom.xml
--
diff --git a/phoenix-flume/pom.xml b/phoenix-flume/pom.xml
index 71bbfc8..8eaf3d6 100644
--- a/phoenix-flume/pom.xml
+++ b/phoenix-flume/pom.xml
@@ -26,7 +26,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-1.2</version>
+    <version>4.8.2-HBase-1.2</version>
   </parent>
   <artifactId>phoenix-flume</artifactId>
   <name>Phoenix - Flume</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/192de60c/phoenix-hive/pom.xml
--
diff --git a/phoenix-hive/pom.xml b/phoenix-hive/pom.xml
index cd3f40d..ba7e230 100644
--- a/phoenix-hive/pom.xml
+++ b/phoenix-hive/pom.xml
@@ -27,7 +27,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-1.2</version>
+    <version>4.8.2-HBase-1.2</version>
   </parent>
   <artifactId>phoenix-hive</artifactId>
   <name>Phoenix - Hive</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/192de60c/phoenix-pherf/pom.xml
--
diff --git a/phoenix-pherf/pom.xml b/phoenix-pherf/pom.xml
index e548fbd..0291c0b 100644
--- a/phoenix-pherf/pom.xml
+++ b/phoenix-pherf/pom.xml
@@ -15,7 +15,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-1.2</version>
+    <version>4.8.2-HBase-1.2</version>
   </parent>
 
   <artifactId>phoenix-pherf</artifactId>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/192de60c/phoenix-pig/pom.xml
--
diff --git a/phoenix-pig/pom.xml b/phoenix-pig/pom.xml
index c1c608a..067d8fc 100644
--- a/phoenix-pig/pom.xml
+++ b/phoenix-pig/pom.xml
@@ -26,7 +26,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-1.2</version>
+    <version>4.8.2-HBase-1.2</version>
   </parent>
   <artifactId>phoenix-pig</artifactId>
   <name>Phoenix - Pig</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/192de60c/phoenix-queryserver-client/pom.xml
--
diff --git a/phoenix-queryserver-client/pom.xml b/phoenix-queryserver-client/pom.xml
index 4a46b41..c89069a 100644
--- a/phoenix-queryserver-client/pom.xml
+++ b/phoenix-queryserver-client/pom.xml
@@ -27,7 +27,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.8.1-HBase-1.2</version>
+    <version>4.8.2-HBase-1.2</version>
   </parent>
   <artifactId>phoenix-queryserver-client</artifactId>
   <name>Phoenix Query Se

[phoenix] Git Push Summary

2016-11-04 Thread larsh
Repository: phoenix
Updated Tags:  refs/tags/v4.8.1-HBase-1.2 [created] f1f7a1f7d
  refs/tags/v4.8.2-HBase-1.2-rc0 [created] 192de60cf


Build failed in Jenkins: Phoenix-4.8-HBase-1.1 #42

2016-11-04 Thread Apache Jenkins Server
See 

Changes:

[jamestaylor] PHOENIX-3439 Query using an RVC based on the base table PK is

[jamestaylor] PHOENIX-3199 ServerCacheClient sends cache to all regions 
unnecessarily

--
[...truncated 545 lines...]
Running org.apache.phoenix.end2end.HashJoinLocalIndexIT
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.168 sec - 
in org.apache.phoenix.end2end.GroupByCaseIT
Running org.apache.phoenix.end2end.HashJoinMoreIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.665 sec - in 
org.apache.phoenix.end2end.HashJoinLocalIndexIT
Running org.apache.phoenix.end2end.InListIT
Tests run: 47, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 135.211 sec - 
in org.apache.phoenix.end2end.DateTimeIT
Running org.apache.phoenix.end2end.InMemoryOrderByIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.335 sec - in 
org.apache.phoenix.end2end.InMemoryOrderByIT
Running org.apache.phoenix.end2end.InstrFunctionIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.013 sec - in 
org.apache.phoenix.end2end.InstrFunctionIT
Running org.apache.phoenix.end2end.IsNullIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.004 sec - in 
org.apache.phoenix.end2end.IsNullIT
Running org.apache.phoenix.end2end.LastValueFunctionIT
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.39 sec - in 
org.apache.phoenix.end2end.LastValueFunctionIT
Running org.apache.phoenix.end2end.LnLogFunctionEnd2EndIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.79 sec - in 
org.apache.phoenix.end2end.LnLogFunctionEnd2EndIT
Running org.apache.phoenix.end2end.MapReduceIT
Tests run: 99, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 139.496 sec - 
in org.apache.phoenix.end2end.HashJoinIT
Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 96.371 sec - 
in org.apache.phoenix.end2end.InListIT
Running org.apache.phoenix.end2end.MappingTableDataTypeIT
Running org.apache.phoenix.end2end.ModulusExpressionIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.808 sec - in 
org.apache.phoenix.end2end.MapReduceIT
Running org.apache.phoenix.end2end.NamespaceSchemaMappingIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 120.379 sec - 
in org.apache.phoenix.end2end.HashJoinMoreIT
Running org.apache.phoenix.end2end.OrderByIT
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.105 sec - in 
org.apache.phoenix.end2end.ModulusExpressionIT
Running org.apache.phoenix.end2end.PercentileIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.841 sec - in 
org.apache.phoenix.end2end.MappingTableDataTypeIT
Running org.apache.phoenix.end2end.PhoenixRuntimeIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.815 sec - in 
org.apache.phoenix.end2end.NamespaceSchemaMappingIT
Running org.apache.phoenix.end2end.QueryExecWithoutSCNIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.957 sec - in 
org.apache.phoenix.end2end.QueryExecWithoutSCNIT
Running org.apache.phoenix.end2end.RegexpReplaceFunctionIT
Tests run: 17, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.284 sec - 
in org.apache.phoenix.end2end.PercentileIT
Running org.apache.phoenix.end2end.RegexpSubstrFunctionIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.117 sec - in 
org.apache.phoenix.end2end.RegexpReplaceFunctionIT
Running org.apache.phoenix.end2end.ReverseScanIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.161 sec - in 
org.apache.phoenix.end2end.RegexpSubstrFunctionIT
Running org.apache.phoenix.end2end.SkipScanAfterManualSplitIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.944 sec - in 
org.apache.phoenix.end2end.OrderByIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.485 sec - in 
org.apache.phoenix.end2end.ReverseScanIT
Running org.apache.phoenix.end2end.SkipScanQueryIT
Running org.apache.phoenix.end2end.SortMergeJoinIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.36 sec - in 
org.apache.phoenix.end2end.PhoenixRuntimeIT
Running org.apache.phoenix.end2end.SortMergeJoinMoreIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.105 sec - in 
org.apache.phoenix.end2end.SkipScanAfterManualSplitIT
Running org.apache.phoenix.end2end.SpooledOrderByIT
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.213 sec - 
in org.apache.phoenix.end2end.SkipScanQueryIT
Running org.apache.phoenix.end2end.SpooledSortMergeJoinIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.216 sec - in 
org.apache.phoenix.end2end.SpooledOrderByIT
Running org.apache.phoenix.end2end.SpooledTmpFileDeleteIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.294 sec - in 
org.apache.phoenix.end2end.SortMergeJoinMoreIT
Running org.apache.phoenix.end2end.S

Apache Phoenix - Timeout crawler - Build https://builds.apache.org/job/Phoenix-4.8-HBase-1.1/42/

2016-11-04 Thread Apache Jenkins Server
[...truncated 22 lines...]
Looking at the log, list of test(s) that timed-out:

Build:
https://builds.apache.org/job/Phoenix-4.8-HBase-1.1/42/


Affected test class(es):
Set(['org.apache.phoenix.end2end.IndexToolIT', 
'org.apache.phoenix.end2end.MutableIndexToolIT', 
'org.apache.phoenix.end2end.AlterTableIT', 
'org.apache.phoenix.end2end.ParallelIteratorsIT', 
'org.apache.phoenix.end2end.AlterTableWithViewsIT', 
'org.apache.phoenix.end2end.CsvBulkLoadToolIT', 
'org.apache.phoenix.end2end.index.IndexIT'])


Build step 'Execute shell' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


Build failed in Jenkins: Phoenix-4.8-HBase-1.0 #39

2016-11-04 Thread Apache Jenkins Server
See 

Changes:

[jamestaylor] PHOENIX-3439 Query using an RVC based on the base table PK is

[jamestaylor] PHOENIX-3199 ServerCacheClient sends cache to all regions 
unnecessarily

--
[...truncated 845 lines...]


Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 92.426 sec - 
in org.apache.phoenix.tx.TxCheckpointIT
Tests run: 21, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 150.473 sec - 
in org.apache.phoenix.tx.TransactionIT
Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 591.135 sec - 
in org.apache.phoenix.end2end.index.MutableIndexIT
Tests run: 136, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 873.86 sec - 
in org.apache.phoenix.end2end.index.IndexIT

Results :

Tests in error: 
  LocalIndexIT.testLocalIndexRoundTrip:155 » PhoenixIO 
org.apache.phoenix.except...
  LocalIndexIT.testLocalIndexRoundTrip:155 » PhoenixIO 
org.apache.phoenix.except...

Tests run: 1234, Failures: 0, Errors: 2, Skipped: 5

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test 
(HBaseManagedTimeTableReuseTest) @ phoenix-core ---

---
 T E S T S
---
Running org.apache.phoenix.end2end.ArrayToStringFunctionIT
Running org.apache.phoenix.end2end.ArithmeticQueryIT
Running org.apache.phoenix.end2end.ArrayFillFunctionIT
Running org.apache.phoenix.end2end.AbsFunctionEnd2EndIT
Running org.apache.phoenix.end2end.AlterSessionIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.121 sec - in 
org.apache.phoenix.end2end.AlterSessionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.041 sec - in 
org.apache.phoenix.end2end.AbsFunctionEnd2EndIT
Running org.apache.phoenix.end2end.ArraysWithNullsIT
Running org.apache.phoenix.end2end.AutoCommitIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.572 sec - in 
org.apache.phoenix.end2end.AutoCommitIT
Running org.apache.phoenix.end2end.CbrtFunctionEnd2EndIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.44 sec - in 
org.apache.phoenix.end2end.CbrtFunctionEnd2EndIT
Running org.apache.phoenix.end2end.ConvertTimezoneFunctionIT
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.798 sec - in 
org.apache.phoenix.end2end.ArrayFillFunctionIT
Running org.apache.phoenix.end2end.DecodeFunctionIT
Tests run: 36, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.714 sec - 
in org.apache.phoenix.end2end.ArrayToStringFunctionIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.885 sec - in 
org.apache.phoenix.end2end.DecodeFunctionIT
Running org.apache.phoenix.end2end.DynamicFamilyIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.254 sec - in 
org.apache.phoenix.end2end.ConvertTimezoneFunctionIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.199 sec - in 
org.apache.phoenix.end2end.DynamicFamilyIT
Running org.apache.phoenix.end2end.FirstValueFunctionIT
Running org.apache.phoenix.end2end.DynamicUpsertIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.22 sec - in 
org.apache.phoenix.end2end.DynamicUpsertIT
Running org.apache.phoenix.end2end.GetSetByteBitFunctionEnd2EndIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.148 sec - in 
org.apache.phoenix.end2end.GetSetByteBitFunctionEnd2EndIT
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.48 sec - in 
org.apache.phoenix.end2end.ArraysWithNullsIT
Running org.apache.phoenix.end2end.MD5FunctionIT
Running org.apache.phoenix.end2end.LikeExpressionIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.008 sec - in 
org.apache.phoenix.end2end.FirstValueFunctionIT
Running org.apache.phoenix.end2end.MinMaxAggregateFunctionIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.251 sec - in 
org.apache.phoenix.end2end.MD5FunctionIT
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 33.576 sec - 
in org.apache.phoenix.end2end.ArithmeticQueryIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.542 sec - in 
org.apache.phoenix.end2end.MinMaxAggregateFunctionIT
Running org.apache.phoenix.end2end.NthValueFunctionIT
Running org.apache.phoenix.end2end.DistinctPrefixFilterIT
Running org.apache.phoenix.end2end.OctetLengthFunctionEnd2EndIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.333 sec - in 
org.apache.phoenix.end2end.LikeExpressionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.491 sec - in 
org.apache.phoenix.end2end.OctetLengthFunctionEnd2EndIT
Running org.apache.phoenix.end2end.PrimitiveTypeIT
Running org.apache.phoenix.end2end.QueryMoreIT
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.226 sec - in 
org.apache.phoenix.end2end.PrimitiveTypeIT
Running org.apache.phoenix.end2end.RTrimFunctionIT
Running org.apache.p

Apache Phoenix - Timeout crawler - Build https://builds.apache.org/job/Phoenix-4.8-HBase-1.0/39/

2016-11-04 Thread Apache Jenkins Server
[...truncated 23 lines...]
Looking at the log, list of test(s) that timed-out:

Build:
https://builds.apache.org/job/Phoenix-4.8-HBase-1.0/39/


Affected test class(es):
Set(['org.apache.phoenix.end2end.SortOrderIT'])


Build step 'Execute shell' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


Build failed in Jenkins: Phoenix-4.8-HBase-0.98 #41

2016-11-04 Thread Apache Jenkins Server
See 

Changes:

[jamestaylor] PHOENIX-3439 Query using an RVC based on the base table PK is

[jamestaylor] PHOENIX-3199 ServerCacheClient sends cache to all regions 
unnecessarily

--
[...truncated 779 lines...]
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 107.601 sec - 
in org.apache.phoenix.tx.TxCheckpointIT
Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 602.966 sec - 
in org.apache.phoenix.end2end.index.MutableIndexIT
Tests run: 136, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 945.742 sec - 
in org.apache.phoenix.end2end.index.IndexIT

Results :

Tests in error: 
  LocalIndexIT.testLocalIndexRoundTrip:156 » PhoenixIO 
org.apache.phoenix.except...

Tests run: 1234, Failures: 0, Errors: 1, Skipped: 5

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test 
(HBaseManagedTimeTableReuseTest) @ phoenix-core ---

---
 T E S T S
---
Running org.apache.phoenix.end2end.ArithmeticQueryIT
Running org.apache.phoenix.end2end.ArraysWithNullsIT
Running org.apache.phoenix.end2end.AbsFunctionEnd2EndIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.802 sec - in 
org.apache.phoenix.end2end.AbsFunctionEnd2EndIT
Running org.apache.phoenix.end2end.ArrayFillFunctionIT
Running org.apache.phoenix.end2end.AutoCommitIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.876 sec - in 
org.apache.phoenix.end2end.AutoCommitIT
Running org.apache.phoenix.end2end.ArrayToStringFunctionIT
Running org.apache.phoenix.end2end.CbrtFunctionEnd2EndIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.726 sec - in 
org.apache.phoenix.end2end.CbrtFunctionEnd2EndIT
Running org.apache.phoenix.end2end.ConvertTimezoneFunctionIT
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.375 sec - 
in org.apache.phoenix.end2end.ArrayFillFunctionIT
Running org.apache.phoenix.end2end.DecodeFunctionIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.725 sec - in 
org.apache.phoenix.end2end.ConvertTimezoneFunctionIT
Tests run: 36, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.774 sec - 
in org.apache.phoenix.end2end.ArrayToStringFunctionIT
Running org.apache.phoenix.end2end.DynamicFamilyIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.835 sec - in 
org.apache.phoenix.end2end.DecodeFunctionIT
Running org.apache.phoenix.end2end.DynamicUpsertIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.429 sec - in 
org.apache.phoenix.end2end.DynamicUpsertIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.464 sec - in 
org.apache.phoenix.end2end.DynamicFamilyIT
Running org.apache.phoenix.end2end.FirstValueFunctionIT
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.629 sec - 
in org.apache.phoenix.end2end.ArraysWithNullsIT
Running org.apache.phoenix.end2end.GetSetByteBitFunctionEnd2EndIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.222 sec - in 
org.apache.phoenix.end2end.GetSetByteBitFunctionEnd2EndIT
Running org.apache.phoenix.end2end.MD5FunctionIT
Running org.apache.phoenix.end2end.LikeExpressionIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.29 sec - in 
org.apache.phoenix.end2end.MD5FunctionIT
Running org.apache.phoenix.end2end.MinMaxAggregateFunctionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.597 sec - in 
org.apache.phoenix.end2end.MinMaxAggregateFunctionIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.488 sec - in 
org.apache.phoenix.end2end.FirstValueFunctionIT
Running org.apache.phoenix.end2end.OctetLengthFunctionEnd2EndIT
Running org.apache.phoenix.end2end.NthValueFunctionIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.172 sec - in 
org.apache.phoenix.end2end.LikeExpressionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.936 sec - in 
org.apache.phoenix.end2end.OctetLengthFunctionEnd2EndIT
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.303 sec - 
in org.apache.phoenix.end2end.ArithmeticQueryIT
Running org.apache.phoenix.end2end.QueryMoreIT
Running org.apache.phoenix.end2end.PrimitiveTypeIT
Running org.apache.phoenix.end2end.PowerFunctionEnd2EndIT
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.437 sec - in 
org.apache.phoenix.end2end.PrimitiveTypeIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.886 sec - in 
org.apache.phoenix.end2end.PowerFunctionEnd2EndIT
Running org.apache.phoenix.end2end.RTrimFunctionIT
Running org.apache.phoenix.end2end.ReadOnlyIT
Running org.apache.phoenix.end2end.DistinctPrefixFilterIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.146 sec - in 
org.apache.phoenix.end2end.RTrimFunctionIT
Running org.apache.phoenix.end2end.R

Build failed in Jenkins: Phoenix-4.8-HBase-1.0 #40

2016-11-04 Thread Apache Jenkins Server
See 

Changes:

[larsh] Prepare 4.8.2

--
[...truncated 1039 lines...]
at 
org.apache.phoenix.end2end.index.MutableIndexFailureIT$FailingRegionObserver.preBatchMutate(MutableIndexFailureIT.java:400)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:1022)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1706)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1781)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1738)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:1018)
at 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2759)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2534)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2488)
at 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2492)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:662)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:626)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:1874)
at 
org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:128)
at 
org.apache.hadoop.hbase.client.MultiServerCallable.call(MultiServerCallable.java:53)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at 
org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl$SingleServerRequestRunnable.run(AsyncProcess.java:701)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
: 2 times, 
at 
org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:227)
at 
org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1700(AsyncProcess.java:207)
at 
org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.getErrors(AsyncProcess.java:1568)
at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:960)
at org.apache.hadoop.hbase.client.HTable.batch(HTable.java:974)
at 
org.apache.hadoop.hbase.client.HTableWrapper.batch(HTableWrapper.java:255)
at 
org.apache.phoenix.execute.DelegateHTable.batch(DelegateHTable.java:94)
at 
org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:169)
at 
org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter$1.call(ParallelWriterIndexCommitter.java:134)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
... 1 more
: 1 time, 
at 
org.apache.phoenix.end2end.index.MutableIndexFailureIT.helpTestWriteFailureDisablesIndex(MutableIndexFailureIT.java:225)
at 
org.apache.phoenix.end2end.index.MutableIndexFailureIT.testWriteFailureDisablesIndex(MutableIndexFailureIT.java:127)

testWriteFailureDisablesIndex[transactional = false, localIndex = true, 
isNamespaceMapped = 
true](org.apache.phoenix.end2end.index.MutableIndexFailureIT)  Time elapsed: 
12.934 sec  <<< ERROR!
org.apache.phoenix.exception.PhoenixIOException: 
org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException
at 
org.apache.phoenix.end2end.index.MutableIndexFailureIT$FailingRegionObserver.preBatchMutate(MutableIndexFailureIT.java:400)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:1022)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1706)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1781)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1738)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:1018)
at 
org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2759)
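
The DoNotRetryIOException above is injected deliberately: MutableIndexFailureIT installs its own FailingRegionObserver coprocessor, whose preBatchMutate hook fails the write so the test can verify that a failed index write disables the index. A minimal sketch of that fault-injection pattern, assuming the HBase 1.x coprocessor API (the class and message below are illustrative, not the verbatim test code):

    import java.io.IOException;
    import org.apache.hadoop.hbase.DoNotRetryIOException;
    import org.apache.hadoop.hbase.client.Mutation;
    import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
    import org.apache.hadoop.hbase.coprocessor.ObserverContext;
    import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
    import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;

    // Sketch of a fault-injecting RegionObserver: every batch mutation on the
    // region this observer is attached to fails before it is applied. The real
    // test is more selective and only fails writes to particular index tables.
    public class FailingRegionObserverSketch extends BaseRegionObserver {
        @Override
        public void preBatchMutate(ObserverContext<RegionCoprocessorEnvironment> c,
                                   MiniBatchOperationInProgress<Mutation> miniBatchOp)
                throws IOException {
            throw new DoNotRetryIOException("injected index write failure");
        }
    }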

Build failed in Jenkins: Phoenix-4.8-HBase-0.98 #42

2016-11-04 Thread Apache Jenkins Server
See 

Changes:

[larsh] Prepare 4.8.2

--
[...truncated 852 lines...]
Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 606.89 sec - 
in org.apache.phoenix.end2end.index.MutableIndexIT
Tests run: 136, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 955.601 sec - 
in org.apache.phoenix.end2end.index.IndexIT

Results :

Tests in error: 
  LocalIndexIT.testLocalIndexRoundTrip:156 » PhoenixIO 
org.apache.phoenix.except...
  LocalIndexIT.testLocalIndexRoundTrip:156 » PhoenixIO 
org.apache.phoenix.except...

Tests run: 1234, Failures: 0, Errors: 2, Skipped: 5

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test 
(HBaseManagedTimeTableReuseTest) @ phoenix-core ---

---
 T E S T S
---
Running org.apache.phoenix.end2end.ArithmeticQueryIT
Running org.apache.phoenix.end2end.AbsFunctionEnd2EndIT
Running org.apache.phoenix.end2end.ArraysWithNullsIT
Running org.apache.phoenix.end2end.ArrayToStringFunctionIT
Running org.apache.phoenix.end2end.ArrayFillFunctionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.51 sec - in 
org.apache.phoenix.end2end.AbsFunctionEnd2EndIT
Running org.apache.phoenix.end2end.AutoCommitIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.382 sec - in 
org.apache.phoenix.end2end.AutoCommitIT
Running org.apache.phoenix.end2end.CbrtFunctionEnd2EndIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.652 sec - in 
org.apache.phoenix.end2end.CbrtFunctionEnd2EndIT
Running org.apache.phoenix.end2end.ConvertTimezoneFunctionIT
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.377 sec - 
in org.apache.phoenix.end2end.ArrayFillFunctionIT
Running org.apache.phoenix.end2end.DecodeFunctionIT
Tests run: 36, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.509 sec - 
in org.apache.phoenix.end2end.ArrayToStringFunctionIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.973 sec - in 
org.apache.phoenix.end2end.ConvertTimezoneFunctionIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.251 sec - in 
org.apache.phoenix.end2end.DecodeFunctionIT
Running org.apache.phoenix.end2end.DynamicFamilyIT
Running org.apache.phoenix.end2end.DynamicUpsertIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.422 sec - in 
org.apache.phoenix.end2end.DynamicUpsertIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.293 sec - in 
org.apache.phoenix.end2end.DynamicFamilyIT
Running org.apache.phoenix.end2end.FirstValueFunctionIT
Running org.apache.phoenix.end2end.GetSetByteBitFunctionEnd2EndIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.32 sec - in 
org.apache.phoenix.end2end.GetSetByteBitFunctionEnd2EndIT
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.765 sec - 
in org.apache.phoenix.end2end.ArraysWithNullsIT
Running org.apache.phoenix.end2end.MD5FunctionIT
Running org.apache.phoenix.end2end.LikeExpressionIT
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 36.957 sec - 
in org.apache.phoenix.end2end.ArithmeticQueryIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.932 sec - in 
org.apache.phoenix.end2end.MD5FunctionIT
Running org.apache.phoenix.end2end.MinMaxAggregateFunctionIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 10.057 sec - in 
org.apache.phoenix.end2end.FirstValueFunctionIT
Running org.apache.phoenix.end2end.NthValueFunctionIT
Running org.apache.phoenix.end2end.OctetLengthFunctionEnd2EndIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.543 sec - in 
org.apache.phoenix.end2end.MinMaxAggregateFunctionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.412 sec - in 
org.apache.phoenix.end2end.OctetLengthFunctionEnd2EndIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.562 sec - in 
org.apache.phoenix.end2end.LikeExpressionIT
Running org.apache.phoenix.end2end.QueryMoreIT
Running org.apache.phoenix.end2end.PrimitiveTypeIT
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.363 sec - in 
org.apache.phoenix.end2end.PrimitiveTypeIT
Running org.apache.phoenix.end2end.PowerFunctionEnd2EndIT
Running org.apache.phoenix.end2end.RTrimFunctionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.761 sec - in 
org.apache.phoenix.end2end.PowerFunctionEnd2EndIT
Running org.apache.phoenix.end2end.ReadOnlyIT
Running org.apache.phoenix.end2end.DistinctPrefixFilterIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.081 sec - in 
org.apache.phoenix.end2end.RTrimFunctionIT
Running org.apache.phoenix.end2end.RegexpSplitFunctionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.307 sec - in 
org.apache.phoenix.end2end.ReadOnlyIT
Running org.apache.phoenix.end2end.Reve

Build failed in Jenkins: Phoenix | Master #1473

2016-11-04 Thread Apache Jenkins Server
See 

Changes:

[jamestaylor] PHOENIX-3199 ServerCacheClient sends cache to all regions 
unnecessarily

--
[...truncated 823753 lines...]
2016-11-04 19:44:13,617 DEBUG [tx-state-refresh] 
org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec 
org.apache.tephra.snapshot.DefaultSnapshotCodec for snapshots of version 1
2016-11-04 19:44:13,617 DEBUG [tx-state-refresh] 
org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec 
org.apache.tephra.snapshot.SnapshotCodecV2 for snapshots of version 2
2016-11-04 19:44:13,617 DEBUG [tx-state-refresh] 
org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec 
org.apache.tephra.snapshot.SnapshotCodecV3 for snapshots of version 3
2016-11-04 19:44:13,617 DEBUG [tx-state-refresh] 
org.apache.tephra.snapshot.SnapshotCodecProvider(80): Using snapshot codec 
org.apache.tephra.snapshot.SnapshotCodecV4 for snapshots of version 4
2016-11-04 19:44:13,620 INFO  [tx-state-refresh] 
org.apache.tephra.coprocessor.TransactionStateCache(103): Failed to initialize 
TransactionStateCache due to: java.lang.IllegalStateException: Snapshot 
directory is not configured.  Please set data.tx.snapshot.dir in configuration.
2016-11-04 19:44:13,620 ERROR [HDFSTransactionStateStorage STARTING] 
org.apache.zookeeper.server.NIOServerCnxnFactory$1(44): Thread 
Thread[HDFSTransactionStateStorage STARTING,5,main] died
java.lang.IllegalStateException: Snapshot directory is not configured.  Please 
set data.tx.snapshot.dir in configuration.
at 
com.google.common.base.Preconditions.checkState(Preconditions.java:149)
at 
org.apache.tephra.persist.HDFSTransactionStateStorage.startUp(HDFSTransactionStateStorage.java:93)
at 
com.google.common.util.concurrent.AbstractIdleService$1$1.run(AbstractIdleService.java:43)
at java.lang.Thread.run(Thread.java:745)
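
The IllegalStateException above names the missing setting directly. A minimal sketch, assuming a test or mini-cluster setup that builds its HBase Configuration in code (the directory value is a hypothetical example):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class TephraSnapshotDirSetup {
        public static Configuration withSnapshotDir() {
            Configuration conf = HBaseConfiguration.create();
            // Property named by the error above; without it, Tephra's
            // HDFSTransactionStateStorage.startUp() fails its precondition.
            conf.set("data.tx.snapshot.dir", "/tmp/tephra/snapshots"); // hypothetical path
            return conf;
        }
    }
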
2016-11-04 19:44:14,816 DEBUG 
[org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d]
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* 
neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:16,101 DEBUG 
[org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58]
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* 
neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:17,817 DEBUG 
[org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d]
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* 
neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:18,143 INFO  [B.defaultRpcServer.handler=0,queue=0,port=52015] 
org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, 
waiting for 1  actions to finish
2016-11-04 19:44:18,150 INFO  [B.defaultRpcServer.handler=1,queue=0,port=52015] 
org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, 
waiting for 1  actions to finish
2016-11-04 19:44:18,261 INFO  [B.defaultRpcServer.handler=4,queue=0,port=52015] 
org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, 
waiting for 1  actions to finish
2016-11-04 19:44:18,419 INFO  [B.defaultRpcServer.handler=2,queue=0,port=52015] 
org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, 
waiting for 1  actions to finish
2016-11-04 19:44:18,424 INFO  [B.defaultRpcServer.handler=3,queue=0,port=52015] 
org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): #9, 
waiting for 1  actions to finish
2016-11-04 19:44:19,103 DEBUG 
[org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58]
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* 
neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:20,817 DEBUG 
[org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@11682d0d]
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* 
neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:22,109 DEBUG 
[org.apache.hadoop.hdfs.server.blockmanagement.BlockManager$ReplicationMonitor@7dae7a58]
 org.apache.hadoop.hdfs.server.blockmanagement.BlockManager(1500): BLOCK* 
neededReplications = 0 pendingReplications = 0
2016-11-04 19:44:22,709 INFO  [B.defaultRpcServer.handler=2,queue=0,port=50930] 
org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): 
#1009, waiting for 1  actions to finish
2016-11-04 19:44:22,861 INFO  [B.defaultRpcServer.handler=1,queue=0,port=50930] 
org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): 
#1009, waiting for 1  actions to finish
2016-11-04 19:44:22,922 INFO  [B.defaultRpcServer.handler=0,queue=0,port=50930] 
org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl(1635): 
#1009, waiting for 1  actions to finish
2016-11-04 19:44:23,261 

Build failed in Jenkins: Phoenix-4.8-HBase-1.1 #43

2016-11-04 Thread Apache Jenkins Server
See 

Changes:

[larsh] Prepare 4.8.2

--
[...truncated 785 lines...]
Tests run: 136, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2,004.775 sec 
- in org.apache.phoenix.end2end.index.IndexIT

Results :

Tests in error: 
  LocalIndexIT.testLocalIndexRoundTrip:155 » PhoenixIO 
org.apache.phoenix.except...

Tests run: 1234, Failures: 0, Errors: 1, Skipped: 5

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test 
(HBaseManagedTimeTableReuseTest) @ phoenix-core ---

---
 T E S T S
---
Running org.apache.phoenix.end2end.AlterSessionIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.126 sec - in 
org.apache.phoenix.end2end.AlterSessionIT
Running org.apache.phoenix.end2end.ArraysWithNullsIT
Running org.apache.phoenix.end2end.AbsFunctionEnd2EndIT
Running org.apache.phoenix.end2end.ArrayFillFunctionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.86 sec - in 
org.apache.phoenix.end2end.AbsFunctionEnd2EndIT
Running org.apache.phoenix.end2end.AutoCommitIT
Running org.apache.phoenix.end2end.ArithmeticQueryIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.485 sec - in 
org.apache.phoenix.end2end.AutoCommitIT
Running org.apache.phoenix.end2end.ArrayToStringFunctionIT
Running org.apache.phoenix.end2end.CbrtFunctionEnd2EndIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.455 sec - in 
org.apache.phoenix.end2end.CbrtFunctionEnd2EndIT
Running org.apache.phoenix.end2end.ConvertTimezoneFunctionIT
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.514 sec - 
in org.apache.phoenix.end2end.ArrayFillFunctionIT
Running org.apache.phoenix.end2end.DecodeFunctionIT
Tests run: 36, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.061 sec - 
in org.apache.phoenix.end2end.ArrayToStringFunctionIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.104 sec - in 
org.apache.phoenix.end2end.ConvertTimezoneFunctionIT
Running org.apache.phoenix.end2end.DynamicFamilyIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.463 sec - in 
org.apache.phoenix.end2end.DecodeFunctionIT
Running org.apache.phoenix.end2end.DynamicUpsertIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.163 sec - in 
org.apache.phoenix.end2end.DynamicUpsertIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.013 sec - in 
org.apache.phoenix.end2end.DynamicFamilyIT
Running org.apache.phoenix.end2end.FirstValueFunctionIT
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 65.404 sec - 
in org.apache.phoenix.end2end.ArraysWithNullsIT
Running org.apache.phoenix.end2end.GetSetByteBitFunctionEnd2EndIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.136 sec - in 
org.apache.phoenix.end2end.GetSetByteBitFunctionEnd2EndIT
Running org.apache.phoenix.end2end.MD5FunctionIT
Running org.apache.phoenix.end2end.LikeExpressionIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.266 sec - in 
org.apache.phoenix.end2end.MD5FunctionIT
Running org.apache.phoenix.end2end.MinMaxAggregateFunctionIT
Running org.apache.phoenix.end2end.DistinctPrefixFilterIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.512 sec - in 
org.apache.phoenix.end2end.MinMaxAggregateFunctionIT
Running org.apache.phoenix.end2end.NthValueFunctionIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.668 sec - in 
org.apache.phoenix.end2end.LikeExpressionIT
Running org.apache.phoenix.end2end.OctetLengthFunctionEnd2EndIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.301 sec - in 
org.apache.phoenix.end2end.FirstValueFunctionIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.543 sec - in 
org.apache.phoenix.end2end.OctetLengthFunctionEnd2EndIT
Running org.apache.phoenix.end2end.PrimitiveTypeIT
Running org.apache.phoenix.end2end.PowerFunctionEnd2EndIT
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.143 sec - in 
org.apache.phoenix.end2end.PrimitiveTypeIT
Running org.apache.phoenix.end2end.QueryMoreIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.624 sec - in 
org.apache.phoenix.end2end.PowerFunctionEnd2EndIT
Running org.apache.phoenix.end2end.RTrimFunctionIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.926 sec - in 
org.apache.phoenix.end2end.RTrimFunctionIT
Running org.apache.phoenix.end2end.ReadOnlyIT
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 24.837 sec - in 
org.apache.phoenix.end2end.DistinctPrefixFilterIT
Running org.apache.phoenix.end2end.RegexpSplitFunctionIT
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.501 sec - in 
org.apache.phoenix.end2end.RegexpSplitFunctionIT
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elap

Jenkins build is back to normal : Phoenix-4.8-HBase-1.2 #47

2016-11-04 Thread Apache Jenkins Server
See 



svn commit: r16843 - in /dev/phoenix: apache-phoenix-4.8.2-HBase-0.98/ apache-phoenix-4.8.2-HBase-0.98/bin/ apache-phoenix-4.8.2-HBase-0.98/src/ apache-phoenix-4.8.2-HBase-1.0/ apache-phoenix-4.8.2-HB

2016-11-04 Thread larsh
Author: larsh
Date: Fri Nov  4 20:41:00 2016
New Revision: 16843

Log:
RC0 for Phoenix 4.8.2

Added:
dev/phoenix/apache-phoenix-4.8.2-HBase-0.98/
dev/phoenix/apache-phoenix-4.8.2-HBase-0.98/bin/

dev/phoenix/apache-phoenix-4.8.2-HBase-0.98/bin/apache-phoenix-4.8.2-HBase-0.98-bin.tar.gz
   (with props)

dev/phoenix/apache-phoenix-4.8.2-HBase-0.98/bin/apache-phoenix-4.8.2-HBase-0.98-bin.tar.gz.asc

dev/phoenix/apache-phoenix-4.8.2-HBase-0.98/bin/apache-phoenix-4.8.2-HBase-0.98-bin.tar.gz.md5

dev/phoenix/apache-phoenix-4.8.2-HBase-0.98/bin/apache-phoenix-4.8.2-HBase-0.98-bin.tar.gz.sha
dev/phoenix/apache-phoenix-4.8.2-HBase-0.98/src/

dev/phoenix/apache-phoenix-4.8.2-HBase-0.98/src/apache-phoenix-4.8.2-HBase-0.98-src.tar.gz
   (with props)

dev/phoenix/apache-phoenix-4.8.2-HBase-0.98/src/apache-phoenix-4.8.2-HBase-0.98-src.tar.gz.asc

dev/phoenix/apache-phoenix-4.8.2-HBase-0.98/src/apache-phoenix-4.8.2-HBase-0.98-src.tar.gz.md5

dev/phoenix/apache-phoenix-4.8.2-HBase-0.98/src/apache-phoenix-4.8.2-HBase-0.98-src.tar.gz.sha
dev/phoenix/apache-phoenix-4.8.2-HBase-1.0/
dev/phoenix/apache-phoenix-4.8.2-HBase-1.0/bin/

dev/phoenix/apache-phoenix-4.8.2-HBase-1.0/bin/apache-phoenix-4.8.2-HBase-1.0-bin.tar.gz
   (with props)

dev/phoenix/apache-phoenix-4.8.2-HBase-1.0/bin/apache-phoenix-4.8.2-HBase-1.0-bin.tar.gz.asc

dev/phoenix/apache-phoenix-4.8.2-HBase-1.0/bin/apache-phoenix-4.8.2-HBase-1.0-bin.tar.gz.md5

dev/phoenix/apache-phoenix-4.8.2-HBase-1.0/bin/apache-phoenix-4.8.2-HBase-1.0-bin.tar.gz.sha
dev/phoenix/apache-phoenix-4.8.2-HBase-1.0/src/

dev/phoenix/apache-phoenix-4.8.2-HBase-1.0/src/apache-phoenix-4.8.2-HBase-1.0-src.tar.gz
   (with props)

dev/phoenix/apache-phoenix-4.8.2-HBase-1.0/src/apache-phoenix-4.8.2-HBase-1.0-src.tar.gz.asc

dev/phoenix/apache-phoenix-4.8.2-HBase-1.0/src/apache-phoenix-4.8.2-HBase-1.0-src.tar.gz.md5

dev/phoenix/apache-phoenix-4.8.2-HBase-1.0/src/apache-phoenix-4.8.2-HBase-1.0-src.tar.gz.sha
dev/phoenix/apache-phoenix-4.8.2-HBase-1.1/
dev/phoenix/apache-phoenix-4.8.2-HBase-1.1/bin/

dev/phoenix/apache-phoenix-4.8.2-HBase-1.1/bin/apache-phoenix-4.8.2-HBase-1.1-bin.tar.gz
   (with props)

dev/phoenix/apache-phoenix-4.8.2-HBase-1.1/bin/apache-phoenix-4.8.2-HBase-1.1-bin.tar.gz.asc

dev/phoenix/apache-phoenix-4.8.2-HBase-1.1/bin/apache-phoenix-4.8.2-HBase-1.1-bin.tar.gz.md5

dev/phoenix/apache-phoenix-4.8.2-HBase-1.1/bin/apache-phoenix-4.8.2-HBase-1.1-bin.tar.gz.sha
dev/phoenix/apache-phoenix-4.8.2-HBase-1.1/src/

dev/phoenix/apache-phoenix-4.8.2-HBase-1.1/src/apache-phoenix-4.8.2-HBase-1.1-src.tar.gz
   (with props)

dev/phoenix/apache-phoenix-4.8.2-HBase-1.1/src/apache-phoenix-4.8.2-HBase-1.1-src.tar.gz.asc

dev/phoenix/apache-phoenix-4.8.2-HBase-1.1/src/apache-phoenix-4.8.2-HBase-1.1-src.tar.gz.md5

dev/phoenix/apache-phoenix-4.8.2-HBase-1.1/src/apache-phoenix-4.8.2-HBase-1.1-src.tar.gz.sha
dev/phoenix/apache-phoenix-4.8.2-HBase-1.2/
dev/phoenix/apache-phoenix-4.8.2-HBase-1.2/bin/

dev/phoenix/apache-phoenix-4.8.2-HBase-1.2/bin/apache-phoenix-4.8.2-HBase-1.2-bin.tar.gz
   (with props)

dev/phoenix/apache-phoenix-4.8.2-HBase-1.2/bin/apache-phoenix-4.8.2-HBase-1.2-bin.tar.gz.asc

dev/phoenix/apache-phoenix-4.8.2-HBase-1.2/bin/apache-phoenix-4.8.2-HBase-1.2-bin.tar.gz.md5

dev/phoenix/apache-phoenix-4.8.2-HBase-1.2/bin/apache-phoenix-4.8.2-HBase-1.2-bin.tar.gz.sha
dev/phoenix/apache-phoenix-4.8.2-HBase-1.2/src/

dev/phoenix/apache-phoenix-4.8.2-HBase-1.2/src/apache-phoenix-4.8.2-HBase-1.2-src.tar.gz
   (with props)

dev/phoenix/apache-phoenix-4.8.2-HBase-1.2/src/apache-phoenix-4.8.2-HBase-1.2-src.tar.gz.asc

dev/phoenix/apache-phoenix-4.8.2-HBase-1.2/src/apache-phoenix-4.8.2-HBase-1.2-src.tar.gz.md5

dev/phoenix/apache-phoenix-4.8.2-HBase-1.2/src/apache-phoenix-4.8.2-HBase-1.2-src.tar.gz.sha

Added: 
dev/phoenix/apache-phoenix-4.8.2-HBase-0.98/bin/apache-phoenix-4.8.2-HBase-0.98-bin.tar.gz
==
Binary file - no diff available.

Propchange: 
dev/phoenix/apache-phoenix-4.8.2-HBase-0.98/bin/apache-phoenix-4.8.2-HBase-0.98-bin.tar.gz
--
svn:mime-type = application/octet-stream

Added: 
dev/phoenix/apache-phoenix-4.8.2-HBase-0.98/bin/apache-phoenix-4.8.2-HBase-0.98-bin.tar.gz.asc
==
--- 
dev/phoenix/apache-phoenix-4.8.2-HBase-0.98/bin/apache-phoenix-4.8.2-HBase-0.98-bin.tar.gz.asc
 (added)
+++ 
dev/phoenix/apache-phoenix-4.8.2-HBase-0.98/bin/apache-phoenix-4.8.2-HBase-0.98-bin.tar.gz.asc
 Fri Nov  4 20:41:00 2016
@@ -0,0 +1,17 @@
+-BEGIN PGP SIGNATURE-
+Version: GnuPG v1.4.11 (GNU/Linux)
+
+iQIcBAABCgAGBQJYHMl9AAoJENC+uMXHz+MoFmUQAJPz5XlmFTcWNM4OviU13tvq
+GnReCnflYaDWAI/Jm6XB06q509Z
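
Each artifact above is uploaded with .asc, .md5, and .sha companions for RC verification. A minimal sketch of checking one .sha digest in Java; which algorithm a ".sha" file holds is an assumption here (SHA-1 below), and checking the .asc signature would additionally require GnuPG and the release manager's public key:

    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.security.MessageDigest;

    public class RcDigestCheck {
        public static void main(String[] args) throws Exception {
            String tarball = "apache-phoenix-4.8.2-HBase-1.0-bin.tar.gz";
            MessageDigest md = MessageDigest.getInstance("SHA-1"); // assumption: .sha is SHA-1
            // Stream the tarball through the digest.
            try (InputStream in = Files.newInputStream(Paths.get(tarball))) {
                byte[] buf = new byte[8192];
                for (int n; (n = in.read(buf)) > 0; ) {
                    md.update(buf, 0, n);
                }
            }
            StringBuilder hex = new StringBuilder();
            for (byte b : md.digest()) {
                hex.append(String.format("%02x", b));
            }
            // The .sha companion is expected to contain the hex digest of the tarball.
            String expected = new String(
                    Files.readAllBytes(Paths.get(tarball + ".sha"))).trim().toLowerCase();
            System.out.println(expected.contains(hex.toString()) ? "OK" : "MISMATCH");
        }
    }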

svn commit: r16844 - in /dev/phoenix: apache-phoenix-4.8.2-HBase-0.98-rc0/ apache-phoenix-4.8.2-HBase-0.98/ apache-phoenix-4.8.2-HBase-1.0-rc0/ apache-phoenix-4.8.2-HBase-1.0/ apache-phoenix-4.8.2-HBa

2016-11-04 Thread larsh
Author: larsh
Date: Fri Nov  4 20:45:05 2016
New Revision: 16844

Log:
Fix 4.8.2RC0 directory names

Added:
dev/phoenix/apache-phoenix-4.8.2-HBase-0.98-rc0/
  - copied from r16843, dev/phoenix/apache-phoenix-4.8.2-HBase-0.98/
dev/phoenix/apache-phoenix-4.8.2-HBase-1.0-rc0/
  - copied from r16843, dev/phoenix/apache-phoenix-4.8.2-HBase-1.0/
dev/phoenix/apache-phoenix-4.8.2-HBase-1.1-rc0/
  - copied from r16843, dev/phoenix/apache-phoenix-4.8.2-HBase-1.1/
dev/phoenix/apache-phoenix-4.8.2-HBase-1.2-rc0/
  - copied from r16843, dev/phoenix/apache-phoenix-4.8.2-HBase-1.2/
Removed:
dev/phoenix/apache-phoenix-4.8.2-HBase-0.98/
dev/phoenix/apache-phoenix-4.8.2-HBase-1.0/
dev/phoenix/apache-phoenix-4.8.2-HBase-1.1/
dev/phoenix/apache-phoenix-4.8.2-HBase-1.2/



Build failed in Jenkins: Phoenix | 4.x-HBase-0.98 #1365

2016-11-04 Thread Apache Jenkins Server
See 

Changes:

[jamestaylor] PHOENIX-3199 ServerCacheClient sends cache to all regions 
unnecessarily

--
[...truncated 1134 lines...]
java.io.IOException: Timed out waiting for lock for row: \x00\x00TABLE6
at 
org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:3804)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3766)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3830)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.acquireLock(MetaDataEndpointImpl.java:1568)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1710)
at 
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
at 
org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6041)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3520)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3502)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31194)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)


testImportOneLocalIndexTable(org.apache.phoenix.end2end.CsvBulkLoadToolIT)  
Time elapsed: 2,407.29 sec  <<< ERROR!
org.apache.phoenix.exception.PhoenixIOException: callTimeout=120, 
callDuration=1202405: row '  TABLE5_IDX' on table 'SYSTEM.CATALOG' at 
region=SYSTEM.CATALOG,,1478280011578.07bc3beb57d1c0bcc41ce2b994ab., 
hostname=asf910.gq1.ygridcore.net,55185,147828634, seqNum=1
at 
org.apache.phoenix.end2end.CsvBulkLoadToolIT.testImportOneIndexTable(CsvBulkLoadToolIT.java:309)
at 
org.apache.phoenix.end2end.CsvBulkLoadToolIT.testImportOneLocalIndexTable(CsvBulkLoadToolIT.java:297)
Caused by: java.net.SocketTimeoutException: callTimeout=120, 
callDuration=1202405: row '  TABLE5_IDX' on table 'SYSTEM.CATALOG' at 
region=SYSTEM.CATALOG,,1478280011578.07bc3beb57d1c0bcc41ce2b994ab., 
hostname=asf910.gq1.ygridcore.net,55185,147828634, seqNum=1
Caused by: java.net.SocketTimeoutException: Call to 
asf910.gq1.ygridcore.net/67.195.81.154:55185 failed because 
java.net.SocketTimeoutException: 120 millis timeout while waiting for 
channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
local=/67.195.81.154:45997 remote=asf910.gq1.ygridcore.net/67.195.81.154:55185]
Caused by: java.net.SocketTimeoutException: 120 millis timeout while 
waiting for channel to be ready for read. ch : 
java.nio.channels.SocketChannel[connected local=/67.195.81.154:45997 
remote=asf910.gq1.ygridcore.net/67.195.81.154:55185]

testImportOneLocalIndexTable(org.apache.phoenix.end2end.CsvBulkLoadToolIT)  
Time elapsed: 2,407.29 sec  <<< ERROR!
org.apache.phoenix.exception.PhoenixIOException: callTimeout=120, 
callDuration=1222418: row '  TABLE5' on table 'SYSTEM.CATALOG' at 
region=SYSTEM.CATALOG,,1478280011578.07bc3beb57d1c0bcc41ce2b994ab., 
hostname=asf910.gq1.ygridcore.net,55185,147828634, seqNum=1
Caused by: java.net.SocketTimeoutException: callTimeout=120, 
callDuration=1222418: row '  TABLE5' on table 'SYSTEM.CATALOG' at 
region=SYSTEM.CATALOG,,1478280011578.07bc3beb57d1c0bcc41ce2b994ab., 
hostname=asf910.gq1.ygridcore.net,55185,147828634, seqNum=1
Caused by: java.io.IOException: 
java.io.IOException: Timed out waiting for lock for row: \x00\x00TABLE5
at 
org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:3804)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3766)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3830)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.acquireLock(MetaDataEndpointImpl.java:1568)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1710)
at 
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
at 
org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6041)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3520)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3502)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31194)

[28/50] [abbrv] phoenix git commit: PHOENIX-3387 Hive PhoenixStorageHandler fails with join on numeric fields

2016-11-04 Thread samarth
PHOENIX-3387 Hive PhoenixStorageHandler fails with join on numeric fields

Signed-off-by: Sergey Soldatov 
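
The patch below addresses the join failure by giving each numeric object inspector a getPrimitiveWritableObject override, since Hive's join operators compare values through their Writable form. The shape of the change, as a standalone sketch (the class name and null handling here are illustrative, not the patched code):

    import org.apache.hadoop.io.LongWritable;

    // Sketch of the pattern applied per inspector: expose the value
    // through the Writable type that Hive's operators request.
    public class PhoenixLongObjectInspectorSketch {

        public long get(Object o) {
            return o == null ? 0L : ((Long) o).longValue();
        }

        public LongWritable getPrimitiveWritableObject(Object o) {
            return o == null ? null : new LongWritable(get(o));
        }
    }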


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/3bb1a2b1
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/3bb1a2b1
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/3bb1a2b1

Branch: refs/heads/encodecolumns2
Commit: 3bb1a2b15ceee9d9b6c2f0e5fd66b8dcfb919d70
Parents: 1ed90b6
Author: Sergey Soldatov 
Authored: Thu Oct 20 23:42:39 2016 -0700
Committer: Sergey Soldatov 
Committed: Wed Nov 2 12:58:21 2016 -0700

--
 .../it/java/org/apache/phoenix/hive/HiveTestUtil.java|  1 +
 .../objectinspector/PhoenixBooleanObjectInspector.java   |  5 +
 .../hive/objectinspector/PhoenixByteObjectInspector.java |  5 +
 .../objectinspector/PhoenixDecimalObjectInspector.java   |  4 ++--
 .../objectinspector/PhoenixDoubleObjectInspector.java|  5 +
 .../objectinspector/PhoenixFloatObjectInspector.java |  5 +
 .../hive/objectinspector/PhoenixIntObjectInspector.java  | 11 +++
 .../hive/objectinspector/PhoenixLongObjectInspector.java |  5 +
 .../objectinspector/PhoenixShortObjectInspector.java |  7 ++-
 9 files changed, 45 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/3bb1a2b1/phoenix-hive/src/it/java/org/apache/phoenix/hive/HiveTestUtil.java
--
diff --git a/phoenix-hive/src/it/java/org/apache/phoenix/hive/HiveTestUtil.java 
b/phoenix-hive/src/it/java/org/apache/phoenix/hive/HiveTestUtil.java
index a234d24..3407ffb 100644
--- a/phoenix-hive/src/it/java/org/apache/phoenix/hive/HiveTestUtil.java
+++ b/phoenix-hive/src/it/java/org/apache/phoenix/hive/HiveTestUtil.java
@@ -567,6 +567,7 @@ public class HiveTestUtil {
 
 public void init() throws Exception {
 testWarehouse = conf.getVar(HiveConf.ConfVars.METASTOREWAREHOUSE);
+conf.setBoolVar(HiveConf.ConfVars.SUBMITLOCALTASKVIACHILD, false);
 String execEngine = conf.get("hive.execution.engine");
 conf.set("hive.execution.engine", "mr");
 SessionState.start(conf);

http://git-wip-us.apache.org/repos/asf/phoenix/blob/3bb1a2b1/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixBooleanObjectInspector.java
--
diff --git 
a/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixBooleanObjectInspector.java
 
b/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixBooleanObjectInspector.java
index 0795e14..a767ca0 100644
--- 
a/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixBooleanObjectInspector.java
+++ 
b/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixBooleanObjectInspector.java
@@ -34,6 +34,11 @@ public class PhoenixBooleanObjectInspector extends 
AbstractPhoenixObjectInspector
 }
 
 @Override
+public BooleanWritable getPrimitiveWritableObject(Object o) {
+return new BooleanWritable(get(o));
+}
+
+@Override
 public boolean get(Object o) {
 Boolean value = null;
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/3bb1a2b1/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixByteObjectInspector.java
--
diff --git 
a/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixByteObjectInspector.java
 
b/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixByteObjectInspector.java
index c6c5e95..a19342a 100644
--- 
a/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixByteObjectInspector.java
+++ 
b/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixByteObjectInspector.java
@@ -37,6 +37,11 @@ public class PhoenixByteObjectInspector extends 
AbstractPhoenixObjectInspector

http://git-wip-us.apache.org/repos/asf/phoenix/blob/3bb1a2b1/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixDecimalObjectInspector.java
--
diff --git 
a/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixDecimalObjectInspector.java
 
b/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixDecimalObjectInspector.java
index 388863a..8afe10f 100644
--- 
a/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixDecimalObjectInspector.java
+++ 
b/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixDecimalObjectInspector.java
@@ -35,7 +35,7 @@ public class PhoenixDecimalObjectInspector extends
 
 @Override
 public Object copyObject(Object

[43/50] [abbrv] phoenix git commit: Fail-fast iterators for EncodedColumnQualifierCellsList. Use list iterators instead of get(index) for navigating lists. Use HBase bytes utility for encoded column names

2016-11-04 Thread samarth
http://git-wip-us.apache.org/repos/asf/phoenix/blob/ede568e9/phoenix-core/src/main/java/org/apache/phoenix/schema/tuple/PositionBasedResultTuple.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/tuple/PositionBasedResultTuple.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/tuple/PositionBasedResultTuple.java
new file mode 100644
index 000..109cfc3
--- /dev/null
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/tuple/PositionBasedResultTuple.java
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.schema.tuple;
+
+import static com.google.common.base.Preconditions.checkArgument;
+
+import java.util.Iterator;
+import java.util.List;
+
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.util.EncodedColumnsUtil;
+
+public class PositionBasedResultTuple extends BaseTuple {
+private final EncodedColumnQualiferCellsList cells;
+
+public PositionBasedResultTuple(List<Cell> list) {
+checkArgument(list instanceof EncodedColumnQualiferCellsList, "Invalid 
list type");
+this.cells = (EncodedColumnQualiferCellsList)list;
+}
+
+@Override
+public void getKey(ImmutableBytesWritable ptr) {
+Cell value = cells.getFirstCell();
+ptr.set(value.getRowArray(), value.getRowOffset(), 
value.getRowLength());
+}
+
+@Override
+public boolean isImmutable() {
+return true;
+}
+
+@Override
+public KeyValue getValue(byte[] family, byte[] qualifier) {
+int columnQualifier = 
EncodedColumnsUtil.getEncodedColumnQualifier(qualifier);
+return 
org.apache.hadoop.hbase.KeyValueUtil.ensureKeyValue(cells.getCellForColumnQualifier(columnQualifier));
+}
+
+@Override
+public String toString() {
+  StringBuilder sb = new StringBuilder();
+  sb.append("keyvalues=");
+  if(this.cells == null || this.cells.isEmpty()) {
+sb.append("NONE");
+return sb.toString();
+  }
+  sb.append("{");
+  boolean moreThanOne = false;
+  for(Cell kv : this.cells) {
+if(moreThanOne) {
+  sb.append(", \n");
+} else {
+  moreThanOne = true;
+}
+sb.append(kv.toString()+"/value="+Bytes.toString(kv.getValueArray(), 
+  kv.getValueOffset(), kv.getValueLength()));
+  }
+  sb.append("}\n");
+  return sb.toString();
+}
+
+@Override
+public int size() {
+return cells.size();
+}
+
+@Override
+public KeyValue getValue(int index) {
+return org.apache.hadoop.hbase.KeyValueUtil.ensureKeyValue(index == 0 
? cells.getFirstCell() : cells.get(index));
+}
+
+@Override
+public boolean getValue(byte[] family, byte[] qualifier,
+ImmutableBytesWritable ptr) {
+KeyValue kv = getValue(family, qualifier);
+if (kv == null)
+return false;
+ptr.set(kv.getValueArray(), kv.getValueOffset(), kv.getValueLength());
+return true;
+}
+
+public Iterator<Cell> getTupleIterator() {
+return new TupleIterator(cells.iterator());
+}
+
+private static class TupleIterator implements Iterator<Cell> {
+
+private final Iterator<Cell> delegate;
+private TupleIterator(Iterator<Cell> delegate) {
+this.delegate = delegate;
+}
+
+@Override
+public boolean hasNext() {
+return delegate.hasNext();
+}
+
+@Override
+public Cell next() {
+return delegate.next();
+}
+
+@Override
+public void remove() {
+delegate.remove();
+}
+
+}
+}

http://git-wip-us.apache.org/repos/asf/phoenix/blob/ede568e9/phoenix-core/src/main/java/org/apache/phoenix/schema/tuple/ResultTuple.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/tuple/ResultTuple.java 
b/phoenix-core/src/main/java/or

[15/50] [abbrv] phoenix git commit: PHOENIX-3417 Refactor function argument validation with function argument info to separate method (Rajeshbabu)

2016-11-04 Thread samarth
PHOENIX-3417 Refactor function argument validation with function argument info 
to separate method (Rajeshbabu)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/87266ef0
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/87266ef0
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/87266ef0

Branch: refs/heads/encodecolumns2
Commit: 87266ef07f070129a7dcefe7f214b9e8b07dbf56
Parents: fc3af30
Author: Rajeshbabu Chintaguntla 
Authored: Fri Oct 28 12:29:49 2016 +0530
Committer: Rajeshbabu Chintaguntla 
Committed: Fri Oct 28 12:29:49 2016 +0530

--
 .../apache/phoenix/parse/FunctionParseNode.java | 73 +++-
 1 file changed, 40 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/87266ef0/phoenix-core/src/main/java/org/apache/phoenix/parse/FunctionParseNode.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/parse/FunctionParseNode.java 
b/phoenix-core/src/main/java/org/apache/phoenix/parse/FunctionParseNode.java
index 0dd021b..952d0d3 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/parse/FunctionParseNode.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/parse/FunctionParseNode.java
@@ -186,44 +186,51 @@ public class FunctionParseNode extends CompoundParseNode {
 }
 }
 } else {
-if (allowedTypes.length > 0) {
-boolean isCoercible = false;
-for (Class<? extends PDataType> type : allowedTypes) {
-if (child.getDataType().isCoercibleTo(
-
PDataTypeFactory.getInstance().instanceFromClass(type))) {
-isCoercible = true;
-break;
-}
-}
-if (!isCoercible) {
-throw new 
ArgumentTypeMismatchException(args[i].getAllowedTypes(),
-child.getDataType(), info.getName() + " argument " 
+ (i + 1));
-}
-if (child instanceof LiteralExpression) {
-LiteralExpression valueExp = (LiteralExpression) child;
-LiteralExpression minValue = args[i].getMinValue();
-LiteralExpression maxValue = args[i].getMaxValue();
-if (minValue != null && 
minValue.getDataType().compareTo(minValue.getValue(), valueExp.getValue(), 
valueExp.getDataType()) > 0) {
-throw new ValueRangeExcpetion(minValue, maxValue 
== null ? "" : maxValue, valueExp.getValue(), info.getName() + " argument " + 
(i + 1));
-}
-if (maxValue != null && 
maxValue.getDataType().compareTo(maxValue.getValue(), valueExp.getValue(), 
valueExp.getDataType()) < 0) {
-throw new ValueRangeExcpetion(minValue == null ? 
"" : minValue, maxValue, valueExp.getValue(), info.getName() + " argument " + 
(i + 1));
-}
-}
+validateFunctionArguement(info, i, child);
+}
+}
+return children;
+}
+
+public static void validateFunctionArguement(BuiltInFunctionInfo info,
+int childIndex, Expression child)
+throws ArgumentTypeMismatchException, ValueRangeExcpetion {
+BuiltInFunctionArgInfo arg = info.getArgs()[childIndex];
+if (arg.getAllowedTypes().length > 0) {
+boolean isCoercible = false;
+for (Class<? extends PDataType> type : arg.getAllowedTypes()) {
+if (child.getDataType().isCoercibleTo(
+PDataTypeFactory.getInstance().instanceFromClass(type))) {
+isCoercible = true;
+break;
 }
-if (args[i].isConstant() && ! (child instanceof 
LiteralExpression) ) {
-throw new ArgumentTypeMismatchException("constant", 
child.toString(), info.getName() + " argument " + (i + 1));
+}
+if (!isCoercible) {
+throw new ArgumentTypeMismatchException(arg.getAllowedTypes(),
+child.getDataType(), info.getName() + " argument " + 
(childIndex + 1));
+}
+if (child instanceof LiteralExpression) {
+LiteralExpression valueExp = (LiteralExpression) child;
+LiteralExpression minValue = arg.getMinValue();
+LiteralExpression maxValue = arg.getMaxValue();
+if (minValue != null && 
minValue.getDataType().compareTo(minValue.getValue(), valueExp.getValue(), 
valueExp.getDataType()) > 0) {
+throw new ValueRangeExcpetio

[16/50] [abbrv] phoenix git commit: PHOENIX-3374 Wrong data row key is getting generated for local indexes for functions with fixed non-null columns (Rajeshbabu)

2016-11-04 Thread samarth
PHOENIX-3374 Wrong data row key is getting generated for local indexes for 
functions with fixed non-null columns (Rajeshbabu)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/16e4a181
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/16e4a181
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/16e4a181

Branch: refs/heads/encodecolumns2
Commit: 16e4a181c665f1be63a89263d33731e2e18ce8df
Parents: 87266ef
Author: Rajeshbabu Chintaguntla 
Authored: Fri Oct 28 18:34:05 2016 +0530
Committer: Rajeshbabu Chintaguntla 
Committed: Fri Oct 28 18:34:05 2016 +0530

--
 .../phoenix/end2end/index/LocalIndexIT.java | 21 
 .../apache/phoenix/index/IndexMaintainer.java   |  2 +-
 2 files changed, 22 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/16e4a181/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
index bf99db0..4ef98a3 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalIndexIT.java
@@ -77,6 +77,27 @@ public class LocalIndexIT extends BaseLocalIndexIT {
 PTable localIndex = conn1.unwrap(PhoenixConnection.class).getTable(new 
PTableKey(null, indexTableName));
 assertEquals(IndexType.LOCAL, localIndex.getIndexType());
 assertNotNull(localIndex.getViewIndexId());
+String tableName2 = "test_table" + generateUniqueName();
+String indexName2 = "idx_test_table" + generateUniqueName();
+String createTable =
+"CREATE TABLE IF NOT EXISTS "
++ tableName2
++ " (user_time UNSIGNED_TIMESTAMP NOT NULL,user_id 
varchar NOT NULL,col1 varchar,col2 double,"
++ "CONSTRAINT pk PRIMARY KEY(user_time,user_id)) 
SALT_BUCKETS = 20";
+conn1.createStatement().execute(createTable);
+conn1.createStatement().execute(
+"CREATE local INDEX IF NOT EXISTS " + indexName2 + " on " + 
tableName2
++ "(HOUR(user_time))");
+conn1.createStatement().execute(
+"upsert into " + tableName2 + " values(TO_TIME('2005-10-01 
14:03:22.559'), 'foo')");
+conn1.commit();
+ResultSet rs =
+conn1.createStatement()
+.executeQuery(
+"select substr(to_char(user_time), 0, 10) as 
ddate, hour(user_time) as hhour, user_id, col1,col2 from "
++ tableName2
++ " where hour(user_time)=14 group by 
user_id, col1, col2, ddate, hhour limit 1");
+assertTrue(rs.next());
 }
 
 @Test

http://git-wip-us.apache.org/repos/asf/phoenix/blob/16e4a181/phoenix-core/src/main/java/org/apache/phoenix/index/IndexMaintainer.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/index/IndexMaintainer.java 
b/phoenix-core/src/main/java/org/apache/phoenix/index/IndexMaintainer.java
index 6595562..237ed75 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/index/IndexMaintainer.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/index/IndexMaintainer.java
@@ -785,7 +785,7 @@ public class IndexMaintainer implements Writable, 
Iterable {
 Integer scaleToBe;
 if (indexField == null) {
 Expression e = expressionItr.next();
-isNullableToBe = true;
+isNullableToBe = e.isNullable();
 dataTypeToBe = 
IndexUtil.getIndexColumnDataType(isNullableToBe, e.getDataType());
 sortOrderToBe = descIndexColumnBitSet.get(i) ? SortOrder.DESC 
: SortOrder.ASC;
 maxLengthToBe = e.getMaxLength();



[18/50] [abbrv] phoenix git commit: PHOENIX-3396 Valid Multi-byte strings whose total byte size is greater than the max char limit cannot be inserted into VARCHAR fields in the PK

2016-11-04 Thread samarth
PHOENIX-3396 Valid Multi-byte strings whose total byte size is greater than the 
max char limit cannot be inserted into VARCHAR fields in the PK
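
The distinction under test: Java's String.length() counts UTF-16 code units, while the buggy check compared the UTF-8 encoded byte length against the VARCHAR character limit, so valid multi-byte strings were rejected in the PK path. A plain-JDK illustration of the gap:

    import java.nio.charset.StandardCharsets;

    public class ByteVsCharLength {
        public static void main(String[] args) {
            String value = "碉碉碉碉碉"; // 5 characters, 3 UTF-8 bytes each
            int charLen = value.length();                                // 5
            int byteLen = value.getBytes(StandardCharsets.UTF_8).length; // 15
            // A VARCHAR(10) should accept this 5-character string even
            // though its encoded form is 15 bytes.
            System.out.println(charLen + " chars, " + byteLen + " bytes");
        }
    }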


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/6ef3a3f0
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/6ef3a3f0
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/6ef3a3f0

Branch: refs/heads/encodecolumns2
Commit: 6ef3a3f04597df0404601437e6ba17aa7f4f46e5
Parents: 030fb76
Author: James Taylor 
Authored: Fri Oct 28 08:59:13 2016 -0700
Committer: James Taylor 
Committed: Fri Oct 28 17:23:29 2016 -0700

--
 .../src/test/java/org/apache/phoenix/schema/MutationTest.java   | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/6ef3a3f0/phoenix-core/src/test/java/org/apache/phoenix/schema/MutationTest.java
--
diff --git 
a/phoenix-core/src/test/java/org/apache/phoenix/schema/MutationTest.java 
b/phoenix-core/src/test/java/org/apache/phoenix/schema/MutationTest.java
index ce2e22f..e0f48c0 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/schema/MutationTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/schema/MutationTest.java
@@ -83,8 +83,9 @@ public class MutationTest extends BaseConnectionlessQueryTest 
{
 conn.setAutoCommit(false);
 String bvalue = "01234567890123456789";
 assertEquals(20,PVarchar.INSTANCE.toBytes(bvalue).length);
-String value = "澴粖蟤य褻é…
ƒå²¤è±¦íŒ‘薰鄩脼ժ끦碉碉碉碉碉碉";
-assertTrue(value.length() <= maxLength2 && value.getBytes().length 
> maxLength2);
+String value = "澴粖蟤य褻é…
ƒå²¤è±¦íŒ‘薰鄩脼ժ끦碉碉碉碉碉";
+assertTrue(value.length() <= maxLength2);
+assertTrue(PVarchar.INSTANCE.toBytes(value).length > maxLength2);
 conn.createStatement().execute("CREATE TABLE t1 (k1 char(" + 
maxLength1 + ") not null, k2 varchar(" + maxLength2 + "), "
 + "v1 varchar(" + maxLength2 + "), v2 varbinary(" + 
maxLength2 + "), v3 binary(" + maxLength2 + "), constraint pk primary key (k1, 
k2))");
 conn.createStatement().execute("UPSERT INTO t1 VALUES('a','" + 
value + "', '" + value + "','" + bvalue + "','" + bvalue + "')");



[04/50] [abbrv] phoenix git commit: PHOENIX-476 Support declaration of DEFAULT in CREATE statement (Kevin Liew)

2016-11-04 Thread samarth
http://git-wip-us.apache.org/repos/asf/phoenix/blob/5ea09210/phoenix-core/src/main/java/org/apache/phoenix/schema/DelegateSQLException.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/DelegateSQLException.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/DelegateSQLException.java
new file mode 100644
index 000..9ed4805
--- /dev/null
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/DelegateSQLException.java
@@ -0,0 +1,62 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.schema;
+
+import java.sql.SQLException;
+import java.util.Iterator;
+
+public class DelegateSQLException extends SQLException {
+private final SQLException delegate;
+private final String msg;
+
+public DelegateSQLException(SQLException e, String msg) {
+this.delegate = e;
+this.msg = e.getMessage() + msg;
+}
+
+@Override
+public String getMessage() {
+return msg;
+}
+
+@Override
+public String getSQLState() {
+return delegate.getSQLState();
+}
+
+@Override
+public int getErrorCode() {
+return delegate.getErrorCode();
+}
+
+@Override
+public SQLException getNextException() {
+return delegate.getNextException();
+}
+
+@Override
+public void setNextException(SQLException ex) {
+delegate.setNextException(ex);
+}
+
+@Override
+public Iterator<Throwable> iterator() {
+return delegate.iterator();
+}
+
+}

http://git-wip-us.apache.org/repos/asf/phoenix/blob/5ea09210/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
index 285c8fa..93fddae 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
@@ -1370,6 +1370,9 @@ public class MetaDataClient {
 String columnFamilyName = column.getFamilyName()!=null 
? column.getFamilyName().getString() : null;
 colName = 
ColumnName.caseSensitiveColumnName(IndexUtil.getIndexColumnName(columnFamilyName,
 column.getName().getString()));
 isRowTimestamp = column.isRowTimestamp();
+if (colRef.getColumn().getExpressionStr() != null) {
+expressionStr = 
colRef.getColumn().getExpressionStr();
+}
 }
 else {
 // if this is an expression
@@ -1411,7 +1414,7 @@ public class MetaDataClient {
 if (!SchemaUtil.isPKColumn(col) && col.getViewConstant() 
== null) {
 // Need to re-create ColumnName, since the above one 
won't have the column family name
 colName = 
ColumnName.caseSensitiveColumnName(isLocalIndex?IndexUtil.getLocalIndexColumnFamily(col.getFamilyName().getString()):col.getFamilyName().getString(),
 IndexUtil.getIndexColumnName(col));
-columnDefs.add(FACTORY.columnDef(colName, 
col.getDataType().getSqlTypeName(), col.isNullable(), col.getMaxLength(), 
col.getScale(), false, col.getSortOrder(), null, col.isRowTimestamp()));
+columnDefs.add(FACTORY.columnDef(colName, 
col.getDataType().getSqlTypeName(), col.isNullable(), col.getMaxLength(), 
col.getScale(), false, col.getSortOrder(), col.getExpressionStr(), 
col.isRowTimestamp()));
 }
 }
 
@@ -3651,8 +3654,7 @@ public class MetaDataClient {
 
 public MutationState useSchema(UseSchemaStatement useSchemaStatement) 
throws SQLException {
 // As we allow default namespace mapped to empty schema, so this is to 
reset schema in connection
-if (useSchemaStatement.getSchemaName().equals(StringUtil.EMPTY_STRING)
-|| 
useSchemaStatement.getSchemaName().toUpperCase().equals(Sch

[50/50] [abbrv] phoenix git commit: Fail-fast iterators for EncodedColumnQualifierCellsList. Use list iterators instead of get(index) for navigating lists. Use HBase bytes utility for encoded column names

2016-11-04 Thread samarth
Fail-fast iterators for EncodedColumnQualifierCellsList.
Use list iterators instead of get(index) for navigating lists.
Use HBase bytes utility for encoded column names.
Fix test failures for immutable tables and indexes.
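
The fail-fast half of this is easy to see in miniature: the iterator snapshots a modification counter at creation and refuses to serve stale cells once the backing structure changes. A compact sketch (a hypothetical container, not the Phoenix class):

    import java.util.ConcurrentModificationException;
    import java.util.Iterator;

    public class FailFastSlots<E> implements Iterable<E> {
        private final Object[] slots;
        private int modCount; // bumped on every structural change

        public FailFastSlots(int capacity) { this.slots = new Object[capacity]; }

        public void set(int i, E value) { slots[i] = value; modCount++; }

        @Override
        public Iterator<E> iterator() {
            return new Iterator<E>() {
                private final int expectedModCount = modCount;
                private int next = advance(0);

                private int advance(int from) { // skip empty slots
                    while (from < slots.length && slots[from] == null) from++;
                    return from;
                }

                @Override
                public boolean hasNext() { return next < slots.length; }

                @Override
                @SuppressWarnings("unchecked")
                public E next() {
                    if (modCount != expectedModCount) {
                        throw new ConcurrentModificationException();
                    }
                    E e = (E) slots[next];
                    next = advance(next + 1);
                    return e;
                }
            };
        }
    }

The list-iterator point is the usual complexity argument: on a list without cheap random access, looping over get(i) degrades to quadratic time, while a single iterator pass stays linear.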


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/ede568e9
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/ede568e9
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/ede568e9

Branch: refs/heads/encodecolumns2
Commit: ede568e9c4e4d35e7f4afe19637c8dd7cf5af23c
Parents: 87421ed
Author: Samarth 
Authored: Wed Oct 5 00:11:07 2016 -0700
Committer: Samarth 
Committed: Fri Nov 4 15:12:54 2016 -0700

--
 .../AlterMultiTenantTableWithViewsIT.java   |   25 +-
 .../apache/phoenix/end2end/AlterTableIT.java|  286 +++-
 .../phoenix/end2end/AlterTableWithViewsIT.java  |  143 +-
 .../apache/phoenix/end2end/CreateTableIT.java   |5 +
 .../org/apache/phoenix/end2end/OrderByIT.java   |2 -
 .../phoenix/end2end/PhoenixRuntimeIT.java   |4 +-
 .../phoenix/end2end/RowValueConstructorIT.java  |2 +-
 .../phoenix/end2end/StatsCollectorIT.java   |   16 +-
 .../apache/phoenix/end2end/StoreNullsIT.java|   41 +-
 .../apache/phoenix/end2end/UpsertValuesIT.java  |   45 +-
 .../phoenix/end2end/index/DropMetadataIT.java   |   13 +-
 .../end2end/index/IndexExpressionIT.java|   28 +-
 .../apache/phoenix/end2end/index/IndexIT.java   |   26 +-
 .../phoenix/end2end/index/IndexTestUtil.java|   11 +-
 .../end2end/index/MutableIndexFailureIT.java|2 +
 .../phoenix/compile/CreateTableCompiler.java|   15 +-
 .../phoenix/compile/ExpressionCompiler.java |   18 +-
 .../apache/phoenix/compile/FromCompiler.java|   50 +-
 .../apache/phoenix/compile/JoinCompiler.java|8 +-
 .../phoenix/compile/ListJarsQueryPlan.java  |2 +-
 .../apache/phoenix/compile/PostDDLCompiler.java |   11 +-
 .../compile/PostLocalIndexDDLCompiler.java  |7 +-
 .../phoenix/compile/ProjectionCompiler.java |   10 +-
 .../apache/phoenix/compile/QueryCompiler.java   |2 +-
 .../apache/phoenix/compile/TraceQueryPlan.java  |2 +-
 .../compile/TupleProjectionCompiler.java|   21 +-
 .../apache/phoenix/compile/UnionCompiler.java   |5 +-
 .../apache/phoenix/compile/UpsertCompiler.java  |5 +-
 .../apache/phoenix/compile/WhereCompiler.java   |   13 +-
 .../coprocessor/BaseScannerRegionObserver.java  |   40 +-
 .../coprocessor/DelegateRegionScanner.java  |5 +
 .../GroupedAggregateRegionObserver.java |   27 +-
 .../coprocessor/HashJoinRegionScanner.java  |9 +-
 .../coprocessor/MetaDataEndpointImpl.java   |  270 ++--
 .../phoenix/coprocessor/ScanRegionObserver.java |   15 +-
 .../UngroupedAggregateRegionObserver.java   |   16 +-
 .../coprocessor/generated/PTableProtos.java | 1379 --
 .../apache/phoenix/execute/BaseQueryPlan.java   |   25 +-
 .../apache/phoenix/execute/MutationState.java   |   14 +-
 .../apache/phoenix/execute/TupleProjector.java  |6 +-
 .../expression/ArrayColumnExpression.java   |  142 ++
 .../expression/ArrayConstructorExpression.java  |2 +-
 .../phoenix/expression/ExpressionType.java  |5 +-
 .../expression/KeyValueColumnExpression.java|   17 +-
 .../phoenix/expression/LiteralExpression.java   |   11 +-
 .../expression/ProjectedColumnExpression.java   |1 +
 .../visitor/CloneExpressionVisitor.java |6 +
 .../expression/visitor/ExpressionVisitor.java   |2 +
 .../StatelessTraverseAllExpressionVisitor.java  |7 +-
 .../StatelessTraverseNoExpressionVisitor.java   |7 +-
 .../phoenix/filter/ColumnProjectionFilter.java  |   24 +-
 .../filter/MultiKeyValueComparisonFilter.java   |5 +-
 .../SingleCQKeyValueComparisonFilter.java   |3 +-
 .../filter/SingleKeyValueComparisonFilter.java  |4 +-
 .../apache/phoenix/hbase/index/ValueGetter.java |1 +
 .../example/CoveredColumnIndexCodec.java|1 -
 .../hbase/index/util/KeyValueBuilder.java   |1 +
 .../apache/phoenix/index/IndexMaintainer.java   |  327 -
 .../phoenix/index/PhoenixIndexBuilder.java  |2 +-
 .../index/PhoenixIndexFailurePolicy.java|5 +-
 .../index/PhoenixTransactionalIndexer.java  |   16 +-
 .../phoenix/iterate/BaseResultIterators.java|   95 +-
 .../iterate/LookAheadResultIterator.java|2 +-
 .../phoenix/iterate/MappedByteBufferQueue.java  |1 +
 .../phoenix/iterate/OrderedResultIterator.java  |3 +-
 .../iterate/RegionScannerResultIterator.java|   14 +-
 .../phoenix/jdbc/PhoenixDatabaseMetaData.java   |   12 +-
 .../apache/phoenix/jdbc/PhoenixResultSet.java   |2 +-
 .../apache/phoenix/join/HashCacheFactory.java   |1 +
 .../mapreduce/FormatToBytesWritableMapper.java  |   22 +-
 .../mapreduce/FormatToKeyValueReducer.java  |   30 +-
 .../query/Connect

[21/50] [abbrv] phoenix git commit: PHOENIX-3426 Upgrade to Avatica 1.9.0

2016-11-04 Thread samarth
PHOENIX-3426 Upgrade to Avatica 1.9.0

Avatica reworked its shaded artifacts, so we should account
for that downstream. This change makes sure that our artifacts are not
leaking classes that we bundle (emphasis on the thin-client jar).

Tweaked some other ancillary properties/deps to make the build
a bit more natural when we have divergent dependency versions.


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/4b85920e
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/4b85920e
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/4b85920e

Branch: refs/heads/encodecolumns2
Commit: 4b85920ef2c407c2275082f6fc69ea8f31b6bf41
Parents: 377ef93
Author: Josh Elser 
Authored: Mon Oct 31 10:43:20 2016 -0400
Committer: Josh Elser 
Committed: Mon Oct 31 11:11:51 2016 -0400

--
 phoenix-core/pom.xml   |  1 +
 phoenix-queryserver-client/pom.xml | 12 +++-
 phoenix-queryserver/pom.xml| 25 -
 pom.xml|  9 ++---
 4 files changed, 38 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/4b85920e/phoenix-core/pom.xml
--
diff --git a/phoenix-core/pom.xml b/phoenix-core/pom.xml
index 7a3c64a..b01787c 100644
--- a/phoenix-core/pom.xml
+++ b/phoenix-core/pom.xml
@@ -311,6 +311,7 @@
   protobuf-java
   ${protobuf-java.version}
 
+
 
   org.apache.httpcomponents
   httpclient

http://git-wip-us.apache.org/repos/asf/phoenix/blob/4b85920e/phoenix-queryserver-client/pom.xml
--
diff --git a/phoenix-queryserver-client/pom.xml 
b/phoenix-queryserver-client/pom.xml
index 6522d4f..8b27237 100644
--- a/phoenix-queryserver-client/pom.xml
+++ b/phoenix-queryserver-client/pom.xml
@@ -50,6 +50,7 @@
   
 ${project.basedir}/..
 org.apache.phoenix.shaded
+3.1.0
   
 
   
@@ -89,6 +90,7 @@
   NOTICE
   ${project.basedir}/../NOTICE
 
+
   
   
 
@@ -112,6 +114,14 @@
 <relocation>
   <pattern>com.fasterxml</pattern>
   <shadedPattern>${shaded.package}.com.fasterxml</shadedPattern>
 </relocation>
+<relocation>
+  <pattern>com.google.collect</pattern>
+  <shadedPattern>${shaded.package}.com.google.collect</shadedPattern>
+</relocation>
+<relocation>
+  <pattern>com.google.protobuf</pattern>
+  <shadedPattern>${shaded.package}.com.google.protobuf</shadedPattern>
+</relocation>
 
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/4b85920e/phoenix-queryserver/pom.xml
--
diff --git a/phoenix-queryserver/pom.xml b/phoenix-queryserver/pom.xml
index 6340be7..e16257e 100644
--- a/phoenix-queryserver/pom.xml
+++ b/phoenix-queryserver/pom.xml
@@ -36,6 +36,7 @@
   
 ${project.basedir}/..
 org.apache.phoenix.shaded
+3.1.0
   
 
   
@@ -79,12 +80,18 @@
   NOTICE
   ${project.basedir}/../NOTICE
 
+
   
   
 
 <include>org.apache.calcite.avatica:*</include>
 <include>org.eclipse.jetty:*</include>
 <include>javax.servlet:*</include>
+<include>org.apache.httpcomponents:*</include>
+<include>commons-codec:*</include>
+<include>commons-logging:*</include>
+<include>com.google.protobuf:*</include>
+<include>com.fasterxml.jackson.core:*</include>
 
   
   
@@ -105,6 +112,22 @@
 <relocation>
   <pattern>org.eclipse.jetty</pattern>
   <shadedPattern>${shaded.package}.org.eclipse.jetty</shadedPattern>
 </relocation>
+<relocation>
+  <pattern>com.google.protobuf</pattern>
+  <shadedPattern>${shaded.package}.com.google.protobuf</shadedPattern>
+</relocation>
+<relocation>
+  <pattern>com.fasterxml.jackson</pattern>
+  <shadedPattern>${shaded.package}.com.fasterxml.jackson</shadedPattern>
+</relocation>
+<relocation>
+  <pattern>org.apache.commons</pattern>
+  <shadedPattern>${shaded.package}.org.apache.commons</shadedPattern>
+</relocation>
+<relocation>
+  <pattern>org.apache.http</pattern>
+  <shadedPattern>${shaded.package}.org.apache.http</shadedPattern>
+</relocation>
 
@@ -123,7 +146,7 @@
 
 
   <groupId>org.apache.calcite.avatica</groupId>
-  <artifactId>avatica</artifactId>
+  <artifactId>avatica-core</artifactId>
 </dependency>
 <dependency>
   <groupId>org.apache.calcite.avatica</groupId>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/4b85920e/pom.xml
--
diff --git a/pom.xml b/pom.xml
index f7db2d7..d39d822 100644
--- a/pom.xml
+++ b/pom.xml
@@ -96,7 +96,7 @@
 
 1.6
 2.1.2
-1.8.0
+1.9.0
 8.1.7.v20120910
 0.9.0-incubating
 1.6.1
@@ -710,7 +710,7 @@
   
   
 org.apache.calcite.avatica
- 

[23/50] [abbrv] phoenix git commit: PHOENIX-3428 Fix flapping tests in UpgradeIT

2016-11-04 Thread samarth
PHOENIX-3428 Fix flapping tests in UpgradeIT
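
The essence of the fix, visible in the diff below, is replacing one shared SYSTEM.CATALOG mutex row key with a key generated per test, so tests running in parallel stop contending for (and flapping on) the same row. The idea in miniature (hypothetical helper names):

    import java.nio.charset.StandardCharsets;
    import java.util.concurrent.atomic.AtomicInteger;

    public class UniqueTestKeys {
        private static final AtomicInteger COUNTER = new AtomicInteger();

        // Each caller gets a distinct row key, so no two tests can
        // block on or release each other's mutex row.
        public static byte[] nextMutexRowKey() {
            String name = "T" + System.currentTimeMillis() + "_" + COUNTER.incrementAndGet();
            return name.getBytes(StandardCharsets.UTF_8);
        }
    }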


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/e63f6d67
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/e63f6d67
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/e63f6d67

Branch: refs/heads/encodecolumns2
Commit: e63f6d672e30c2c36185cb0ac226b0a6195f2dc2
Parents: 29c2c0a
Author: Samarth 
Authored: Mon Oct 31 11:31:36 2016 -0700
Committer: Samarth 
Committed: Mon Oct 31 11:31:36 2016 -0700

--
 .../it/java/org/apache/phoenix/end2end/UpgradeIT.java  | 13 -
 1 file changed, 8 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/e63f6d67/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpgradeIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpgradeIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpgradeIT.java
index d377bd2..0e5f9f2 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpgradeIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpgradeIT.java
@@ -72,8 +72,6 @@ import org.junit.Test;
 public class UpgradeIT extends ParallelStatsDisabledIT {
 
 private String tenantId;
-private static final byte[] mutexRowKey = SchemaUtil.getTableKey(null, 
PhoenixDatabaseMetaData.SYSTEM_CATALOG_SCHEMA,
-PhoenixDatabaseMetaData.SYSTEM_CATALOG_TABLE);
 
 @Before
 public void generateTenantId() {
@@ -699,6 +697,8 @@ public class UpgradeIT extends ParallelStatsDisabledIT {
 @Test
 public void testAcquiringAndReleasingUpgradeMutex() throws Exception {
 ConnectionQueryServices services = null;
+byte[] mutexRowKey = SchemaUtil.getTableKey(null, 
PhoenixDatabaseMetaData.SYSTEM_CATALOG_SCHEMA,
+generateUniqueName());
 try (Connection conn = getConnection(false, null)) {
 services = conn.unwrap(PhoenixConnection.class).getQueryServices();
 assertTrue(((ConnectionQueryServicesImpl)services)
@@ -724,8 +724,9 @@ public class UpgradeIT extends ParallelStatsDisabledIT {
 ConnectionQueryServices services = null;
 try (Connection conn = getConnection(false, null)) {
 services = conn.unwrap(PhoenixConnection.class).getQueryServices();
-FutureTask<Void> task1 = new FutureTask<>(new AcquireMutexRunnable(mutexStatus1, services, latch, numExceptions));
-FutureTask<Void> task2 = new FutureTask<>(new AcquireMutexRunnable(mutexStatus2, services, latch, numExceptions));
+final byte[] mutexKey = Bytes.toBytes(generateUniqueName());
+FutureTask<Void> task1 = new FutureTask<>(new AcquireMutexRunnable(mutexStatus1, services, latch, numExceptions, mutexKey));
+FutureTask<Void> task2 = new FutureTask<>(new AcquireMutexRunnable(mutexStatus2, services, latch, numExceptions, mutexKey));
 Thread t1 = new Thread(task1);
 t1.setDaemon(true);
 Thread t2 = new Thread(task2);
@@ -760,11 +761,13 @@ public class UpgradeIT extends ParallelStatsDisabledIT {
 private final ConnectionQueryServices services;
 private final CountDownLatch latch;
 private final AtomicInteger numExceptions;
-public AcquireMutexRunnable(AtomicBoolean acquireStatus, 
ConnectionQueryServices services, CountDownLatch latch, AtomicInteger 
numExceptions) {
+private final byte[] mutexRowKey;
+public AcquireMutexRunnable(AtomicBoolean acquireStatus, 
ConnectionQueryServices services, CountDownLatch latch, AtomicInteger 
numExceptions, byte[] mutexKey) {
 this.acquireStatus = acquireStatus;
 this.services = services;
 this.latch = latch;
 this.numExceptions = numExceptions;
+this.mutexRowKey = mutexKey;
 }
 @Override
 public Void call() throws Exception {



[32/50] [abbrv] phoenix git commit: PHOENIX-3422 PhoenixQueryBuilder doesn't make the value string correctly for 
char(/varchar) column types.
2016-11-04 Thread samarth
PHOENIX-3422 PhoenixQueryBuilder doesn't make the value string correctly for 
char(/varchar) column types.

Signed-off-by: Sergey Soldatov 
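
The underlying issue is SQL-literal construction: character types need their constants quoted (and embedded quotes escaped), while numeric constants must pass through bare. A simplified sketch of the idea (a hypothetical helper, not the patched createConstantString):

    public class ConstantStrings {
        static String createConstantString(String typeName, String value) {
            if ("char".equalsIgnoreCase(typeName) || "varchar".equalsIgnoreCase(typeName)) {
                return "'" + value.replace("'", "''") + "'"; // quote and escape
            }
            return value; // numeric types stay unquoted
        }

        public static void main(String[] args) {
            System.out.println("col = " + createConstantString("varchar", "abc")); // col = 'abc'
            System.out.println("col = " + createConstantString("bigint", "42"));   // col = 42
        }
    }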


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/a225f5ff
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/a225f5ff
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/a225f5ff

Branch: refs/heads/encodecolumns2
Commit: a225f5ffe773dde7a7efc1ada1d6dbda9d667cdf
Parents: cf70820
Author: Jeongdae Kim 
Authored: Fri Oct 28 17:13:23 2016 +0900
Committer: Sergey Soldatov 
Committed: Wed Nov 2 12:58:46 2016 -0700

--
 phoenix-hive/pom.xml|   7 +-
 .../phoenix/hive/query/PhoenixQueryBuilder.java | 129 ++-
 .../hive/util/PhoenixStorageHandlerUtil.java|   4 +-
 .../hive/query/PhoenixQueryBuilderTest.java |  87 +
 4 files changed, 163 insertions(+), 64 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/a225f5ff/phoenix-hive/pom.xml
--
diff --git a/phoenix-hive/pom.xml b/phoenix-hive/pom.xml
index 250db49..c36e737 100644
--- a/phoenix-hive/pom.xml
+++ b/phoenix-hive/pom.xml
@@ -110,7 +110,12 @@
   hadoop-minicluster
   test
 
-
+
+  org.mockito
+  mockito-all
+  ${mockito-all.version}
+  test
+
   
 
   

http://git-wip-us.apache.org/repos/asf/phoenix/blob/a225f5ff/phoenix-hive/src/main/java/org/apache/phoenix/hive/query/PhoenixQueryBuilder.java
--
diff --git 
a/phoenix-hive/src/main/java/org/apache/phoenix/hive/query/PhoenixQueryBuilder.java
 
b/phoenix-hive/src/main/java/org/apache/phoenix/hive/query/PhoenixQueryBuilder.java
index 8e3a972..a38814d 100644
--- 
a/phoenix-hive/src/main/java/org/apache/phoenix/hive/query/PhoenixQueryBuilder.java
+++ 
b/phoenix-hive/src/main/java/org/apache/phoenix/hive/query/PhoenixQueryBuilder.java
@@ -19,7 +19,9 @@ package org.apache.phoenix.hive.query;
 
 import com.google.common.base.CharMatcher;
 import com.google.common.base.Joiner;
+import com.google.common.base.Predicate;
 import com.google.common.base.Splitter;
+import com.google.common.collect.Iterables;
 import com.google.common.collect.Lists;
 import org.apache.commons.lang.StringUtils;
 import org.apache.commons.logging.Log;
@@ -31,12 +33,9 @@ import org.apache.phoenix.hive.ql.index.IndexSearchCondition;
 import org.apache.phoenix.hive.util.PhoenixStorageHandlerUtil;
 import org.apache.phoenix.hive.util.PhoenixUtil;
 
+import javax.annotation.Nullable;
 import java.io.IOException;
-import java.util.Arrays;
-import java.util.Collections;
-import java.util.Iterator;
-import java.util.List;
-import java.util.Map;
+import java.util.*;
 import java.util.regex.Matcher;
 import java.util.regex.Pattern;
 
@@ -662,17 +661,17 @@ public class PhoenixQueryBuilder {
 comparisonOp);
 
 if (comparisonOp.endsWith("UDFOPEqual")) {// column = 1
-appendCondition(sql, " = ", typeName, constantValues[0]);
+sql.append(" = ").append(createConstantString(typeName, 
constantValues[0]));
 } else if (comparisonOp.endsWith("UDFOPEqualOrGreaterThan")) {
// column >= 1
-appendCondition(sql, " >= ", typeName, constantValues[0]);
+sql.append(" >= ").append(createConstantString(typeName, 
constantValues[0]));
 } else if (comparisonOp.endsWith("UDFOPGreaterThan")) {// 
column > 1
-appendCondition(sql, " > ", typeName, constantValues[0]);
+sql.append(" > ").append(createConstantString(typeName, 
constantValues[0]));
 } else if (comparisonOp.endsWith("UDFOPEqualOrLessThan")) {// 
column <= 1
-appendCondition(sql, " <= ", typeName, constantValues[0]);
+sql.append(" <= ").append(createConstantString(typeName, 
constantValues[0]));
 } else if (comparisonOp.endsWith("UDFOPLessThan")) {// column 
< 1
-appendCondition(sql, " < ", typeName, constantValues[0]);
+sql.append(" < ").append(createConstantString(typeName, 
constantValues[0]));
 } else if (comparisonOp.endsWith("UDFOPNotEqual")) {// column 
!= 1
-appendCondition(sql, " != ", typeName, constantValues[0]);
+sql.append(" != ").append(createConstantString(typeName, 
constantValues[0]));
 } else if (comparisonOp.endsWith("GenericUDFBetween")) {
 appendBetweenCondition(jobConf, sql, condition.isNot(), 
typeName, constantValues);
 } else if (comparisonOp.endsWith("GenericUDFIn")) {
@@ -687,44 +686,16 @@ public class PhoenixQueryBuilder {
 return condi

[10/50] [abbrv] phoenix git commit: PHOENIX-6 Support ON DUPLICATE KEY construct

2016-11-04 Thread samarth
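
For context, PHOENIX-6 adds MySQL-style atomic upsert semantics to Phoenix. Usage looks roughly like this (the table, columns, and JDBC URL are made-up examples):

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class OnDuplicateKeyExample {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
                // Atomically increment on conflict instead of overwriting.
                conn.createStatement().execute(
                    "UPSERT INTO page_hits(url, hits) VALUES('/home', 1) " +
                    "ON DUPLICATE KEY UPDATE hits = hits + 1");
                // Or leave the existing row untouched when the key exists.
                conn.createStatement().execute(
                    "UPSERT INTO page_hits(url, hits) VALUES('/home', 1) " +
                    "ON DUPLICATE KEY IGNORE");
                conn.commit();
            }
        }
    }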
http://git-wip-us.apache.org/repos/asf/phoenix/blob/e2325a41/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java 
b/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java
index affa778..e4a64e3 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java
@@ -17,20 +17,76 @@
  */
 package org.apache.phoenix.index;
 
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.DataInputStream;
+import java.io.DataOutputStream;
+import java.io.EOFException;
 import java.io.IOException;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+
 import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue.Type;
+import org.apache.hadoop.hbase.client.Delete;
+import org.apache.hadoop.hbase.client.Get;
+import org.apache.hadoop.hbase.client.Increment;
 import org.apache.hadoop.hbase.client.Mutation;
+import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
+import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.hadoop.io.WritableUtils;
+import org.apache.phoenix.coprocessor.generated.PTableProtos;
+import org.apache.phoenix.exception.DataExceedsCapacityException;
+import org.apache.phoenix.expression.Expression;
+import org.apache.phoenix.expression.ExpressionType;
+import org.apache.phoenix.expression.KeyValueColumnExpression;
+import org.apache.phoenix.expression.visitor.ExpressionVisitor;
+import 
org.apache.phoenix.expression.visitor.StatelessTraverseAllExpressionVisitor;
 import org.apache.phoenix.hbase.index.covered.IndexMetaData;
 import org.apache.phoenix.hbase.index.covered.NonTxIndexBuilder;
+import org.apache.phoenix.hbase.index.util.GenericKeyValueBuilder;
 import org.apache.phoenix.hbase.index.write.IndexWriter;
+import org.apache.phoenix.schema.PColumn;
+import org.apache.phoenix.schema.PRow;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableImpl;
+import org.apache.phoenix.schema.tuple.MultiKeyValueTuple;
+import org.apache.phoenix.util.ByteUtil;
+import org.apache.phoenix.util.TrustedByteArrayOutputStream;
+
+import com.google.common.collect.Lists;
 
 /**
  * Index builder for covered-columns index that ties into phoenix for faster 
use.
  */
 public class PhoenixIndexBuilder extends NonTxIndexBuilder {
+public static final String ATOMIC_OP_ATTRIB = "_ATOMIC_OP_ATTRIB";
+private static final byte[] ON_DUP_KEY_IGNORE_BYTES = new byte[] {1}; // 
boolean true
+private static final int ON_DUP_KEY_HEADER_BYTE_SIZE = Bytes.SIZEOF_SHORT 
+ Bytes.SIZEOF_BOOLEAN;
+
 
+private static List<Cell> flattenCells(Mutation m, int estimatedSize) throws IOException {
+List<Cell> flattenedCells = Lists.newArrayListWithExpectedSize(estimatedSize);
+flattenCells(m, flattenedCells);
+return flattenedCells;
+}
+
+private static void flattenCells(Mutation m, List<Cell> flattenedCells) throws IOException {
+for (List<Cell> cells : m.getFamilyCellMap().values()) {
+flattenedCells.addAll(cells);
+}
+}
+
 @Override
+public IndexMetaData getIndexMetaData(MiniBatchOperationInProgress<Mutation> miniBatchOp) throws IOException {
 return new PhoenixIndexMetaData(env, 
miniBatchOp.getOperation(0).getAttributesMap());
@@ -53,4 +109,266 @@ public class PhoenixIndexBuilder extends NonTxIndexBuilder 
{
 @Override
+public void batchStarted(MiniBatchOperationInProgress<Mutation> miniBatchOp, IndexMetaData context) throws IOException {
 }
+
+@Override
+public boolean isAtomicOp(Mutation m) throws IOException {
+return m.getAttribute(ATOMIC_OP_ATTRIB) != null;
+}
+
+private static void transferCells(Mutation source, Mutation target) {
+target.getFamilyCellMap().putAll(source.getFamilyCellMap());
+}
+private static void transferAttributes(Mutation source, Mutation target) {
+for (Map.Entry<String, byte[]> entry : source.getAttributesMap().entrySet()) {
+target.setAttribute(entry.getKey(), entry.getValue());
+}
+}
+private static List<Mutation> convertIncrementToPutInSingletonList(Increment inc) {
+byte[] rowKey = inc.getRow();
+Put put = new Put(rowKey);
+transferCells(inc, put);
+transferAttributes

[36/50] [abbrv] phoenix git commit: PHOENIX-3434 Avoid creating new Configuration in ClientAggregatePlan to improve performance

2016-11-04 Thread samarth
PHOENIX-3434 Avoid creating new Configuration in ClientAggregatePlan to improve 
performance
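
The performance point generalizes: each new Configuration() re-reads the *-site.xml resources, so hot code paths should reuse an instance that is already loaded. A small sketch of the contrast (timings will vary by environment):

    import org.apache.hadoop.conf.Configuration;

    public class ConfigReuse {
        public static void main(String[] args) {
            Configuration shared = new Configuration(); // resources parsed once
            shared.get("fs.defaultFS"); // force the initial load

            long t0 = System.nanoTime();
            for (int i = 0; i < 1000; i++) {
                new Configuration().get("fs.defaultFS"); // re-parses every time
            }
            long t1 = System.nanoTime();
            for (int i = 0; i < 1000; i++) {
                new Configuration(shared).get("fs.defaultFS"); // copies, no re-parse
            }
            long t2 = System.nanoTime();
            System.out.println("fresh: " + (t1 - t0) + " ns, copied: " + (t2 - t1) + " ns");
        }
    }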


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/dcebfc2d
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/dcebfc2d
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/dcebfc2d

Branch: refs/heads/encodecolumns2
Commit: dcebfc2dd60ab31ea4f4812fb62a1e9897f64883
Parents: ecb9360
Author: James Taylor 
Authored: Wed Nov 2 11:42:41 2016 -0700
Committer: James Taylor 
Committed: Wed Nov 2 13:24:49 2016 -0700

--
 .../apache/phoenix/execute/ClientAggregatePlan.java   | 14 +-
 .../phoenix/expression/aggregator/Aggregators.java|  5 -
 .../expression/aggregator/ServerAggregators.java  |  2 --
 .../expression/function/SingleAggregateFunction.java  |  6 +++---
 .../apache/phoenix/query/ConnectionQueryServices.java |  3 +++
 .../phoenix/query/ConnectionQueryServicesImpl.java|  5 +
 .../query/ConnectionlessQueryServicesImpl.java|  8 +++-
 .../query/DelegateConnectionQueryServices.java|  6 ++
 8 files changed, 37 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/dcebfc2d/phoenix-core/src/main/java/org/apache/phoenix/execute/ClientAggregatePlan.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/execute/ClientAggregatePlan.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/execute/ClientAggregatePlan.java
index 9251724..8ef1f8d 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/execute/ClientAggregatePlan.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/execute/ClientAggregatePlan.java
@@ -38,6 +38,7 @@ import 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver;
 import org.apache.phoenix.expression.Expression;
 import org.apache.phoenix.expression.OrderByExpression;
 import org.apache.phoenix.expression.aggregator.Aggregators;
+import org.apache.phoenix.expression.aggregator.ClientAggregators;
 import org.apache.phoenix.expression.aggregator.ServerAggregators;
 import org.apache.phoenix.iterate.AggregatingResultIterator;
 import org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator;
@@ -68,18 +69,21 @@ import com.google.common.collect.Lists;
 public class ClientAggregatePlan extends ClientProcessingPlan {
 private final GroupBy groupBy;
 private final Expression having;
-private final Aggregators serverAggregators;
-private final Aggregators clientAggregators;
+private final ServerAggregators serverAggregators;
+private final ClientAggregators clientAggregators;
 
 public ClientAggregatePlan(StatementContext context, FilterableStatement 
statement, TableRef table, RowProjector projector,
 Integer limit, Integer offset, Expression where, OrderBy orderBy, 
GroupBy groupBy, Expression having, QueryPlan delegate) {
 super(context, statement, table, projector, limit, offset, where, 
orderBy, delegate);
 this.groupBy = groupBy;
 this.having = having;
-this.serverAggregators =
-ServerAggregators.deserialize(context.getScan()
-.getAttribute(BaseScannerRegionObserver.AGGREGATORS), 
QueryServicesOptions.withDefaults().getConfiguration());
 this.clientAggregators = 
context.getAggregationManager().getAggregators();
+// We must deserialize rather than clone based off of client 
aggregators because
+// upon deserialization we create the server-side aggregators instead 
of the client-side
+// aggregators. We use the Configuration directly here to avoid the 
expense of creating
+// another one.
+this.serverAggregators = 
ServerAggregators.deserialize(context.getScan()
+.getAttribute(BaseScannerRegionObserver.AGGREGATORS), 
context.getConnection().getQueryServices().getConfiguration());
 }
 
 @Override

http://git-wip-us.apache.org/repos/asf/phoenix/blob/dcebfc2d/phoenix-core/src/main/java/org/apache/phoenix/expression/aggregator/Aggregators.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/aggregator/Aggregators.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/expression/aggregator/Aggregators.java
index cf77c8e..b1dc658 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/aggregator/Aggregators.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/expression/aggregator/Aggregators.java
@@ -18,7 +18,6 @@
 package org.apache.phoenix.expression.aggregator;
 
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
-
 import org.apache.phoenix.expression.function.SingleAggregateFunction;
 import 

[46/50] [abbrv] phoenix git commit: Fail-fast iterators for EncodedColumnQualifierCellsList. Use list iterators instead of get(index) for navigating lists. Use HBase bytes utility for encoded column names

2016-11-04 Thread samarth
http://git-wip-us.apache.org/repos/asf/phoenix/blob/ede568e9/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/CloneExpressionVisitor.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/CloneExpressionVisitor.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/CloneExpressionVisitor.java
index 00ece40..15a9f74 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/CloneExpressionVisitor.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/CloneExpressionVisitor.java
@@ -26,6 +26,7 @@ import 
org.apache.phoenix.expression.ArrayConstructorExpression;
 import org.apache.phoenix.expression.CaseExpression;
 import org.apache.phoenix.expression.CoerceExpression;
 import org.apache.phoenix.expression.ComparisonExpression;
+import org.apache.phoenix.expression.ArrayColumnExpression;
 import org.apache.phoenix.expression.CorrelateVariableFieldAccessExpression;
 import org.apache.phoenix.expression.DivideExpression;
 import org.apache.phoenix.expression.Expression;
@@ -80,6 +81,11 @@ public abstract class CloneExpressionVisitor extends 
TraverseAllExpressionVisitor
 public Expression visit(KeyValueColumnExpression node) {
 return node;
 }
+
+@Override
+public Expression visit(ArrayColumnExpression node) {
+return node;
+}
 
 @Override
 public Expression visit(ProjectedColumnExpression node) {

http://git-wip-us.apache.org/repos/asf/phoenix/blob/ede568e9/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/ExpressionVisitor.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/ExpressionVisitor.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/ExpressionVisitor.java
index 31f340d..100f099 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/ExpressionVisitor.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/ExpressionVisitor.java
@@ -27,6 +27,7 @@ import 
org.apache.phoenix.expression.ArrayConstructorExpression;
 import org.apache.phoenix.expression.CaseExpression;
 import org.apache.phoenix.expression.CoerceExpression;
 import org.apache.phoenix.expression.ComparisonExpression;
+import org.apache.phoenix.expression.ArrayColumnExpression;
 import org.apache.phoenix.expression.CorrelateVariableFieldAccessExpression;
 import org.apache.phoenix.expression.DivideExpression;
 import org.apache.phoenix.expression.Expression;
@@ -113,6 +114,7 @@ public interface ExpressionVisitor<E> {
 public E visit(LiteralExpression node);
 public E visit(RowKeyColumnExpression node);
 public E visit(KeyValueColumnExpression node);
+public E visit(ArrayColumnExpression node);
 public E visit(ProjectedColumnExpression node);
 public E visit(SequenceValueExpression node);
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/ede568e9/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/StatelessTraverseAllExpressionVisitor.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/StatelessTraverseAllExpressionVisitor.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/StatelessTraverseAllExpressionVisitor.java
index 3b7067a..9e50bc4 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/StatelessTraverseAllExpressionVisitor.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/StatelessTraverseAllExpressionVisitor.java
@@ -26,9 +26,9 @@ import 
org.apache.phoenix.expression.ArrayConstructorExpression;
 import org.apache.phoenix.expression.CaseExpression;
 import org.apache.phoenix.expression.CoerceExpression;
 import org.apache.phoenix.expression.ComparisonExpression;
+import org.apache.phoenix.expression.ArrayColumnExpression;
 import org.apache.phoenix.expression.CorrelateVariableFieldAccessExpression;
 import org.apache.phoenix.expression.DivideExpression;
-import org.apache.phoenix.expression.Expression;
 import org.apache.phoenix.expression.InListExpression;
 import org.apache.phoenix.expression.IsNullExpression;
 import org.apache.phoenix.expression.KeyValueColumnExpression;
@@ -121,6 +121,11 @@ public class StatelessTraverseAllExpressionVisitor 
extends TraverseAllExpressionVisitor
 }
 
 @Override
+public E visit(ArrayColumnExpression node) {
+return null;
+}
+
+@Override
 public E visit(ProjectedColumnExpression node) {
 return null;
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/ede568e9/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/StatelessTraverseNoExpressionVisitor.java
--
di

[02/50] [abbrv] phoenix git commit: PHOENIX-3370 VIEW derived from another VIEW with WHERE on a TABLE doesn't use parent VIEW indexes

2016-11-04 Thread samarth
PHOENIX-3370 VIEW derived from another VIEW with WHERE on a TABLE doesn't use 
parent VIEW indexes


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/9b851d5c
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/9b851d5c
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/9b851d5c

Branch: refs/heads/encodecolumns2
Commit: 9b851d5c605c0fdcb8ce89ed4da09fe78fd79023
Parents: 2699265
Author: James Taylor 
Authored: Wed Oct 12 16:11:26 2016 -0700
Committer: James Taylor 
Committed: Tue Oct 25 16:41:01 2016 -0700

--
 .../phoenix/compile/QueryCompilerTest.java  | 64 +++-
 1 file changed, 63 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/9b851d5c/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java
--
diff --git 
a/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java 
b/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java
index 7697d8c..2439ac9 100644
--- 
a/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java
+++ 
b/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryCompilerTest.java
@@ -457,6 +457,29 @@ public class QueryCompilerTest extends 
BaseConnectionlessQueryTest {
 return plan.getContext().getScan();
 }
 
+private QueryPlan getQueryPlan(String query) throws SQLException {
+return getQueryPlan(query, Collections.emptyList());
+}
+
+private QueryPlan getOptimizedQueryPlan(String query) throws SQLException {
+return getOptimizedQueryPlan(query, Collections.emptyList());
+}
+
+private QueryPlan getOptimizedQueryPlan(String query, List<Object> binds) throws SQLException {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+Connection conn = DriverManager.getConnection(getUrl(), props);
+try {
+PhoenixPreparedStatement statement = 
conn.prepareStatement(query).unwrap(PhoenixPreparedStatement.class);
+for (Object bind : binds) {
+statement.setObject(1, bind);
+}
+QueryPlan plan = statement.optimizeQuery(query);
+return plan;
+} finally {
+conn.close();
+}
+}
+
 private QueryPlan getQueryPlan(String query, List<Object> binds) throws SQLException {
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
 Connection conn = DriverManager.getConnection(getUrl(), props);
@@ -2263,7 +2286,7 @@ public class QueryCompilerTest extends BaseConnectionlessQueryTest {
         try (Connection conn = DriverManager.getConnection(getUrl(), props);) {
             try {
                 conn.createStatement().execute(
-                        "CREATE TABLE t (k VARCHAR NOT NULL PRIMARY KEY, v1 VARCHAR) GUIDE_POST_WIDTH = -1");
+                        "CREATE TABLE t (k VARCHAR NOT NULL PRIMARY KEY, v1 VARCHAR) GUIDE_POSTS_WIDTH = -1");
                 fail();
             } catch (SQLException e) {
                 assertEquals("Unexpected Exception",
@@ -2443,4 +2466,43 @@ public class QueryCompilerTest extends BaseConnectionlessQueryTest {
             conn.close();
         }
     }
+
+    @Test
+    public void testIndexOnViewWithChildView() throws SQLException {
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            conn.createStatement().execute("CREATE TABLE PLATFORM_ENTITY.GLOBAL_TABLE (\n" + 
+                "ORGANIZATION_ID CHAR(15) NOT NULL,\n" + 
+                "KEY_PREFIX CHAR(3) NOT NULL,\n" + 
+                "CREATED_DATE DATE,\n" + 
+                "CREATED_BY CHAR(15),\n" + 
+                "CONSTRAINT PK PRIMARY KEY (\n" + 
+                "ORGANIZATION_ID,\n" + 
+                "KEY_PREFIX\n" + 
+                ")\n" + 
+                ") VERSIONS=1, IMMUTABLE_ROWS=true, MULTI_TENANT=true");
+            conn.createStatement().execute("CREATE VIEW PLATFORM_ENTITY.GLOBAL_VIEW  (\n" + 
+                "INT1 BIGINT NOT NULL,\n" + 
+                "DOUBLE1 DECIMAL(12, 3),\n" + 
+                "IS_BOOLEAN BOOLEAN,\n" + 
+                "TEXT1 VARCHAR,\n" + 
+                "CONSTRAINT PKVIEW PRIMARY KEY\n" + 
+                "(\n" + 
+                "INT1\n" + 
+                ")\n" + 
+                ")\n" + 
+                "AS SELECT * FROM PLATFORM_ENTITY.GLOBAL_TABLE WHERE KEY_PREFIX = '123'");
+            conn.createStatement().execute("CREATE INDEX GLOBAL_INDEX\n" + 
+                "ON PLATFORM_ENTITY.GLOBAL_VIEW (TEXT1 DESC, IN

[49/50] [abbrv] phoenix git commit: Fail-fast iterators for EncodedColumnQualifierCellsList. Use list iterators instead of get(index) for navigating lists. Use HBase bytes utility for encoded column n

2016-11-04 Thread samarth
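The subject line names the two mechanical changes. As a generic Java illustration (not Phoenix code) of the iterator half: on a linked implementation, get(i) restarts from the head on each call, so an indexed loop is quadratic overall, while the list's own iterator is linear and additionally fail-fast, surfacing structural modification mid-iteration as a ConcurrentModificationException instead of silently reading inconsistent state.

    import java.util.Iterator;
    import java.util.LinkedList;
    import java.util.List;

    public class IteratorVsIndexSketch {
        public static void main(String[] args) {
            List<String> cells = new LinkedList<>();
            cells.add("a"); cells.add("b"); cells.add("c");

            // O(n^2) overall on a linked list: each get(i) walks from the head
            for (int i = 0; i < cells.size(); i++) {
                System.out.println(cells.get(i));
            }

            // O(n) overall, and fail-fast on concurrent structural modification
            Iterator<String> it = cells.iterator();
            while (it.hasNext()) {
                System.out.println(it.next());
            }
        }
    }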
http://git-wip-us.apache.org/repos/asf/phoenix/blob/ede568e9/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
index 8e4d9aa..f5df980 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
@@ -71,6 +71,7 @@ import org.apache.phoenix.schema.PName;
 import org.apache.phoenix.schema.PNameFactory;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTable.IndexType;
+import org.apache.phoenix.schema.PTable.StorageScheme;
 import org.apache.phoenix.schema.PTableImpl;
 import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.schema.PTableType;
@@ -125,10 +126,12 @@ public class FromCompiler {
             throw new ColumnNotFoundException(schemaName, tableName, null, colName);
         }
 
+        @Override
         public PFunction resolveFunction(String functionName) throws SQLException {
             throw new FunctionNotFoundException(functionName);
         }
 
+        @Override
         public boolean hasUDFs() {
             return false;
         }
@@ -185,7 +188,7 @@ public class FromCompiler {
                 if (htable != null) Closeables.closeQuietly(htable);
             }
             tableNode = NamedTableNode.create(null, baseTable, statement.getColumnDefs());
-            return new SingleTableColumnResolver(connection, tableNode, e.getTimeStamp(), new HashMap<String, UDFParseNode>(1), false);
+            return new SingleTableColumnResolver(connection, tableNode, e.getTimeStamp(), new HashMap<String, UDFParseNode>(1), false, false);
         }
         throw e;
     }
@@ -257,7 +260,7 @@ public class FromCompiler {
             Expression sourceExpression = projector.getColumnProjector(column.getPosition()).getExpression();
             PColumnImpl projectedColumn = new PColumnImpl(column.getName(), column.getFamilyName(),
                 sourceExpression.getDataType(), sourceExpression.getMaxLength(), sourceExpression.getScale(), sourceExpression.isNullable(),
-                column.getPosition(), sourceExpression.getSortOrder(), column.getArraySize(), column.getViewConstant(), column.isViewReferenced(), column.getExpressionStr(), column.isRowTimestamp(), column.isDynamic());
+                column.getPosition(), sourceExpression.getSortOrder(), column.getArraySize(), column.getViewConstant(), column.isViewReferenced(), column.getExpressionStr(), column.isRowTimestamp(), column.isDynamic(), column.getEncodedColumnQualifier());
             projectedColumns.add(projectedColumn);
         }
         PTable t = PTableImpl.makePTable(table, projectedColumns);
@@ -332,26 +335,26 @@ public class FromCompiler {
         private final String alias;
         private final List<PSchema> schemas;
 
-        public SingleTableColumnResolver(PhoenixConnection connection, NamedTableNode table, long timeStamp, Map<String, UDFParseNode> udfParseNodes, boolean isNamespaceMapped) throws SQLException  {
-            super(connection, 0, false, udfParseNodes);
-            List<PColumnFamily> families = Lists.newArrayListWithExpectedSize(table.getDynamicColumns().size());
-            for (ColumnDef def : table.getDynamicColumns()) {
-                if (def.getColumnDefName().getFamilyName() != null) {
-                    families.add(new PColumnFamilyImpl(PNameFactory.newName(def.getColumnDefName().getFamilyName()),Collections.<PColumn>emptyList()));
-                }
+        public SingleTableColumnResolver(PhoenixConnection connection, NamedTableNode table, long timeStamp, Map<String, UDFParseNode> udfParseNodes, boolean isNamespaceMapped, boolean useEncodedColumnNames) throws SQLException {
+            super(connection, 0, false, udfParseNodes);
+            List<PColumnFamily> families = Lists.newArrayListWithExpectedSize(table.getDynamicColumns().size());
+            for (ColumnDef def : table.getDynamicColumns()) {
+                if (def.getColumnDefName().getFamilyName() != null) {
+                    families.add(new PColumnFamilyImpl(PNameFactory.newName(def.getColumnDefName().getFamilyName()),Collections.<PColumn>emptyList(), useEncodedColumnNames));
+                }
             }
             Long scn = connection.getSCN();
             String schema = table.getName().getSchemaName();
             if (connection.getSchema() != null) {
                 schema = schema != null ? schema : connection.getSchema();
             }
-            PTable theTable = new PTableImpl(connection.getTenantId(), schema, table.getName().getTableName(),
+            PTable theTable = new PTableImpl(connection.getTenantId(), schema, table.getName().getTableName(),
                 scn == null ? HConstants.LATEST_TIMESTAMP : scn, families, isNamespaceMapped);
-            theTable = 
[13/50] [abbrv] phoenix git commit: PHOENIX-3396 Valid Multi-byte strings whose total byte size is greater than the max char limit cannot be inserted into VARCHAR fields in the PK

2016-11-04 Thread samarth
PHOENIX-3396 Valid Multi-byte strings whose total byte size is greater than the max char limit cannot be inserted into VARCHAR fields in the PK


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/bb88e9f5
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/bb88e9f5
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/bb88e9f5

Branch: refs/heads/encodecolumns2
Commit: bb88e9f59f6f1549defa0f9911c46ecc28b8d63e
Parents: e2325a4
Author: James Taylor 
Authored: Thu Oct 27 20:31:42 2016 -0700
Committer: James Taylor 
Committed: Thu Oct 27 23:13:07 2016 -0700

--
 .../phoenix/end2end/ArithmeticQueryIT.java  |  11 +-
 .../apache/phoenix/end2end/UpsertSelectIT.java  |  56 +-
 .../apache/phoenix/compile/UpsertCompiler.java  |   8 +-
 .../UngroupedAggregateRegionObserver.java   | 369 +-
 .../exception/DataExceedsCapacityException.java |  14 +-
 .../phoenix/exception/SQLExceptionInfo.java |   9 +-
 .../function/ArrayConcatFunction.java   |   5 +-
 .../function/ArrayModifierFunction.java |   8 +-
 .../phoenix/index/PhoenixIndexBuilder.java  |   4 +-
 .../org/apache/phoenix/parse/ColumnDef.java |   4 +-
 .../org/apache/phoenix/schema/PTableImpl.java   |  47 +-
 .../phoenix/schema/types/PArrayDataType.java|  11 +-
 .../apache/phoenix/schema/types/PBinary.java| 340 +-
 .../phoenix/schema/types/PBinaryBase.java   |  17 +
 .../org/apache/phoenix/schema/types/PChar.java  |  15 +-
 .../apache/phoenix/schema/types/PDataType.java  |   5 +-
 .../apache/phoenix/schema/types/PDecimal.java   | 669 ++-
 .../apache/phoenix/schema/types/PVarbinary.java | 248 ---
 .../apache/phoenix/schema/types/PVarchar.java   | 268 
 .../org/apache/phoenix/util/SchemaUtil.java |  11 +-
 .../org/apache/phoenix/schema/MutationTest.java |  54 ++
 21 files changed, 1154 insertions(+), 1019 deletions(-)
--
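A hedged sketch of the failure mode in the subject (hypothetical table name; the real coverage is in MutationTest and the IT changes listed above): Phoenix VARCHAR/CHAR limits count characters, but each character below is three bytes in UTF-8, so checking the byte length against the character limit wrongly rejects a valid value.

    import java.nio.charset.StandardCharsets;
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class MultiBytePkSketch {
        public static void main(String[] args) throws Exception {
            String threeChars = "\u3042\u3042\u3042"; // 3 characters, 9 bytes in UTF-8
            System.out.println(threeChars.length());                                // 3
            System.out.println(threeChars.getBytes(StandardCharsets.UTF_8).length); // 9
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
                conn.createStatement().execute(
                    "CREATE TABLE utf8_pk (k VARCHAR(3) NOT NULL PRIMARY KEY)");
                // before the fix, this valid upsert could be rejected because the
                // 9-byte encoding was checked against the 3-character limit
                conn.createStatement().execute("UPSERT INTO utf8_pk VALUES ('" + threeChars + "')");
                conn.commit();
            }
        }
    }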


http://git-wip-us.apache.org/repos/asf/phoenix/blob/bb88e9f5/phoenix-core/src/it/java/org/apache/phoenix/end2end/ArithmeticQueryIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ArithmeticQueryIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ArithmeticQueryIT.java
index fd19d8a..6fad0f0 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ArithmeticQueryIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ArithmeticQueryIT.java
@@ -225,11 +225,16 @@ public class ArithmeticQueryIT extends ParallelStatsDisabledIT {
         assertTrue(rs.next());
         assertEquals(new BigDecimal("100.3"), rs.getBigDecimal(1));
         assertFalse(rs.next());
-        // source and target in same table, values scheme incompatible.
+        // source and target in same table, values scheme incompatible. should throw
         query = "UPSERT INTO " + source + "(pk, col4) SELECT pk, col1 from " + source;
         stmt = conn.prepareStatement(query);
-        stmt.execute();
-        conn.commit();
+        try {
+            stmt.execute();
+            conn.commit();
+            fail();
+        } catch (SQLException e) {
+            assertEquals(SQLExceptionCode.DATA_EXCEEDS_MAX_CAPACITY.getErrorCode(), e.getErrorCode());
+        }
         query = "SELECT col4 FROM " + source;
         stmt = conn.prepareStatement(query);
         rs = stmt.executeQuery();

http://git-wip-us.apache.org/repos/asf/phoenix/blob/bb88e9f5/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectIT.java
index 3561274..763f11b 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectIT.java
@@ -19,6 +19,7 @@ package org.apache.phoenix.end2end;
 
 import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
 import static org.apache.phoenix.util.PhoenixRuntime.UPSERT_BATCH_SIZE_ATTRIB;
+import static org.apache.phoenix.util.TestUtil.ATABLE_NAME;
 import static org.apache.phoenix.util.TestUtil.A_VALUE;
 import static org.apache.phoenix.util.TestUtil.B_VALUE;
 import static org.apache.phoenix.util.TestUtil.CUSTOM_ENTITY_DATA_FULL_NAME;
@@ -29,7 +30,6 @@ import static org.apache.phoenix.util.TestUtil.ROW7;
 import static org.apache.phoenix.util.TestUtil.ROW8;
 import static org.apache.phoenix.util.TestUtil.ROW9;
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
-import static org.apache.phoenix.util.TestUtil.ATABLE_NAME;

[09/50] [abbrv] phoenix git commit: PHOENIX-3267 Replace use of SELECT null with CAST(null AS ) (Eric Lomore)

2016-11-04 Thread samarth
PHOENIX-3267 Replace use of SELECT null with CAST(null AS ) (Eric Lomore)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/70979abf
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/70979abf
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/70979abf

Branch: refs/heads/encodecolumns2
Commit: 70979abf0e7e0d9b1199f435cb4d1cb1daf73d5a
Parents: d7aea49
Author: James Taylor 
Authored: Thu Oct 27 11:48:02 2016 -0700
Committer: James Taylor 
Committed: Thu Oct 27 14:00:40 2016 -0700

--
 .../src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java | 2 +-
 .../src/it/java/org/apache/phoenix/end2end/UpsertSelectIT.java   | 2 +-
 .../it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java | 2 +-
 .../org/apache/phoenix/jdbc/PhoenixResultSetMetadataTest.java| 4 ++--
 4 files changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/70979abf/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java
index 01cc2c1..c689373 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateQueryIT.java
@@ -77,7 +77,7 @@ public class AggregateQueryIT extends BaseQueryIT {
         props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 70));
         conn = DriverManager.getConnection(getUrl(), props);
         conn.setAutoCommit(true);
-        conn.createStatement().execute("UPSERT INTO atable(organization_id,entity_id,a_integer) SELECT organization_id, entity_id, null FROM atable");
+        conn.createStatement().execute("UPSERT INTO atable(organization_id,entity_id,a_integer) SELECT organization_id, entity_id, CAST(null AS integer) FROM atable");
 
         props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 90));
         conn = DriverManager.getConnection(getUrl(), props);

http://git-wip-us.apache.org/repos/asf/phoenix/blob/70979abf/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectIT.java
index 8c9c8eb..3561274 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectIT.java
@@ -681,7 +681,7 @@ public class UpsertSelectIT extends BaseClientManagedTimeIT {
 
         props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(ts + 30));
         conn = DriverManager.getConnection(getUrl(), props);
-        conn.createStatement().execute("upsert into phoenix_test (id, ts) select id, null from phoenix_test where id <= 'bbb' limit 1");
+        conn.createStatement().execute("upsert into phoenix_test (id, ts) select id, CAST(null AS timestamp) from phoenix_test where id <= 'bbb' limit 1");
         conn.commit();
         conn.close();
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/70979abf/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
index e319023..499f58c 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
@@ -155,7 +155,7 @@ public class MutableIndexIT extends ParallelStatsDisabledIT {
         assertFalse(rs.next());
 
         stmt = conn.prepareStatement("UPSERT INTO " + fullTableName
-                + "(varchar_pk, char_pk, int_pk, long_pk , decimal_pk, long_col2) SELECT varchar_pk, char_pk, int_pk, long_pk , decimal_pk, null FROM "
+                + "(varchar_pk, char_pk, int_pk, long_pk , decimal_pk, long_col2) SELECT varchar_pk, char_pk, int_pk, long_pk , decimal_pk, CAST(null AS BIGINT) FROM "
                 + fullTableName + " WHERE long_col2=?");
         stmt.setLong(1,3L);
         assertEquals(1,stmt.executeUpdate());

http://git-wip-us.apache.org/repos/asf/phoenix/blob/70979abf/phoenix-core/src/test/java/org/apache/phoenix/jdbc/PhoenixResultSetMetadataTest.java
--
diff --git a/phoenix-

[27/50] [abbrv] phoenix git commit: PHOENIX-3432 Upgrade Phoenix 4.8.0 to 4.9.0 fails because of illegal characters in snapshot name

2016-11-04 Thread samarth
PHOENIX-3432 Upgrade Phoenix 4.8.0 to 4.9.0 fails because of illegal characters in snapshot name


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/1ed90b6a
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/1ed90b6a
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/1ed90b6a

Branch: refs/heads/encodecolumns2
Commit: 1ed90b6a2b48013923f5a84f7b4d7c759825e82d
Parents: c5fed78
Author: Samarth 
Authored: Wed Nov 2 10:49:18 2016 -0700
Committer: Samarth 
Committed: Wed Nov 2 10:49:49 2016 -0700

--
 .../phoenix/query/ConnectionQueryServicesImpl.java   | 11 +++
 .../main/java/org/apache/phoenix/util/UpgradeUtil.java   |  5 +++--
 2 files changed, 10 insertions(+), 6 deletions(-)
--
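As an illustration of the underlying problem (the diff below drops the formatted-date snapshot name in favor of a fixed, timestamp-based SYSTEM.CATALOG name): a timezone-suffixed SimpleDateFormat emits characters such as '+' that HBase does not accept in snapshot names.

    import java.text.SimpleDateFormat;
    import java.util.Date;

    public class SnapshotNameSketch {
        public static void main(String[] args) {
            // the 'Z' pattern letter renders the RFC 822 zone offset, e.g. "+0000";
            // '+' is not a legal character in an HBase snapshot name
            SimpleDateFormat formatter = new SimpleDateFormat("yyyyMMddHHmmssZ");
            String suffix = formatter.format(new Date());
            // e.g. 20161102104918+0000 -- any snapshot name built from it is illegal
            System.out.println(suffix);
        }
    }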


http://git-wip-us.apache.org/repos/asf/phoenix/blob/1ed90b6a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index ff4e404..b1b7bab 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -33,7 +33,7 @@ import static org.apache.phoenix.query.QueryServicesOptions.DEFAULT_RENEW_LEASE_
 import static org.apache.phoenix.query.QueryServicesOptions.DEFAULT_RENEW_LEASE_THREAD_POOL_SIZE;
 import static org.apache.phoenix.query.QueryServicesOptions.DEFAULT_RENEW_LEASE_THRESHOLD_MILLISECONDS;
 import static org.apache.phoenix.query.QueryServicesOptions.DEFAULT_RUN_RENEW_LEASE_FREQUENCY_INTERVAL_MILLISECONDS;
-import static org.apache.phoenix.util.UpgradeUtil.getUpgradeSnapshotName;
+import static org.apache.phoenix.util.UpgradeUtil.getSysCatalogSnapshotName;
 import static org.apache.phoenix.util.UpgradeUtil.upgradeTo4_5_0;
 
 import java.io.IOException;
@@ -2493,6 +2493,7 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
         boolean acquiredMutexLock = false;
         byte[] mutexRowKey = SchemaUtil.getTableKey(null, PhoenixDatabaseMetaData.SYSTEM_CATALOG_SCHEMA,
                 PhoenixDatabaseMetaData.SYSTEM_CATALOG_TABLE);
+        boolean snapshotCreated = false;
         try {
             if (!ConnectionQueryServicesImpl.this.upgradeRequired.get()) {
                 throw new UpgradeNotRequiredException();
@@ -2516,9 +2517,9 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
                 sysCatalogTableName = e.getTable().getPhysicalName().getString();
                 if (currentServerSideTableTimeStamp < MIN_SYSTEM_TABLE_TIMESTAMP
                         && (acquiredMutexLock = acquireUpgradeMutex(currentServerSideTableTimeStamp, mutexRowKey))) {
-                    snapshotName = getUpgradeSnapshotName(sysCatalogTableName, currentServerSideTableTimeStamp);
+                    snapshotName = getSysCatalogSnapshotName(currentServerSideTableTimeStamp);
                     createSnapshot(snapshotName, sysCatalogTableName);
+                    snapshotCreated = true;
                 }
                 String columnsToAdd = "";
                 // This will occur if we have an older SYSTEM.CATALOG and we need to update it to
@@ -2810,7 +2811,9 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
             }
         } finally {
             try {
-                restoreFromSnapshot(sysCatalogTableName, snapshotName, success);
+                if (snapshotCreated) {
+                    restoreFromSnapshot(sysCatalogTableName, snapshotName, success);
+                }
             } catch (SQLException e) {
                 if (toThrow != null) {
                     toThrow.setNextException(e);

http://git-wip-us.apache.org/repos/asf/phoenix/blob/1ed90b6a/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java 
b/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
index df283c5..2b04ac1 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
@@ -1899,8 +1899,9 @@ public class UpgradeUtil {
         }
     }
 
-    public static final String getUpgradeSnapshotName(String tableString, long currentSystemTableTimestamp) {
-        Format formatter = new SimpleDateFormat("MMddHHmmssZ");
+    public static 
[48/50] [abbrv] phoenix git commit: Fail-fast iterators for EncodedColumnQualifierCellsList. Use list iterators instead of get(index) for navigating lists. Use HBase bytes utility for encoded column n

2016-11-04 Thread samarth
http://git-wip-us.apache.org/repos/asf/phoenix/blob/ede568e9/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 9a7b9e3..93a87ea 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -27,6 +27,7 @@ import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.CLASS_NAME_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.COLUMN_COUNT_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.COLUMN_DEF_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.COLUMN_NAME_INDEX;
+import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.COLUMN_QUALIFIER_COUNTER_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.COLUMN_SIZE_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.DATA_TABLE_NAME_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.DATA_TYPE_BYTES;
@@ -34,6 +35,7 @@ import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.DECIMAL_DIGITS_BYT
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.DEFAULT_COLUMN_FAMILY_NAME_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.DEFAULT_VALUE_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.DISABLE_WAL_BYTES;
+import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.ENCODED_COLUMN_QUALIFIER_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.FAMILY_NAME_INDEX;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.IMMUTABLE_ROWS_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.INDEX_DISABLE_TIMESTAMP_BYTES;
@@ -57,6 +59,7 @@ import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.RETURN_TYPE_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SALT_BUCKETS_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SCHEMA_NAME_INDEX;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SORT_ORDER_BYTES;
+import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.STORAGE_SCHEME_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.STORE_NULLS_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TABLE_FAMILY_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TABLE_NAME_INDEX;
@@ -74,11 +77,11 @@ import static org.apache.phoenix.query.QueryConstants.DIVERGED_VIEW_BASE_COLUMN_
 import static org.apache.phoenix.query.QueryConstants.SEPARATOR_BYTE_ARRAY;
 import static org.apache.phoenix.schema.PTableType.INDEX;
 import static org.apache.phoenix.util.ByteUtil.EMPTY_BYTE_ARRAY;
+import static org.apache.phoenix.util.EncodedColumnsUtil.getEncodedColumnQualifier;
 import static org.apache.phoenix.util.SchemaUtil.getVarCharLength;
 import static org.apache.phoenix.util.SchemaUtil.getVarChars;
 
 import java.io.IOException;
-import java.sql.DriverManager;
 import java.sql.ResultSetMetaData;
 import java.sql.SQLException;
 import java.sql.Statement;
@@ -150,14 +153,12 @@ import org.apache.phoenix.coprocessor.generated.MetaDataProtos.GetVersionRequest
 import org.apache.phoenix.coprocessor.generated.MetaDataProtos.GetVersionResponse;
 import org.apache.phoenix.coprocessor.generated.MetaDataProtos.MetaDataResponse;
 import org.apache.phoenix.coprocessor.generated.MetaDataProtos.UpdateIndexStateRequest;
-import org.apache.phoenix.expression.ColumnExpression;
 import org.apache.phoenix.expression.Expression;
 import org.apache.phoenix.expression.KeyValueColumnExpression;
 import org.apache.phoenix.expression.LiteralExpression;
 import org.apache.phoenix.expression.ProjectedColumnExpression;
 import org.apache.phoenix.expression.RowKeyColumnExpression;
 import org.apache.phoenix.expression.visitor.StatelessTraverseAllExpressionVisitor;
-import org.apache.phoenix.hbase.index.covered.update.ColumnReference;
 import org.apache.phoenix.hbase.index.util.GenericKeyValueBuilder;
 import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 import org.apache.phoenix.hbase.index.util.KeyValueBuilder;
@@ -190,8 +191,10 @@ import org.apache.phoenix.schema.PMetaDataEntity;
 import org.apache.phoenix.schema.PName;
 import org.apache.phoenix.schema.PNameFactory;
 import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTable.EncodedCQCounter;
 import org.apache.phoenix.schema.PTable.IndexType;
 import org.apache.phoenix.schema.PTable.LinkType;
+import org.apache.phoenix.schema.PTable.StorageScheme;
 import org.apache.phoenix.schema.PTable.ViewType;
 import org.apache.phoenix.schema.PTableImpl;
 import org.apache.phoe

[29/50] [abbrv] phoenix git commit: PHOENIX-3416 Memory leak in PhoenixStorageHandler

2016-11-04 Thread samarth
PHOENIX-3416 Memory leak in PhoenixStorageHandler

Signed-off-by: Sergey Soldatov 


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/46d4bb4c
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/46d4bb4c
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/46d4bb4c

Branch: refs/heads/encodecolumns2
Commit: 46d4bb4ca0a9f90316c3f36d397b36405d8766e7
Parents: 3bb1a2b
Author: Jeongdae Kim 
Authored: Thu Oct 27 20:50:53 2016 +0900
Committer: Sergey Soldatov 
Committed: Wed Nov 2 12:58:32 2016 -0700

--
 .../phoenix/hive/PhoenixStorageHandler.java | 14 +---
 .../hive/mapreduce/PhoenixInputFormat.java  | 37 +
 .../hive/ppd/PhoenixPredicateDecomposer.java| 15 +++-
 .../ppd/PhoenixPredicateDecomposerManager.java  | 83 
 4 files changed, 33 insertions(+), 116 deletions(-)
--
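The diffstat shows the per-session PhoenixPredicateDecomposerManager being deleted outright. As a generic illustration of the leak pattern removed here (not the actual Phoenix code): a static registry keyed by session that is never evicted pins its values for the life of the JVM, whereas constructing the cheap helper per call leaves nothing behind once the caller drops its reference.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class RegistryLeakSketch {
        // anti-pattern: entries accumulate per session key and are never removed
        private static final Map<String, List<String>> CACHE = new ConcurrentHashMap<>();

        static List<String> leaky(String sessionKey) {
            List<String> value = CACHE.get(sessionKey);
            if (value == null) {
                value = new ArrayList<>();
                CACHE.put(sessionKey, value);
            }
            return value;
        }

        // fix pattern: create the helper per call; it is collectible afterwards
        static List<String> perCall() {
            return new ArrayList<>();
        }
    }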


http://git-wip-us.apache.org/repos/asf/phoenix/blob/46d4bb4c/phoenix-hive/src/main/java/org/apache/phoenix/hive/PhoenixStorageHandler.java
--
diff --git 
a/phoenix-hive/src/main/java/org/apache/phoenix/hive/PhoenixStorageHandler.java 
b/phoenix-hive/src/main/java/org/apache/phoenix/hive/PhoenixStorageHandler.java
index e8b5b19..2bc8ace 100644
--- 
a/phoenix-hive/src/main/java/org/apache/phoenix/hive/PhoenixStorageHandler.java
+++ 
b/phoenix-hive/src/main/java/org/apache/phoenix/hive/PhoenixStorageHandler.java
@@ -40,8 +40,6 @@ import org.apache.phoenix.hive.constants.PhoenixStorageHandlerConstants;
 import org.apache.phoenix.hive.mapreduce.PhoenixInputFormat;
 import org.apache.phoenix.hive.mapreduce.PhoenixOutputFormat;
 import org.apache.phoenix.hive.ppd.PhoenixPredicateDecomposer;
-import org.apache.phoenix.hive.ppd.PhoenixPredicateDecomposerManager;
-import org.apache.phoenix.hive.util.PhoenixStorageHandlerUtil;
 import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
 
 import java.util.List;
@@ -176,19 +174,9 @@ public class PhoenixStorageHandler extends DefaultStorageHandler implements
     public DecomposedPredicate decomposePredicate(JobConf jobConf, Deserializer deserializer,
                                                   ExprNodeDesc predicate) {
         PhoenixSerDe phoenixSerDe = (PhoenixSerDe) deserializer;
-        String tableName = phoenixSerDe.getTableProperties().getProperty
-                (PhoenixStorageHandlerConstants.PHOENIX_TABLE_NAME);
-        String predicateKey = PhoenixStorageHandlerUtil.getTableKeyOfSession(jobConf, tableName);
-
-        if (LOG.isDebugEnabled()) {
-            LOG.debug("Decomposing predicate with predicateKey : " + predicateKey);
-        }
-
         List<String> columnNameList = phoenixSerDe.getSerdeParams().getColumnNames();
-        PhoenixPredicateDecomposer predicateDecomposer = PhoenixPredicateDecomposerManager
-                .createPredicateDecomposer(predicateKey, columnNameList);
 
-        return predicateDecomposer.decomposePredicate(predicate);
+        return PhoenixPredicateDecomposer.create(columnNameList).decomposePredicate(predicate);
     }
 
     @Override

http://git-wip-us.apache.org/repos/asf/phoenix/blob/46d4bb4c/phoenix-hive/src/main/java/org/apache/phoenix/hive/mapreduce/PhoenixInputFormat.java
--
diff --git 
a/phoenix-hive/src/main/java/org/apache/phoenix/hive/mapreduce/PhoenixInputFormat.java
 
b/phoenix-hive/src/main/java/org/apache/phoenix/hive/mapreduce/PhoenixInputFormat.java
index 0944bb7..e3d0212 100644
--- 
a/phoenix-hive/src/main/java/org/apache/phoenix/hive/mapreduce/PhoenixInputFormat.java
+++ 
b/phoenix-hive/src/main/java/org/apache/phoenix/hive/mapreduce/PhoenixInputFormat.java
@@ -32,15 +32,14 @@ import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.hbase.util.RegionSizeCalculator;
 import org.apache.hadoop.hive.conf.HiveConf;
+import org.apache.hadoop.hive.ql.exec.Utilities;
+import org.apache.hadoop.hive.ql.plan.ExprNodeGenericFuncDesc;
 import org.apache.hadoop.hive.ql.plan.TableScanDesc;
+import org.apache.hadoop.hive.serde.serdeConstants;
 import org.apache.hadoop.hive.serde2.ColumnProjectionUtils;
 import org.apache.hadoop.hive.shims.ShimLoader;
 import org.apache.hadoop.io.WritableComparable;
-import org.apache.hadoop.mapred.InputFormat;
-import org.apache.hadoop.mapred.InputSplit;
-import org.apache.hadoop.mapred.JobConf;
-import org.apache.hadoop.mapred.RecordReader;
-import org.apache.hadoop.mapred.Reporter;
+import org.apache.hadoop.mapred.*;
 import org.apache.hadoop.mapreduce.Job;
 import org.apache.hadoop.mapreduce.lib.db.DBWritable;
 import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
@@ -48,7 +47,6 @@ import org.apache.phoenix.compile.QueryPlan;
 import org.apa

[40/50] [abbrv] phoenix git commit: PHOENIX-3449 Ignore hanging IndexExtendedIT tests until they can be investigated

2016-11-04 Thread samarth
PHOENIX-3449 Ignore hanging IndexExtendedIT tests until they can be investigated


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/83ed28f4
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/83ed28f4
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/83ed28f4

Branch: refs/heads/encodecolumns2
Commit: 83ed28f4e526171f1906c99e4fbc184c0b2e7569
Parents: d737ed3
Author: James Taylor 
Authored: Thu Nov 3 19:05:45 2016 -0700
Committer: James Taylor 
Committed: Thu Nov 3 19:05:45 2016 -0700

--
 .../apache/phoenix/end2end/IndexExtendedIT.java | 20 ++--
 1 file changed, 18 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/83ed28f4/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
index 01b3012..161dcb8 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
@@ -44,6 +44,7 @@ import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.mapreduce.index.IndexTool;
+import org.apache.phoenix.query.BaseTest;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.util.ByteUtil;
@@ -52,7 +53,9 @@ import org.apache.phoenix.util.QueryUtil;
 import org.apache.phoenix.util.ReadOnlyProps;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.TestUtil;
+import org.junit.AfterClass;
 import org.junit.BeforeClass;
+import org.junit.Ignore;
 import org.junit.Test;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
@@ -65,13 +68,18 @@ import com.google.common.collect.Maps;
  * Tests for the {@link IndexTool}
  */
 @RunWith(Parameterized.class)
-public class IndexExtendedIT extends BaseOwnClusterIT {
+public class IndexExtendedIT extends BaseTest {
 private final boolean localIndex;
 private final boolean transactional;
 private final boolean directApi;
 private final String tableDDLOptions;
 private final boolean mutable;
 
+@AfterClass
+public static void doTeardown() throws Exception {
+tearDownMiniCluster();
+}
+
    public IndexExtendedIT(boolean transactional, boolean mutable, boolean localIndex, boolean directApi) {
 this.localIndex = localIndex;
 this.transactional = transactional;
@@ -108,7 +116,7 @@ public class IndexExtendedIT extends BaseOwnClusterIT {
                  { false, false, false, false }, { false, false, false, true }, { false, false, true, false }, { false, false, true, true }, 
                  { false, true, false, false }, { false, true, false, true }, { false, true, true, false }, { false, true, true, true }, 
                  { true, false, false, false }, { true, false, false, true }, { true, false, true, false }, { true, false, true, true }, 
-                 { true, true, false, false }, { true, true, false, true }, { true, true, true, false }, { true, true, true, true }
+                 { true, true, false, false }, { true, true, false, true }, { true, true, true, false }, { true, true, true, true } 
         });
     }
 
@@ -122,6 +130,9 @@ public class IndexExtendedIT extends BaseOwnClusterIT {
 if (!mutable || transactional) {
 return;
 }
+if (localIndex) { // FIXME: remove once this test works for local 
indexes
+return;
+}
 String schemaName = generateUniqueName();
 String dataTableName = generateUniqueName();
 String dataTableFullName = SchemaUtil.getTableName(schemaName, 
dataTableName);
@@ -194,6 +205,9 @@ public class IndexExtendedIT extends BaseOwnClusterIT {
 
     @Test
     public void testSecondaryIndex() throws Exception {
+        if (localIndex) { // FIXME: remove once this test works for local indexes
+            return;
+        }
         String schemaName = generateUniqueName();
         String dataTableName = generateUniqueName();
         String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
@@ -396,6 +410,7 @@ public class IndexExtendedIT extends BaseOwnClusterIT {
 }
 
 // Moved from LocalIndexIT because it was causing parallel runs to hang
+@Ignore
 @Test
 public void testLocalIndexScanAfterRegionSplit() throws Exception {
 // This test just needs be run once
@@ -497,6 +512,7 @@ public class IndexExtend

[24/50] [abbrv] phoenix git commit: Set version to 4.9.0-HBase-0.98 for release

2016-11-04 Thread samarth
Set version to 4.9.0-HBase-0.98 for release


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/00fc6f67
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/00fc6f67
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/00fc6f67

Branch: refs/heads/encodecolumns2
Commit: 00fc6f6769a3584a4dcbf46df9ae572f9a248f22
Parents: e63f6d6
Author: Mujtaba 
Authored: Mon Oct 31 11:46:12 2016 -0700
Committer: Mujtaba 
Committed: Mon Oct 31 11:46:12 2016 -0700

--
 dev/make_rc.sh | 2 +-
 phoenix-assembly/pom.xml   | 2 +-
 phoenix-client/pom.xml | 2 +-
 phoenix-core/pom.xml   | 2 +-
 phoenix-flume/pom.xml  | 2 +-
 phoenix-hive/pom.xml   | 2 +-
 phoenix-pherf/pom.xml  | 2 +-
 phoenix-pig/pom.xml| 2 +-
 phoenix-queryserver-client/pom.xml | 2 +-
 phoenix-queryserver/pom.xml| 2 +-
 phoenix-server/pom.xml | 2 +-
 phoenix-spark/pom.xml  | 2 +-
 phoenix-tracing-webapp/pom.xml | 2 +-
 pom.xml| 2 +-
 14 files changed, 14 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/00fc6f67/dev/make_rc.sh
--
diff --git a/dev/make_rc.sh b/dev/make_rc.sh
index 705ad04..4cc758f 100755
--- a/dev/make_rc.sh
+++ b/dev/make_rc.sh
@@ -73,7 +73,7 @@ mvn clean apache-rat:check package -DskipTests -Dcheckstyle.skip=true -q;
 rm -rf $(find . -type d -name archive-tmp);
 
 # Copy all phoenix-*.jars to release dir
-phx_jars=$(find -iname phoenix-*.jar)
+phx_jars=$(find -iwholename "./*/target/phoenix-*.jar")
 cp $phx_jars $DIR_REL_BIN_PATH;
 
 # Copy bin

http://git-wip-us.apache.org/repos/asf/phoenix/blob/00fc6f67/phoenix-assembly/pom.xml
--
diff --git a/phoenix-assembly/pom.xml b/phoenix-assembly/pom.xml
index 89e188c..8ff1618 100644
--- a/phoenix-assembly/pom.xml
+++ b/phoenix-assembly/pom.xml
@@ -27,7 +27,7 @@
   
 org.apache.phoenix
 phoenix
-4.9.0-HBase-0.98-SNAPSHOT
+4.9.0-HBase-0.98
   
   phoenix-assembly
   Phoenix Assembly

http://git-wip-us.apache.org/repos/asf/phoenix/blob/00fc6f67/phoenix-client/pom.xml
--
diff --git a/phoenix-client/pom.xml b/phoenix-client/pom.xml
index a78fa11..2c30342 100644
--- a/phoenix-client/pom.xml
+++ b/phoenix-client/pom.xml
@@ -27,7 +27,7 @@
   
 org.apache.phoenix
 phoenix
-4.9.0-HBase-0.98-SNAPSHOT
+4.9.0-HBase-0.98
   
   phoenix-client
   Phoenix Client

http://git-wip-us.apache.org/repos/asf/phoenix/blob/00fc6f67/phoenix-core/pom.xml
--
diff --git a/phoenix-core/pom.xml b/phoenix-core/pom.xml
index b01787c..ea3f316 100644
--- a/phoenix-core/pom.xml
+++ b/phoenix-core/pom.xml
@@ -4,7 +4,7 @@
   
 org.apache.phoenix
 phoenix
-4.9.0-HBase-0.98-SNAPSHOT
+4.9.0-HBase-0.98
   
   phoenix-core
   Phoenix Core

http://git-wip-us.apache.org/repos/asf/phoenix/blob/00fc6f67/phoenix-flume/pom.xml
--
diff --git a/phoenix-flume/pom.xml b/phoenix-flume/pom.xml
index e99c460..236e06a 100644
--- a/phoenix-flume/pom.xml
+++ b/phoenix-flume/pom.xml
@@ -26,7 +26,7 @@
   
 org.apache.phoenix
 phoenix
-4.9.0-HBase-0.98-SNAPSHOT
+4.9.0-HBase-0.98
   
   phoenix-flume
   Phoenix - Flume

http://git-wip-us.apache.org/repos/asf/phoenix/blob/00fc6f67/phoenix-hive/pom.xml
--
diff --git a/phoenix-hive/pom.xml b/phoenix-hive/pom.xml
index d1c47ff..250db49 100644
--- a/phoenix-hive/pom.xml
+++ b/phoenix-hive/pom.xml
@@ -27,7 +27,7 @@
   
 org.apache.phoenix
 phoenix
-4.9.0-HBase-0.98-SNAPSHOT
+4.9.0-HBase-0.98
   
   phoenix-hive
   Phoenix - Hive

http://git-wip-us.apache.org/repos/asf/phoenix/blob/00fc6f67/phoenix-pherf/pom.xml
--
diff --git a/phoenix-pherf/pom.xml b/phoenix-pherf/pom.xml
index 407ee48..bf74445 100644
--- a/phoenix-pherf/pom.xml
+++ b/phoenix-pherf/pom.xml
@@ -15,7 +15,7 @@

org.apache.phoenix
phoenix
-   4.9.0-HBase-0.98-SNAPSHOT
+   4.9.0-HBase-0.98

 
phoenix-pherf

http://git-wip-us.apache.org/repos/asf/phoenix/blob/00fc6f67/phoenix-pig/pom.xml
--
diff --git a/phoenix-pig/pom.xml b/phoenix-pig/pom.xml
index 16f5b6f..6292d81 100644
--- a/phoenix-pig/pom.xml
+++ b/phoenix-pig/pom.xml
@@ -26,7 +26,7 @@
   
 

[33/50] [abbrv] phoenix git commit: PHOENIX-3423 PhoenixObjectInspector doesn't have information on length of the column.

2016-11-04 Thread samarth
PHOENIX-3423 PhoenixObjectInspector doesn't have information on length of the column.

Signed-off-by: Sergey Soldatov 


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/c1c78b2e
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/c1c78b2e
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/c1c78b2e

Branch: refs/heads/encodecolumns2
Commit: c1c78b2e41ced31017f978aa3fe356aaf7d42b7d
Parents: a225f5f
Author: Jeongdae Kim 
Authored: Mon Oct 31 12:36:00 2016 +0900
Committer: Sergey Soldatov 
Committed: Wed Nov 2 12:59:10 2016 -0700

--
 .../hive/objectinspector/PhoenixCharObjectInspector.java  | 7 ++-
 .../hive/objectinspector/PhoenixObjectInspectorFactory.java   | 2 +-
 2 files changed, 7 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/c1c78b2e/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixCharObjectInspector.java
--
diff --git 
a/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixCharObjectInspector.java
 
b/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixCharObjectInspector.java
index 8d6aa8c..17222a2 100644
--- 
a/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixCharObjectInspector.java
+++ 
b/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixCharObjectInspector.java
@@ -20,6 +20,7 @@ package org.apache.phoenix.hive.objectinspector;
 import org.apache.hadoop.hive.common.type.HiveChar;
 import org.apache.hadoop.hive.serde2.io.HiveCharWritable;
 import org.apache.hadoop.hive.serde2.objectinspector.primitive.HiveCharObjectInspector;
+import org.apache.hadoop.hive.serde2.typeinfo.PrimitiveTypeInfo;
 import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory;
 
 /**
@@ -29,7 +30,11 @@ public class PhoenixCharObjectInspector extends AbstractPhoenixObjectInspector<HiveCharWritable>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/c1c78b2e/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixObjectInspectorFactory.java
--
diff --git 
a/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixObjectInspectorFactory.java
 
b/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixObjectInspectorFactory.java
index 22be0fc..3a19ea7 100644
--- 
a/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixObjectInspectorFactory.java
+++ 
b/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixObjectInspectorFactory.java
@@ -102,7 +102,7 @@ public class PhoenixObjectInspectorFactory {
                         serdeParams.getEscapeChar());
                 break;
             case CHAR:
-                oi = new PhoenixCharObjectInspector();
+                oi = new PhoenixCharObjectInspector((PrimitiveTypeInfo)type);
                 break;
             case DATE:
                 oi = new PhoenixDateObjectInspector();



[06/50] [abbrv] phoenix git commit: PHOENIX-3414 Validate DEFAULT when used in ALTER statement

2016-11-04 Thread samarth
PHOENIX-3414 Validate DEFAULT when used in ALTER statement


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/7f5d79ad
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/7f5d79ad
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/7f5d79ad

Branch: refs/heads/encodecolumns2
Commit: 7f5d79adef4e6733ac29f7ed60261383ade0c6ff
Parents: 5ea0921
Author: James Taylor 
Authored: Wed Oct 26 18:35:12 2016 -0700
Committer: James Taylor 
Committed: Wed Oct 26 18:49:29 2016 -0700

--
 .../phoenix/end2end/DefaultColumnValueIT.java   |  37 ++-
 .../phoenix/compile/CreateTableCompiler.java|  60 +--
 .../org/apache/phoenix/parse/ColumnDef.java |  65 
 .../apache/phoenix/schema/MetaDataClient.java   |   4 +
 .../phoenix/compile/QueryCompilerTest.java  | 102 ++-
 5 files changed, 210 insertions(+), 58 deletions(-)
--
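A hedged sketch of what the added validation covers (hypothetical table and column names): the DEFAULT expression in ALTER TABLE ... ADD is now validated like its CREATE TABLE counterpart, so a default that cannot be coerced to the column's type should fail at DDL time.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class AlterDefaultSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
                conn.createStatement().execute(
                    "CREATE TABLE t_default (k VARCHAR NOT NULL PRIMARY KEY)");
                // valid: the literal is coercible to INTEGER
                conn.createStatement().execute("ALTER TABLE t_default ADD c1 INTEGER DEFAULT 5");
                try {
                    // expected to be rejected by the new validation: 'foo' is not an INTEGER
                    conn.createStatement().execute("ALTER TABLE t_default ADD c2 INTEGER DEFAULT 'foo'");
                } catch (SQLException expected) {
                    System.out.println("rejected as expected: " + expected.getErrorCode());
                }
            }
        }
    }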


http://git-wip-us.apache.org/repos/asf/phoenix/blob/7f5d79ad/phoenix-core/src/it/java/org/apache/phoenix/end2end/DefaultColumnValueIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DefaultColumnValueIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DefaultColumnValueIT.java
index ea9df50..783dd75 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DefaultColumnValueIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DefaultColumnValueIT.java
@@ -57,12 +57,12 @@ public class DefaultColumnValueIT extends ParallelStatsDisabledIT {
                 "pk2 INTEGER NOT NULL, " +
                 "pk3 INTEGER NOT NULL DEFAULT 10, " +
                 "test1 INTEGER, " +
-                "test2 INTEGER DEFAULT 5, " +
-                "test3 INTEGER, " +
                 "CONSTRAINT NAME_PK PRIMARY KEY (pk1, pk2, pk3))";
 
         Connection conn = DriverManager.getConnection(getUrl());
         conn.createStatement().execute(ddl);
+        conn.createStatement().execute("ALTER TABLE " + sharedTable1 + 
+                " ADD test2 INTEGER DEFAULT 5, test3 INTEGER");
 
         String dml = "UPSERT INTO " + sharedTable1 + " VALUES (1, 2)";
         conn.createStatement().execute(dml);
@@ -100,6 +100,39 @@ public class DefaultColumnValueIT extends ParallelStatsDisabledIT {
     }
 
     @Test
+    public void testDefaultColumnValueOnView() throws Exception {
+        String ddl = "CREATE TABLE IF NOT EXISTS " + sharedTable1 + " (" +
+                "pk1 INTEGER NOT NULL, " +
+                "pk2 INTEGER NOT NULL, " +
+                "pk3 INTEGER NOT NULL DEFAULT 10, " +
+                "test1 INTEGER, " +
+                "test2 INTEGER DEFAULT 5, " +
+                "test3 INTEGER, " +
+                "CONSTRAINT NAME_PK PRIMARY KEY (pk1, pk2, pk3))";
+
+        Connection conn = DriverManager.getConnection(getUrl());
+        conn.createStatement().execute(ddl);
+        conn.createStatement().execute("CREATE VIEW " + sharedTable2 + 
+                "(pk4 INTEGER NOT NULL DEFAULT 20 PRIMARY KEY, test4 VARCHAR DEFAULT 'foo') " +
+                "AS SELECT * FROM " + sharedTable1 + " WHERE pk1 = 1");
+
+        String dml = "UPSERT INTO " + sharedTable2 + "(pk2) VALUES (2)";
+        conn.createStatement().execute(dml);
+        conn.commit();
+
+        ResultSet rs = conn.createStatement()
+                .executeQuery("SELECT pk1,pk2,pk3,pk4,test2,test4 FROM " + sharedTable2);
+        assertTrue(rs.next());
+        assertEquals(1, rs.getInt(1));
+        assertEquals(2, rs.getInt(2));
+        assertEquals(10, rs.getInt(3));
+        assertEquals(20, rs.getInt(4));
+        assertEquals(5, rs.getInt(5));
+        assertEquals("foo", rs.getString(6));
+        assertFalse(rs.next());
+    }
+
+    @Test
     public void testDefaultColumnValueProjected() throws Exception {
         String ddl = "CREATE TABLE IF NOT EXISTS " + sharedTable1 + " (" +
                 "pk1 INTEGER NOT NULL, " +

http://git-wip-us.apache.org/repos/asf/phoenix/blob/7f5d79ad/phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java
index 3cabfbb..07df105 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java
@@ -54,8 +54,6 @@ import org.apache.phoenix.parse.TableName;
 import org.apache.phoenix.query.DelegateConnectionQueryServices;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.schema.ColumnRef;
-import org.apach

[25/50] [abbrv] phoenix git commit: PHOENIX-3426 Fix the broken QueryServerBasicsIT

2016-11-04 Thread samarth
PHOENIX-3426 Fix the broken QueryServerBasicsIT

For integration tests with MiniHBaseCluster, we have to
deal with multiple versions of protobuf on the classpath.
As such, it's easier to use the shaded artifact from Avatica
instead of re-shading that in Phoenix and trying to keep
the Phoenix classes off the test classpath.


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/d45feaed
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/d45feaed
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/d45feaed

Branch: refs/heads/encodecolumns2
Commit: d45feaedbe3611cbc38e8a053f5560f391bcd2b8
Parents: 00fc6f6
Author: Josh Elser 
Authored: Mon Oct 31 18:38:45 2016 -0400
Committer: Josh Elser 
Committed: Mon Oct 31 18:50:26 2016 -0400

--
 phoenix-queryserver/pom.xml | 33 ++---
 pom.xml |  5 +
 2 files changed, 15 insertions(+), 23 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/d45feaed/phoenix-queryserver/pom.xml
--
diff --git a/phoenix-queryserver/pom.xml b/phoenix-queryserver/pom.xml
index 83f7ee2..81ee77d 100644
--- a/phoenix-queryserver/pom.xml
+++ b/phoenix-queryserver/pom.xml
@@ -36,7 +36,6 @@
   
 ${project.basedir}/..
 org.apache.phoenix.shaded
-3.1.0
   
 
   
@@ -87,11 +86,6 @@
   org.apache.calcite.avatica:*
   org.eclipse.jetty:*
   javax.servlet:*
-  org.apache.httpcomponents:*
-  commons-codec:*
-  commons-logging:*
-  com.google.protobuf:*
-  com.fasterxml.jackson.core:*
 
   
   
@@ -112,22 +106,6 @@
   org.eclipse.jetty
   
${shaded.package}.org.eclipse.jetty
 
-
-  com.google.protobuf
-  
${shaded.package}.com.google.protobuf
-
-
-  com.fasterxml.jackson
-  
${shaded.package}.com.fasterxml.jackson
-
-
-  org.apache.commons
-  
${shaded.package}.org.apache.commons
-
-
-  org.apache.http
-  
${shaded.package}.org.apache.http
-
 
@@ -143,10 +121,19 @@
 
   org.apache.phoenix
   phoenix-queryserver-client
+  
+
+
+  org.apache.calcite.avatica
+  avatica-core
+
+  
 
 
   org.apache.calcite.avatica
-  avatica-core
+  avatica
 
 
   org.apache.calcite.avatica

http://git-wip-us.apache.org/repos/asf/phoenix/blob/d45feaed/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 20e80c6..989e2e5 100644
--- a/pom.xml
+++ b/pom.xml
@@ -710,6 +710,11 @@
   
   
 org.apache.calcite.avatica
+avatica
+${avatica.version}
+  
+  
+org.apache.calcite.avatica
 avatica-core
 ${avatica.version}
   



[20/50] [abbrv] phoenix git commit: PHOENIX-3424 Backward compatibility failure: 4.8 -> 4.9 upgrade

2016-11-04 Thread samarth
PHOENIX-3424 Backward compatibility failure: 4.8 -> 4.9 upgrade


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/377ef938
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/377ef938
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/377ef938

Branch: refs/heads/encodecolumns2
Commit: 377ef938c52d020837b10ba2a2afef0b03b56c1c
Parents: 3c80432
Author: James Taylor 
Authored: Sun Oct 30 08:32:14 2016 -0700
Committer: James Taylor 
Committed: Sun Oct 30 08:35:22 2016 -0700

--
 .../query/ConnectionQueryServicesImpl.java  | 68 
 1 file changed, 42 insertions(+), 26 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/377ef938/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index 62ee2bf..ff4e404 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -25,6 +25,7 @@ import static org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_MAJOR_VERS
 import static org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_MINOR_VERSION;
 import static org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_PATCH_NUMBER;
 import static org.apache.phoenix.coprocessor.MetaDataProtocol.getVersion;
+import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_STATS_NAME;
 import static org.apache.phoenix.query.QueryServicesOptions.DEFAULT_DROP_METADATA;
@@ -37,6 +38,7 @@ import static org.apache.phoenix.util.UpgradeUtil.upgradeTo4_5_0;
 
 import java.io.IOException;
 import java.lang.ref.WeakReference;
+import java.sql.PreparedStatement;
 import java.sql.ResultSetMetaData;
 import java.sql.SQLException;
 import java.util.ArrayList;
@@ -194,7 +196,6 @@ import org.apache.phoenix.util.ByteUtil;
 import org.apache.phoenix.util.Closeables;
 import org.apache.phoenix.util.ConfigUtil;
 import org.apache.phoenix.util.JDBCUtil;
-import org.apache.phoenix.util.KeyValueUtil;
 import org.apache.phoenix.util.MetaDataUtil;
 import org.apache.phoenix.util.PhoenixContextExecutor;
 import org.apache.phoenix.util.PhoenixRuntime;
@@ -2271,30 +2272,44 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
 
     }
 
-    private void removeNotNullConstraint(String schemaName, String tableName, long timestamp, String columnName) throws SQLException {
-        try (HTableInterface htable = this.getTable(SYSTEM_CATALOG_NAME_BYTES)) {
-            byte[] tableRowKey = SchemaUtil.getTableKey(null, schemaName, tableName);
-            Put tableHeader = new Put(tableRowKey);
-            tableHeader.add(KeyValueUtil.newKeyValue(tableRowKey, 
-                    QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES, 
-                    QueryConstants.EMPTY_COLUMN_BYTES, 
-                    timestamp, 
-                    QueryConstants.EMPTY_COLUMN_VALUE_BYTES));
-            byte[] columnRowKey = SchemaUtil.getColumnKey(null, schemaName, tableName, columnName, null);
-            Put tableColumn = new Put(columnRowKey);
-            tableColumn.add(KeyValueUtil.newKeyValue(columnRowKey,
-                    QueryConstants.DEFAULT_COLUMN_FAMILY_BYTES,
-                    PhoenixDatabaseMetaData.NULLABLE_BYTES,
-                    timestamp,
-                    PInteger.INSTANCE.toBytes(ResultSetMetaData.columnNullable)));
-            List<Mutation> mutations = Lists.<Mutation>newArrayList(tableHeader, tableColumn);
-            htable.batch(mutations, new Object[mutations.size()]);
-        } catch (IOException e) {
-            throw new SQLException(e);
-        } catch (InterruptedException e) {
-            Thread.currentThread().isInterrupted();
-            throw new SQLExceptionInfo.Builder(SQLExceptionCode.INTERRUPTED_EXCEPTION).setRootCause(e).build().buildException();
+    private PhoenixConnection removeNotNullConstraint(PhoenixConnection oldMetaConnection, String schemaName, String tableName, long timestamp, String columnName) throws SQLException {
+        Properties props = PropertiesUtil.deepCopy(oldMetaConnection.getClientInfo());
+        props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(timestamp));
+        // Cannot go through DriverManager or you end up in an infinite loop because it'll call init again
+        PhoenixCo

[01/50] [abbrv] phoenix git commit: PHOENIX-3371 Aggregate Function is broken if secondary index exists [Forced Update!]

2016-11-04 Thread samarth
Repository: phoenix
Updated Branches:
  refs/heads/encodecolumns2 8c31c93ab -> ede568e9c (forced update)


PHOENIX-3371 Aggregate Function is broken if secondary index exists


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/c40fa013
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/c40fa013
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/c40fa013

Branch: refs/heads/encodecolumns2
Commit: c40fa013b14bd6f41d8a4b0d2f18e4d918aeb0c6
Parents: 9b851d5
Author: James Taylor 
Authored: Wed Oct 12 19:14:55 2016 -0700
Committer: James Taylor 
Committed: Tue Oct 25 16:41:01 2016 -0700

--
 .../phoenix/end2end/index/ViewIndexIT.java  | 97 
 .../apache/phoenix/schema/MetaDataClient.java   |  7 +-
 2 files changed, 99 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/c40fa013/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ViewIndexIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ViewIndexIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ViewIndexIT.java
index 9e63093..99c8d2b 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ViewIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ViewIndexIT.java
@@ -19,6 +19,7 @@ package org.apache.phoenix.end2end.index;
 
 import static org.apache.phoenix.util.MetaDataUtil.getViewIndexSequenceName;
 import static org.apache.phoenix.util.MetaDataUtil.getViewIndexSequenceSchemaName;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
@@ -34,6 +35,7 @@ import java.util.Collection;
 import java.util.List;
 import java.util.Properties;
 
+import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.compile.QueryPlan;
 import org.apache.phoenix.end2end.ParallelStatsDisabledIT;
 import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
@@ -42,6 +44,8 @@ import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.PNameFactory;
 import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.util.MetaDataUtil;
+import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.QueryUtil;
 import org.apache.phoenix.util.SchemaUtil;
@@ -284,4 +288,97 @@ public class ViewIndexIT extends ParallelStatsDisabledIT {
 assertEquals(6, plan.getSplits().size());
 }
 }
+
+    private void assertRowCount(Connection conn, String fullTableName, String fullBaseName, int expectedCount) throws SQLException {
+        PhoenixStatement stmt = conn.createStatement().unwrap(PhoenixStatement.class);
+        ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM " + fullTableName);
+        assertTrue(rs.next());
+        assertEquals(expectedCount, rs.getInt(1));
+        // Ensure that index is being used
+        rs = stmt.executeQuery("EXPLAIN SELECT COUNT(*) FROM " + fullTableName);
+        // Uses index and finds correct number of rows
+        assertEquals("CLIENT PARALLEL 1-WAY RANGE SCAN OVER " + Bytes.toString(MetaDataUtil.getViewIndexPhysicalName(Bytes.toBytes(fullBaseName))) + " [-32768,'123451234512345']\n" + 
+                "SERVER FILTER BY FIRST KEY ONLY\n" + 
+                "SERVER AGGREGATE INTO SINGLE ROW",
+                QueryUtil.getExplainPlan(rs));
+
+        // Force it not to use index and still finds correct number of rows
+        rs = stmt.executeQuery("SELECT /*+ NO_INDEX */ * FROM " + fullTableName);
+        int count = 0;
+        while (rs.next()) {
+            count++;
+        }
+
+        assertEquals(expectedCount, count);
+        // Ensure that the table, not index is being used
+        assertEquals(fullTableName, stmt.getQueryPlan().getContext().getCurrentTable().getTable().getName().getString());
+    }
+
+    @Test
+    public void testUpdateOnTenantViewWithGlobalView() throws Exception {
+        Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+        Connection conn = DriverManager.getConnection(getUrl(), props);
+        String baseSchemaName = "PLATFORM_ENTITY";
+        String baseTableName = generateUniqueName();
+        String baseFullName = SchemaUtil.getTableName(baseSchemaName, baseTableName);
+        String viewTableName = "V_" + generateUniqueName();
+        String viewFullName = SchemaUtil.getTableName(baseSchemaName, viewTableName);
+        String indexName = "I_" + generateUniqueName();
+        String tsViewTableName = "TSV_" + genera

[11/50] [abbrv] phoenix git commit: PHOENIX-6 Support ON DUPLICATE KEY construct

2016-11-04 Thread samarth
PHOENIX-6 Support ON DUPLICATE KEY construct


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/e2325a41
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/e2325a41
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/e2325a41

Branch: refs/heads/encodecolumns2
Commit: e2325a413d2b44f1432b30b7fd337643793cbd21
Parents: 613a5b7
Author: James Taylor 
Authored: Thu Oct 27 11:20:20 2016 -0700
Committer: James Taylor 
Committed: Thu Oct 27 14:03:28 2016 -0700

--
 .../phoenix/end2end/OnDuplicateKeyIT.java   | 523 +++
 .../phoenix/end2end/index/IndexTestUtil.java|   6 +-
 .../org/apache/phoenix/tx/TransactionIT.java|  15 +
 phoenix-core/src/main/antlr3/PhoenixSQL.g   |  24 +-
 .../apache/phoenix/compile/DeleteCompiler.java  |   6 +-
 .../apache/phoenix/compile/UpsertCompiler.java  | 104 +++-
 .../UngroupedAggregateRegionObserver.java   |   2 +-
 .../phoenix/exception/SQLExceptionCode.java |   6 +
 .../apache/phoenix/execute/MutationState.java   |  32 +-
 .../org/apache/phoenix/hbase/index/Indexer.java |  98 +++-
 .../hbase/index/builder/BaseIndexBuilder.java   |  14 +-
 .../hbase/index/builder/IndexBuildManager.java  |  10 +
 .../hbase/index/builder/IndexBuilder.java   |  29 +-
 .../phoenix/hbase/index/covered/IndexCodec.java |   1 -
 .../hbase/index/util/KeyValueBuilder.java   |  15 +-
 .../phoenix/index/PhoenixIndexBuilder.java  | 318 +++
 .../apache/phoenix/jdbc/PhoenixStatement.java   |  11 +-
 .../apache/phoenix/parse/ParseNodeFactory.java  |   7 +-
 .../apache/phoenix/parse/UpsertStatement.java   |  10 +-
 .../apache/phoenix/schema/DelegateColumn.java   |  10 +
 .../apache/phoenix/schema/DelegateTable.java|  18 +-
 .../org/apache/phoenix/schema/PColumnImpl.java  |  12 +-
 .../java/org/apache/phoenix/schema/PRow.java|  11 +-
 .../java/org/apache/phoenix/schema/PTable.java  |   6 +-
 .../org/apache/phoenix/schema/PTableImpl.java   |  48 +-
 .../org/apache/phoenix/util/ExpressionUtil.java |   1 -
 .../phoenix/compile/QueryCompilerTest.java  | 104 +++-
 27 files changed, 1321 insertions(+), 120 deletions(-)
--
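
[Editor's note: for context, a minimal sketch of the construct this commit adds. The connection URL and table/column names are illustrative, not from the patch; the syntax follows the grammar changes in PhoenixSQL.g.]

    // Minimal sketch of the ON DUPLICATE KEY construct added by PHOENIX-6.
    import java.sql.Connection;
    import java.sql.DriverManager;

    public class OnDuplicateKeySketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
                conn.createStatement().execute(
                    "CREATE TABLE IF NOT EXISTS COUNTERS (PK VARCHAR PRIMARY KEY, COUNTER BIGINT)");
                // First upsert inserts the row; subsequent ones atomically
                // increment it on the server instead of overwriting it.
                conn.createStatement().execute(
                    "UPSERT INTO COUNTERS VALUES('a', 0) ON DUPLICATE KEY UPDATE COUNTER = COUNTER + 1");
                // ON DUPLICATE KEY IGNORE leaves an existing row untouched.
                conn.createStatement().execute(
                    "UPSERT INTO COUNTERS VALUES('a', 0) ON DUPLICATE KEY IGNORE");
                conn.commit();
            }
        }
    }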


http://git-wip-us.apache.org/repos/asf/phoenix/blob/e2325a41/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
new file mode 100644
index 000..9a81026
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
@@ -0,0 +1,523 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.Date;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+import java.util.Properties;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+
+import org.apache.phoenix.util.PropertiesUtil;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameters;
+
+import com.google.common.collect.Lists;
+
+@RunWith(Parameterized.class)
+public class OnDuplicateKeyIT extends ParallelStatsDisabledIT {
+private final String indexDDL;
+
+public OnDuplicateKeyIT(String indexDDL) {
+this.indexDDL = indexDDL;
+}
+
+@Parameters
+public static Collection data() {
+List testCases = Lists.newArrayList();
+testCases.add(new String[] {
+"",
+});
+testCases.add(new String[] {

[17/50] [abbrv] phoenix git commit: PHOENIX-3375 Upgrade from v4.8.1 to 4.9.0 fails

2016-11-04 Thread samarth
PHOENIX-3375 Upgrade from v4.8.1 to 4.9.0 fails


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/030fb768
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/030fb768
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/030fb768

Branch: refs/heads/encodecolumns2
Commit: 030fb7684e5eebc11d95973abf8e22606b9baa31
Parents: 16e4a18
Author: Samarth 
Authored: Fri Oct 28 12:55:05 2016 -0700
Committer: Samarth 
Committed: Fri Oct 28 12:55:05 2016 -0700

--
 .../org/apache/phoenix/end2end/UpgradeIT.java   |  62 ---
 .../phoenix/jdbc/PhoenixDatabaseMetaData.java   |   5 +
 .../query/ConnectionQueryServicesImpl.java  | 104 ++-
 .../apache/phoenix/query/QueryConstants.java|   1 -
 4 files changed, 132 insertions(+), 40 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/030fb768/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpgradeIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpgradeIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpgradeIT.java
index d37419b..d377bd2 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpgradeIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpgradeIT.java
@@ -37,7 +37,9 @@ import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.Collections;
 import java.util.Properties;
+import java.util.concurrent.Callable;
 import java.util.concurrent.CountDownLatch;
+import java.util.concurrent.FutureTask;
 import java.util.concurrent.atomic.AtomicBoolean;
 import java.util.concurrent.atomic.AtomicInteger;
 
@@ -70,6 +72,8 @@ import org.junit.Test;
 public class UpgradeIT extends ParallelStatsDisabledIT {
 
 private String tenantId;
+private static final byte[] mutexRowKey = SchemaUtil.getTableKey(null, 
PhoenixDatabaseMetaData.SYSTEM_CATALOG_SCHEMA,
+PhoenixDatabaseMetaData.SYSTEM_CATALOG_TABLE);
 
 @Before
 public void generateTenantId() {
@@ -693,27 +697,64 @@ public class UpgradeIT extends ParallelStatsDisabledIT {
 }
 
 @Test
+public void testAcquiringAndReleasingUpgradeMutex() throws Exception {
+ConnectionQueryServices services = null;
+try (Connection conn = getConnection(false, null)) {
+services = conn.unwrap(PhoenixConnection.class).getQueryServices();
+assertTrue(((ConnectionQueryServicesImpl)services)
+
.acquireUpgradeMutex(MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_7_0, 
mutexRowKey));
+try {
+((ConnectionQueryServicesImpl)services)
+
.acquireUpgradeMutex(MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_7_0, 
mutexRowKey);
+fail();
+} catch (UpgradeInProgressException expected) {
+
+}
+
assertTrue(((ConnectionQueryServicesImpl)services).releaseUpgradeMutex(mutexRowKey));
+
assertFalse(((ConnectionQueryServicesImpl)services).releaseUpgradeMutex(mutexRowKey));
+}
+}
+
+@Test
 public void testConcurrentUpgradeThrowsUprgadeInProgressException() throws 
Exception {
 final AtomicBoolean mutexStatus1 = new AtomicBoolean(false);
 final AtomicBoolean mutexStatus2 = new AtomicBoolean(false);
 final CountDownLatch latch = new CountDownLatch(2);
 final AtomicInteger numExceptions = new AtomicInteger(0);
+ConnectionQueryServices services = null;
 try (Connection conn = getConnection(false, null)) {
-final ConnectionQueryServices services = 
conn.unwrap(PhoenixConnection.class).getQueryServices();
-Thread t1 = new Thread(new AcquireMutexRunnable(mutexStatus1, 
services, latch, numExceptions));
+services = conn.unwrap(PhoenixConnection.class).getQueryServices();
+FutureTask task1 = new FutureTask<>(new 
AcquireMutexRunnable(mutexStatus1, services, latch, numExceptions));
+FutureTask task2 = new FutureTask<>(new 
AcquireMutexRunnable(mutexStatus2, services, latch, numExceptions));
+Thread t1 = new Thread(task1);
 t1.setDaemon(true);
-Thread t2 = new Thread(new AcquireMutexRunnable(mutexStatus2, 
services, latch, numExceptions));
-t2.setDaemon(true);;
+Thread t2 = new Thread(task2);
+t2.setDaemon(true);
 t1.start();
 t2.start();
 latch.await();
+// make sure tasks didn't fail by calling get()
+task1.get();
+task2.get();
 assertTrue("One of the threads should have acquired the mutex", 
mutexStatus1.get() || mutexStatus2.get());
-assertNotEquals("One and onl

[35/50] [abbrv] phoenix git commit: PHOENIX-3208 MutationState.toMutations method would throw an exception if multiple tables are upserted (chenglei)

2016-11-04 Thread samarth
PHOENIX-3208 MutationState.toMutations method would throw an exception if multiple tables are upserted (chenglei)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/ecb9360f
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/ecb9360f
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/ecb9360f

Branch: refs/heads/encodecolumns2
Commit: ecb9360f61efba077a70880d472602e3768b0935
Parents: 83b0ebe
Author: James Taylor 
Authored: Wed Nov 2 09:59:06 2016 -0700
Committer: James Taylor 
Committed: Wed Nov 2 13:23:28 2016 -0700

--
 .../apache/phoenix/execute/MutationState.java   |  3 +-
 .../phoenix/execute/MutationStateTest.java  | 75 
 2 files changed, 77 insertions(+), 1 deletion(-)
--
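
[Editor's note: the essence of the fix in the second hunk below is that the inner per-table iterator must be re-initialized whenever the outer iterator advances to the next table. A generic, self-contained sketch of the corrected pattern — not Phoenix's actual code, and it assumes each inner list is non-empty:]

    import java.util.Arrays;
    import java.util.Iterator;
    import java.util.List;

    public class NestedIteratorSketch {
        public static void main(String[] args) {
            List<List<String>> tables = Arrays.asList(
                    Arrays.asList("table1-row"), Arrays.asList("table2-row"));
            Iterator<List<String>> outer = tables.iterator();
            Iterator<String> inner = outer.next().iterator();
            while (inner.hasNext() || outer.hasNext()) {
                if (!inner.hasNext()) {
                    // The fix: rebuild the inner iterator when moving to the
                    // next table; without this, next() is called on an
                    // exhausted iterator and throws.
                    inner = outer.next().iterator();
                }
                System.out.println(inner.next());
            }
        }
    }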


http://git-wip-us.apache.org/repos/asf/phoenix/blob/ecb9360f/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java 
b/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
index 9d1344b..cb66968 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/execute/MutationState.java
@@ -706,7 +706,7 @@ public class MutationState implements SQLCloseable {
 }
 
 @Override
-public Pair<byte[], List<Mutation>> next() {
+ public Pair<byte[], List<Mutation>> next() {
Pair<PName, List<Mutation>> pair = mutationIterator.next();
return new Pair<byte[], List<Mutation>>(pair.getFirst().getBytes(), pair.getSecond());
 }
@@ -727,6 +727,7 @@ public class MutationState implements SQLCloseable {
 public Pair> next() {
 if (!innerIterator.hasNext()) {
 current = iterator.next();
+innerIterator=init();
 }
 return innerIterator.next();
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/ecb9360f/phoenix-core/src/test/java/org/apache/phoenix/execute/MutationStateTest.java
--
diff --git 
a/phoenix-core/src/test/java/org/apache/phoenix/execute/MutationStateTest.java 
b/phoenix-core/src/test/java/org/apache/phoenix/execute/MutationStateTest.java
index 4c596ad..276d946 100644
--- 
a/phoenix-core/src/test/java/org/apache/phoenix/execute/MutationStateTest.java
+++ 
b/phoenix-core/src/test/java/org/apache/phoenix/execute/MutationStateTest.java
@@ -20,7 +20,20 @@ package org.apache.phoenix.execute;
 import static org.apache.phoenix.execute.MutationState.joinSortedIntArrays;
 import static org.junit.Assert.assertArrayEquals;
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
 
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.util.Iterator;
+import java.util.List;
+
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.phoenix.schema.types.PUnsignedInt;
+import org.apache.phoenix.schema.types.PVarchar;
+import org.apache.phoenix.util.PhoenixRuntime;
 import org.junit.Test;
 
 public class MutationStateTest {
@@ -59,4 +72,66 @@ public class MutationStateTest {
 assertEquals(4, result.length);
 assertArrayEquals(new int[] {1,2,3,4}, result);
 }
+
+private static String getUrl() {
+return PhoenixRuntime.JDBC_PROTOCOL + 
PhoenixRuntime.JDBC_PROTOCOL_SEPARATOR + PhoenixRuntime.CONNECTIONLESS;
+}
+
+@Test
+public void testToMutationsOverMultipleTables() throws Exception {
+Connection conn = null;
+try {
+conn=DriverManager.getConnection(getUrl());
+conn.createStatement().execute(
+"create table MUTATION_TEST1"+
+"( id1 UNSIGNED_INT not null primary key,"+
+"appId1 VARCHAR)");
+conn.createStatement().execute(
+"create table MUTATION_TEST2"+
+"( id2 UNSIGNED_INT not null primary key,"+
+"appId2 VARCHAR)");
+
+conn.createStatement().execute("upsert into 
MUTATION_TEST1(id1,appId1) values(111,'app1')");
+conn.createStatement().execute("upsert into 
MUTATION_TEST2(id2,appId2) values(222,'app2')");
+
+
+Iterator<Pair<byte[], List<KeyValue>>> dataTableNameAndMutationKeyValuesIter =
+PhoenixRuntime.getUncommittedDataIterator(conn);
+
+
+assertTrue(dataTableNameAndMutationKeyValuesIter.hasNext());
+Pair<byte[], List<KeyValue>> pair=dataTableNameAndMutationKeyValuesIter.n

[19/50] [abbrv] phoenix git commit: PHOENIX-3424 Backward compatibility failure: 4.8 -> 4.9 upgrade

2016-11-04 Thread samarth
PHOENIX-3424 Backward compatibility failure: 4.8 -> 4.9 upgrade


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/3c804320
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/3c804320
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/3c804320

Branch: refs/heads/encodecolumns2
Commit: 3c804320c9a64f01aedc6e73fc3df75938ba30e2
Parents: 6ef3a3f
Author: James Taylor 
Authored: Sat Oct 29 17:32:29 2016 -0700
Committer: James Taylor 
Committed: Sat Oct 29 17:34:34 2016 -0700

--
 .../phoenix/coprocessor/MetaDataProtocol.java   |  2 +-
 .../query/ConnectionQueryServicesImpl.java  | 69 +++-
 .../org/apache/phoenix/util/SchemaUtil.java | 15 +
 3 files changed, 41 insertions(+), 45 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/3c804320/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
index 4f0a34c..83290db 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
@@ -84,7 +84,7 @@ public abstract class MetaDataProtocol extends 
MetaDataService {
 public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_7_0 = 
MIN_TABLE_TIMESTAMP + 15;
 public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_8_0 = 
MIN_TABLE_TIMESTAMP + 18;
 public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_8_1 = 
MIN_TABLE_TIMESTAMP + 18;
-public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_9_0 = 
MIN_TABLE_TIMESTAMP + 19;
+public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_9_0 = 
MIN_TABLE_TIMESTAMP + 20;
 // MIN_SYSTEM_TABLE_TIMESTAMP needs to be set to the max of all the 
MIN_SYSTEM_TABLE_TIMESTAMP_* constants
 public static final long MIN_SYSTEM_TABLE_TIMESTAMP = 
MIN_SYSTEM_TABLE_TIMESTAMP_4_9_0;
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/3c804320/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index 1773175..62ee2bf 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -25,7 +25,6 @@ import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_MAJOR_VERS
 import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_MINOR_VERSION;
 import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_PATCH_NUMBER;
 import static org.apache.phoenix.coprocessor.MetaDataProtocol.getVersion;
-import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME;
 import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES;
 import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_STATS_NAME;
 import static 
org.apache.phoenix.query.QueryServicesOptions.DEFAULT_DROP_METADATA;
@@ -38,10 +37,8 @@ import static 
org.apache.phoenix.util.UpgradeUtil.upgradeTo4_5_0;
 
 import java.io.IOException;
 import java.lang.ref.WeakReference;
-import java.sql.PreparedStatement;
 import java.sql.ResultSetMetaData;
 import java.sql.SQLException;
-import java.sql.Types;
 import java.util.ArrayList;
 import java.util.Arrays;
 import java.util.Collection;
@@ -197,6 +194,7 @@ import org.apache.phoenix.util.ByteUtil;
 import org.apache.phoenix.util.Closeables;
 import org.apache.phoenix.util.ConfigUtil;
 import org.apache.phoenix.util.JDBCUtil;
+import org.apache.phoenix.util.KeyValueUtil;
 import org.apache.phoenix.util.MetaDataUtil;
 import org.apache.phoenix.util.PhoenixContextExecutor;
 import org.apache.phoenix.util.PhoenixRuntime;
@@ -2273,46 +2271,30 @@ public class ConnectionQueryServicesImpl extends 
DelegateQueryServices implement
 
 }
 
-private PhoenixConnection removeNotNullConstraint(PhoenixConnection 
oldMetaConnection, String schemaName, String tableName, long timestamp, String 
columnName) throws SQLException {
-Properties props = 
PropertiesUtil.deepCopy(oldMetaConnection.getClientInfo());
-props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, 
Long.toString(timestamp));
-// Cannot go through DriverManager or you end up in an infinite loop 
because it'll call init again
-PhoenixConnection metaConnect

[12/50] [abbrv] phoenix git commit: PHOENIX-3396 Valid Multi-byte strings whose total byte size is greater than the max char limit cannot be inserted into VARCHAR fields in the PK

2016-11-04 Thread samarth
http://git-wip-us.apache.org/repos/asf/phoenix/blob/bb88e9f5/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDecimal.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDecimal.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDecimal.java
index 17910de..9fff730 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDecimal.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/types/PDecimal.java
@@ -35,384 +35,385 @@ import com.google.common.base.Preconditions;
 
 public class PDecimal extends PRealNumber {
 
-  public static final PDecimal INSTANCE = new PDecimal();
+public static final PDecimal INSTANCE = new PDecimal();
 
-  private static final BigDecimal MIN_DOUBLE_AS_BIG_DECIMAL =
-  BigDecimal.valueOf(-Double.MAX_VALUE);
-  private static final BigDecimal MAX_DOUBLE_AS_BIG_DECIMAL =
-  BigDecimal.valueOf(Double.MAX_VALUE);
-  private static final BigDecimal MIN_FLOAT_AS_BIG_DECIMAL =
-  BigDecimal.valueOf(-Float.MAX_VALUE);
-  private static final BigDecimal MAX_FLOAT_AS_BIG_DECIMAL =
-  BigDecimal.valueOf(Float.MAX_VALUE);
+private static final BigDecimal MIN_DOUBLE_AS_BIG_DECIMAL =
+BigDecimal.valueOf(-Double.MAX_VALUE);
+private static final BigDecimal MAX_DOUBLE_AS_BIG_DECIMAL =
+BigDecimal.valueOf(Double.MAX_VALUE);
+private static final BigDecimal MIN_FLOAT_AS_BIG_DECIMAL =
+BigDecimal.valueOf(-Float.MAX_VALUE);
+private static final BigDecimal MAX_FLOAT_AS_BIG_DECIMAL =
+BigDecimal.valueOf(Float.MAX_VALUE);
 
-  private PDecimal() {
-super("DECIMAL", Types.DECIMAL, BigDecimal.class, null, 8);
-  }
-
-  @Override
-  public byte[] toBytes(Object object) {
-if (object == null) {
-  return ByteUtil.EMPTY_BYTE_ARRAY;
+private PDecimal() {
+super("DECIMAL", Types.DECIMAL, BigDecimal.class, null, 8);
 }
-BigDecimal v = (BigDecimal) object;
-v = NumberUtil.normalize(v);
-int len = getLength(v);
-byte[] result = new byte[Math.min(len, MAX_BIG_DECIMAL_BYTES)];
-PDataType.toBytes(v, result, 0, len);
-return result;
-  }
 
-  @Override
-  public int toBytes(Object object, byte[] bytes, int offset) {
-if (object == null) {
-  return 0;
+@Override
+public byte[] toBytes(Object object) {
+if (object == null) {
+return ByteUtil.EMPTY_BYTE_ARRAY;
+}
+BigDecimal v = (BigDecimal) object;
+v = NumberUtil.normalize(v);
+int len = getLength(v);
+byte[] result = new byte[Math.min(len, MAX_BIG_DECIMAL_BYTES)];
+PDataType.toBytes(v, result, 0, len);
+return result;
 }
-BigDecimal v = (BigDecimal) object;
-v = NumberUtil.normalize(v);
-int len = getLength(v);
-return PDataType.toBytes(v, bytes, offset, len);
-  }
 
-  private int getLength(BigDecimal v) {
-int signum = v.signum();
-if (signum == 0) { // Special case for zero
-  return 1;
+@Override
+public int toBytes(Object object, byte[] bytes, int offset) {
+if (object == null) {
+return 0;
+}
+BigDecimal v = (BigDecimal) object;
+v = NumberUtil.normalize(v);
+int len = getLength(v);
+return PDataType.toBytes(v, bytes, offset, len);
 }
-/*
- * Size of DECIMAL includes:
- * 1) one byte for exponent
- * 2) one byte for terminal byte if negative
- * 3) one byte for every two digits with the following caveats:
- *a) add one to round up in the case when there is an odd 
number of digits
- *b) add one in the case that the scale is odd to account for 
10x of lowest significant digit
- *   (basically done to increase the range of exponents that 
can be represented)
- */
-return (signum < 0 ? 2 : 1) + (v.precision() + 1 + (v.scale() % 2 == 0 ? 0 
: 1)) / 2;
-  }
 
-  @Override
-  public int estimateByteSize(Object o) {
-if (o == null) {
-  return 1;
+private int getLength(BigDecimal v) {
+int signum = v.signum();
+if (signum == 0) { // Special case for zero
+return 1;
+}
+/*
+ * Size of DECIMAL includes:
+ * 1) one byte for exponent
+ * 2) one byte for terminal byte if negative
+ * 3) one byte for every two digits with the following caveats:
+ *a) add one to round up in the case when there is an odd number 
of digits
+ *b) add one in the case that the scale is odd to account for 10x 
of lowest significant digit
+ *   (basically done to increase the range of exponents that can 
be represented)
+ */
+return (signum < 0 ? 2 : 1) + (v.precision() + 1 + (v.scale() % 2 == 0 
? 0 : 1)) / 2;
 }
-BigDecimal v = (BigDecimal) o;
-// T
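
[Editor's note: to make the size comment in getLength() above concrete, a small sketch that reproduces the same arithmetic for illustration:]

    // Mirrors the DECIMAL length formula from PDecimal.getLength() above:
    // sign byte(s), plus one byte per two digits, rounding up for odd digit
    // counts and adding one digit when the scale is odd.
    import java.math.BigDecimal;

    public class DecimalLengthSketch {
        static int length(BigDecimal v) {
            int signum = v.signum();
            if (signum == 0) return 1; // zero is a special, single-byte case
            return (signum < 0 ? 2 : 1)
                    + (v.precision() + 1 + (v.scale() % 2 == 0 ? 0 : 1)) / 2;
        }

        public static void main(String[] args) {
            System.out.println(length(new BigDecimal("123.45")));  // 1 + (5+1+0)/2 = 4
            System.out.println(length(new BigDecimal("-123.4")));  // 2 + (4+1+1)/2 = 5
            System.out.println(length(BigDecimal.ZERO));           // 1
        }
    }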

[05/50] [abbrv] phoenix git commit: PHOENIX-476 Support declaration of DEFAULT in CREATE statement (Kevin Liew)

2016-11-04 Thread samarth
PHOENIX-476 Support declaration of DEFAULT in CREATE statement (Kevin Liew)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/5ea09210
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/5ea09210
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/5ea09210

Branch: refs/heads/encodecolumns2
Commit: 5ea0921051ee58c79b41e1d74cf27b957fdefc9c
Parents: 9ebd092
Author: James Taylor 
Authored: Wed Oct 26 15:27:33 2016 -0700
Committer: James Taylor 
Committed: Wed Oct 26 18:48:29 2016 -0700

--
 .../phoenix/end2end/DefaultColumnValueIT.java   | 1037 ++
 .../phoenix/iterate/MockResultIterator.java |2 +-
 phoenix-core/src/main/antlr3/PhoenixSQL.g   |8 +-
 .../phoenix/compile/CreateTableCompiler.java|   78 +-
 .../apache/phoenix/compile/UpsertCompiler.java  |2 +-
 .../coprocessor/MetaDataEndpointImpl.java   |8 +-
 .../UngroupedAggregateRegionObserver.java   |2 +-
 .../phoenix/exception/SQLExceptionCode.java |3 +-
 .../apache/phoenix/execute/TupleProjector.java  |3 +-
 .../phoenix/expression/ExpressionType.java  |  115 +-
 .../function/DefaultValueExpression.java|   91 ++
 .../org/apache/phoenix/parse/ColumnDef.java |   14 +
 .../phoenix/parse/CreateSchemaStatement.java|4 +-
 .../phoenix/parse/CreateTableStatement.java |   13 +
 .../apache/phoenix/parse/ParseNodeFactory.java  |   14 +-
 .../phoenix/parse/UseSchemaStatement.java   |4 +-
 .../org/apache/phoenix/schema/ColumnRef.java|   42 +-
 .../phoenix/schema/DelegateSQLException.java|   62 ++
 .../apache/phoenix/schema/MetaDataClient.java   |8 +-
 .../apache/phoenix/schema/PMetaDataImpl.java|2 +-
 .../org/apache/phoenix/schema/PTableImpl.java   |   55 +-
 .../apache/phoenix/schema/types/PBinary.java|   10 +
 .../phoenix/compile/QueryCompilerTest.java  |   78 ++
 .../phoenix/compile/WhereCompilerTest.java  |2 +-
 24 files changed, 1512 insertions(+), 145 deletions(-)
--
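
[Editor's note: a minimal sketch of the DEFAULT declaration this commit adds. The connection URL and table/column names are illustrative only:]

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;

    public class DefaultColumnSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
                conn.createStatement().execute(
                    "CREATE TABLE T_DEFAULT (PK INTEGER PRIMARY KEY, V INTEGER DEFAULT 5)");
                // V is omitted, so the declared default of 5 is used.
                conn.createStatement().execute("UPSERT INTO T_DEFAULT (PK) VALUES (1)");
                conn.commit();
                ResultSet rs = conn.createStatement().executeQuery(
                    "SELECT V FROM T_DEFAULT WHERE PK = 1");
                rs.next();
                System.out.println(rs.getInt(1)); // prints 5
            }
        }
    }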


http://git-wip-us.apache.org/repos/asf/phoenix/blob/5ea09210/phoenix-core/src/it/java/org/apache/phoenix/end2end/DefaultColumnValueIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DefaultColumnValueIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DefaultColumnValueIT.java
new file mode 100644
index 000..ea9df50
--- /dev/null
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DefaultColumnValueIT.java
@@ -0,0 +1,1037 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.assertArrayEquals;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+import java.math.BigDecimal;
+import java.sql.Array;
+import java.sql.Connection;
+import java.sql.Date;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Time;
+import java.sql.Timestamp;
+
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.util.ByteUtil;
+import org.apache.phoenix.util.DateUtil;
+import org.junit.Before;
+import org.junit.Test;
+
+
+public class DefaultColumnValueIT extends ParallelStatsDisabledIT {
+private String sharedTable1;
+private String sharedTable2;
+
+@Before
+public void init() {
+sharedTable1 = generateUniqueName();
+sharedTable2 = generateUniqueName();
+}
+
+@Test
+public void testDefaultColumnValue() throws Exception {
+String ddl = "CREATE TABLE IF NOT EXISTS " + sharedTable1 + " (" +
+"pk1 INTEGER NOT NULL, " +
+"pk2 INTEGER NOT NULL, " +
+"pk3 INTEGER NOT NULL DEFAULT 10, " +
+"test1 INTEGER, " +
+"test2 INTEGER DEFAULT 5, " +
+"test3 INTEGER, " +
+"CONSTRAINT NA

[07/50] [abbrv] phoenix git commit: PHOENIX-3412 Used the batch JDBC APIs in pherf.

2016-11-04 Thread samarth
PHOENIX-3412 Used the batch JDBC APIs in pherf.


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/d7aea492
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/d7aea492
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/d7aea492

Branch: refs/heads/encodecolumns2
Commit: d7aea492984c72ac77be7dd1305b79e03347ea7a
Parents: 7f5d79a
Author: Josh Elser 
Authored: Thu Mar 24 21:48:30 2016 -0400
Committer: Josh Elser 
Committed: Thu Oct 27 14:35:10 2016 -0400

--
 .../java/org/apache/phoenix/pherf/Pherf.java|  6 +++
 .../phoenix/pherf/workload/WriteWorkload.java   | 49 ++--
 2 files changed, 51 insertions(+), 4 deletions(-)
--
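
[Editor's note: the change boils down to standard JDBC batching. A condensed sketch of the pattern WriteWorkload adopts when -b is passed — plain JDBC, not pherf's actual code; the table name and batch size are illustrative:]

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BatchWriteSketch {
        public static void main(String[] args) throws Exception {
            int batchSize = 100; // pherf reads its batch size from configuration
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
                 PreparedStatement stmt = conn.prepareStatement(
                     "UPSERT INTO SOME_TABLE VALUES (?, ?)")) {
                for (int i = 0; i < 1000; i++) {
                    stmt.setInt(1, i);
                    stmt.setString(2, "value" + i);
                    stmt.addBatch();             // queue the row client-side
                    if ((i + 1) % batchSize == 0) {
                        stmt.executeBatch();     // flush the queued rows
                        conn.commit();
                    }
                }
                stmt.executeBatch();             // flush any remainder
                conn.commit();
            }
        }
    }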


http://git-wip-us.apache.org/repos/asf/phoenix/blob/d7aea492/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java
--
diff --git a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java 
b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java
index 154d6ff..43061e0 100644
--- a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java
+++ b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java
@@ -91,6 +91,7 @@ public class Pherf {
options.addOption("useAverageCompareType", false, "Compare 
results with Average query time instead of default is Minimum query time.");
options.addOption("t", "thin", false, "Use the Phoenix Thin 
Driver");
options.addOption("s", "server", true, "The URL for the 
Phoenix QueryServer");
+   options.addOption("b", "batchApi", false, "Use JDBC Batch 
API for writes");
 }
 
 private final String zookeeper;
@@ -166,6 +167,11 @@ public class Pherf {
 queryServerUrl = null;
 }
 
+if (command.hasOption('b')) {
+  // If the '-b' option was provided, set the system property for 
WriteWorkload to pick up.
+  System.setProperty(WriteWorkload.USE_BATCH_API_PROPERTY, 
Boolean.TRUE.toString());
+}
+
 if ((command.hasOption("h") || (args == null || args.length == 0)) && 
!command
 .hasOption("listFiles")) {
 hf.printHelp("Pherf", options);

http://git-wip-us.apache.org/repos/asf/phoenix/blob/d7aea492/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
--
diff --git 
a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
 
b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
index e536eb9..69d35cc 100644
--- 
a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
+++ 
b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
@@ -52,6 +52,9 @@ import org.slf4j.LoggerFactory;
 
 public class WriteWorkload implements Workload {
 private static final Logger logger = 
LoggerFactory.getLogger(WriteWorkload.class);
+
+public static final String USE_BATCH_API_PROPERTY = 
"pherf.default.dataloader.batchApi";
+
 private final PhoenixUtil pUtil;
 private final XMLConfigParser parser;
 private final RulesApplier rulesApplier;
@@ -64,6 +67,7 @@ public class WriteWorkload implements Workload {
 private final int threadPoolSize;
 private final int batchSize;
 private final GeneratePhoenixStats generateStatistics;
+private final boolean useBatchApi;
 
 public WriteWorkload(XMLConfigParser parser) throws Exception {
 this(PhoenixUtil.create(), parser, GeneratePhoenixStats.NO);
@@ -119,6 +123,9 @@ public class WriteWorkload implements Workload {
 
 int size = 
Integer.parseInt(properties.getProperty("pherf.default.dataloader.threadpool"));
 
+// Should addBatch/executeBatch be used? Default: false
+this.useBatchApi = Boolean.getBoolean(USE_BATCH_API_PROPERTY);
+
 this.threadPoolSize = (size == 0) ? 
Runtime.getRuntime().availableProcessors() : size;
 
 // TODO Move pool management up to WorkloadExecutor
@@ -201,7 +208,7 @@ public class WriteWorkload implements Workload {
 Future
 write =
 upsertData(scenario, phxMetaCols, scenario.getTableName(), 
threadRowCount,
-dataLoadThreadTime);
+dataLoadThreadTime, this.useBatchApi);
 writeBatches.add(write);
 }
 if (writeBatches.isEmpty()) {
@@ -234,7 +241,7 @@ public class WriteWorkload implements Workload {
 
 public Future upsertData(final Scenario scenario, final List 
columns,
 final String tableName, final int rowCount,
-final DataLoadThreadTime dataLoadThreadTime) {
+fin

[41/50] [abbrv] phoenix git commit: PHOENIX-3199 ServerCacheClient sends cache to all regions unnecessarily (chenglei)

2016-11-04 Thread samarth
PHOENIX-3199 ServerCacheClient sends cache to all regions unnecessarily 
(chenglei)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/87421ede
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/87421ede
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/87421ede

Branch: refs/heads/encodecolumns2
Commit: 87421ede3e9c22f9e567950c6a0acf735437f3a4
Parents: 83ed28f
Author: James Taylor 
Authored: Fri Nov 4 09:15:19 2016 -0700
Committer: James Taylor 
Committed: Fri Nov 4 09:18:53 2016 -0700

--
 .../org/apache/phoenix/cache/ServerCacheClient.java   | 14 --
 1 file changed, 12 insertions(+), 2 deletions(-)
--
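
[Editor's note: the underlying issue is that an empty region start key is also HBase's "whole table" marker, so passing it as both start and end of a coprocessorService call fans out to every region. The guard this commit introduces is small enough to restate as a standalone sketch:]

    // Sketch of the guard introduced here: HConstants.EMPTY_START_ROW (a
    // zero-length array) doubles as HBase's "scan everything" marker, so a
    // concrete single-byte key that still sorts into the first region is
    // substituted for it.
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.util.Bytes;

    public class KeyInRegionSketch {
        static final byte[] KEY_IN_FIRST_REGION = new byte[]{0};

        static byte[] keyInRegion(byte[] regionStartKey) {
            if (Bytes.equals(regionStartKey, HConstants.EMPTY_START_ROW)) {
                return KEY_IN_FIRST_REGION;
            }
            return regionStartKey;
        }

        public static void main(String[] args) {
            System.out.println(Bytes.toStringBinary(keyInRegion(new byte[0])));        // \x00
            System.out.println(Bytes.toStringBinary(keyInRegion(Bytes.toBytes("k")))); // k
        }
    }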


http://git-wip-us.apache.org/repos/asf/phoenix/blob/87421ede/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java 
b/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
index 67fc410..0383251 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
@@ -37,6 +37,7 @@ import java.util.concurrent.TimeUnit;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HRegionLocation;
 import org.apache.hadoop.hbase.client.HTable;
 import org.apache.hadoop.hbase.client.HTableInterface;
@@ -81,6 +82,7 @@ import com.google.common.collect.ImmutableSet;
  */
 public class ServerCacheClient {
 public static final int UUID_LENGTH = Bytes.SIZEOF_LONG;
+public static final byte[] KEY_IN_FIRST_REGION = new byte[]{0};
 private static final Log LOG = LogFactory.getLog(ServerCacheClient.class);
 private static final Random RANDOM = new Random();
 private final PhoenixConnection connection;
@@ -177,7 +179,7 @@ public class ServerCacheClient {
 // Call RPC once per server
 servers.add(entry);
 if (LOG.isDebugEnabled()) 
{LOG.debug(addCustomAnnotations("Adding cache entry to be sent for " + entry, 
connection));}
-final byte[] key = entry.getRegionInfo().getStartKey();
+final byte[] key = 
getKeyInRegion(entry.getRegionInfo().getStartKey());
 final HTableInterface htable = 
services.getTable(cacheUsingTableRef.getTable().getPhysicalName().getBytes());
 closeables.add(htable);
 futures.add(executor.submit(new JobCallable() {
@@ -319,7 +321,7 @@ public class ServerCacheClient {
for (HRegionLocation entry : locations) {
if (remainingOnServers.contains(entry)) {  // Call once 
per server
try {
-   byte[] key = 
entry.getRegionInfo().getStartKey();
+byte[] key = 
getKeyInRegion(entry.getRegionInfo().getStartKey());

iterateOverTable.coprocessorService(ServerCachingService.class, key, key, 
new 
Batch.Call() {
@Override
@@ -382,4 +384,12 @@ public class ServerCacheClient {
 assert(uuid.length == Bytes.SIZEOF_LONG);
 return Long.toString(Bytes.toLong(uuid));
 }
+
+private static byte[] getKeyInRegion(byte[] regionStartKey) {
+assert (regionStartKey != null);
+if (Bytes.equals(regionStartKey, HConstants.EMPTY_START_ROW)) {
+return KEY_IN_FIRST_REGION;
+}
+return regionStartKey;
+}
 }



[45/50] [abbrv] phoenix git commit: Fail-fast iterators for EncodedColumnQualifierCellsList. Use list iterators instead of get(index) for navigating lists. Use HBase bytes utility for encoded column n

2016-11-04 Thread samarth
http://git-wip-us.apache.org/repos/asf/phoenix/blob/ede568e9/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java
index 15d6d2f..c5f690b 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java
@@ -44,6 +44,7 @@ import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.schema.PColumn;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.util.Closeables;
+import org.apache.phoenix.util.EncodedColumnsUtil;
 import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.QueryUtil;
 import org.apache.phoenix.util.SchemaUtil;
@@ -89,7 +90,7 @@ public class FormatToKeyValueReducer
 }
 
 private void initColumnsMap(PhoenixConnection conn) throws SQLException {
-Map indexMap = new TreeMap(Bytes.BYTES_COMPARATOR);
+Map indexMap = new TreeMap<>(Bytes.BYTES_COMPARATOR);
 columnIndexes = new HashMap<>();
 int columnIndex = 0;
 for (int index = 0; index < logicalNames.size(); index++) {
@@ -98,12 +99,16 @@ public class FormatToKeyValueReducer
 for (int i = 0; i < cls.size(); i++) {
 PColumn c = cls.get(i);
 byte[] family = new byte[0];
-if (c.getFamilyName() != null) {
+byte[] cq;
+if (!SchemaUtil.isPKColumn(c)) {
 family = c.getFamilyName().getBytes();
+cq = EncodedColumnsUtil.getColumnQualifier(c, table);
+} else {
+// TODO: samarth verify if this is the right thing to do 
here.
+cq = c.getName().getBytes();
 }
-byte[] name = c.getName().getBytes();
-byte[] cfn = Bytes.add(family, 
QueryConstants.NAMESPACE_SEPARATOR_BYTES, name);
-Pair pair = new Pair(family, name);
+byte[] cfn = Bytes.add(family, 
QueryConstants.NAMESPACE_SEPARATOR_BYTES, cq);
+Pair pair = new Pair<>(family, cq);
 if (!indexMap.containsKey(cfn)) {
 indexMap.put(cfn, new Integer(columnIndex));
 columnIndexes.put(new Integer(columnIndex), pair);
@@ -111,8 +116,8 @@ public class FormatToKeyValueReducer
 }
 }
 byte[] emptyColumnFamily = SchemaUtil.getEmptyColumnFamily(table);
-Pair pair = new Pair(emptyColumnFamily, 
QueryConstants
-.EMPTY_COLUMN_BYTES);
+byte[] emptyKeyValue = 
EncodedColumnsUtil.getEmptyKeyValueInfo(table).getFirst();
+Pair pair = new Pair<>(emptyColumnFamily, 
emptyKeyValue);
 columnIndexes.put(new Integer(columnIndex), pair);
 columnIndex++;
 }
@@ -123,18 +128,17 @@ public class FormatToKeyValueReducer
   Reducer.Context context)
 throws IOException, InterruptedException {
 TreeSet map = new TreeSet(KeyValue.COMPARATOR);
-ImmutableBytesWritable rowKey = key.getRowkey();
 for (ImmutableBytesWritable aggregatedArray : values) {
 DataInputStream input = new DataInputStream(new 
ByteArrayInputStream(aggregatedArray.get()));
 while (input.available() != 0) {
 byte type = input.readByte();
 int index = WritableUtils.readVInt(input);
 ImmutableBytesWritable family;
-ImmutableBytesWritable name;
+ImmutableBytesWritable cq;
 ImmutableBytesWritable value = 
QueryConstants.EMPTY_COLUMN_VALUE_BYTES_PTR;
 Pair pair = columnIndexes.get(index);
 family = new ImmutableBytesWritable(pair.getFirst());
-name = new ImmutableBytesWritable(pair.getSecond());
+cq = new ImmutableBytesWritable(pair.getSecond());
 int len = WritableUtils.readVInt(input);
 if (len > 0) {
 byte[] array = new byte[len];
@@ -145,10 +149,10 @@ public class FormatToKeyValueReducer
 KeyValue.Type kvType = KeyValue.Type.codeToType(type);
 switch (kvType) {
 case Put: // not null value
-kv = builder.buildPut(key.getRowkey(), family, name, 
value);
+kv = builder.buildPut(key.getRowkey(), family, cq, 
value);
 break;
 case DeleteColumn: // null value
-kv = builder.buildDeleteColumns(key.getRowkey(), 
family, name);
+kv = builder.buildDe

[37/50] [abbrv] phoenix git commit: Add missing Apache license

2016-11-04 Thread samarth
Add missing Apache license


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/eedb2b4d
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/eedb2b4d
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/eedb2b4d

Branch: refs/heads/encodecolumns2
Commit: eedb2b4d94c86b50cd0ecd249c3f17573eaf9e4a
Parents: dcebfc2
Author: Mujtaba 
Authored: Wed Nov 2 15:54:53 2016 -0700
Committer: Mujtaba 
Committed: Wed Nov 2 15:54:53 2016 -0700

--
 .../hive/query/PhoenixQueryBuilderTest.java| 17 +
 1 file changed, 17 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/eedb2b4d/phoenix-hive/src/test/java/org/apache/phoenix/hive/query/PhoenixQueryBuilderTest.java
--
diff --git 
a/phoenix-hive/src/test/java/org/apache/phoenix/hive/query/PhoenixQueryBuilderTest.java
 
b/phoenix-hive/src/test/java/org/apache/phoenix/hive/query/PhoenixQueryBuilderTest.java
index 7f1a7c3..920e8cf 100644
--- 
a/phoenix-hive/src/test/java/org/apache/phoenix/hive/query/PhoenixQueryBuilderTest.java
+++ 
b/phoenix-hive/src/test/java/org/apache/phoenix/hive/query/PhoenixQueryBuilderTest.java
@@ -1,3 +1,20 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.phoenix.hive.query;
 
 import com.google.common.collect.Lists;



[14/50] [abbrv] phoenix git commit: PHOENIX-3421 Column name lookups fail when on an indexed table

2016-11-04 Thread samarth
PHOENIX-3421 Column name lookups fail when on an indexed table


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/fc3af300
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/fc3af300
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/fc3af300

Branch: refs/heads/encodecolumns2
Commit: fc3af300faa4a8af4025aff5696b8b28d3193ec9
Parents: bb88e9f
Author: James Taylor 
Authored: Thu Oct 27 23:09:09 2016 -0700
Committer: James Taylor 
Committed: Thu Oct 27 23:13:07 2016 -0700

--
 .../org/apache/phoenix/util/PhoenixRuntime.java | 34 +++
 .../phoenix/compile/QueryOptimizerTest.java |  3 +-
 .../apache/phoenix/util/PhoenixRuntimeTest.java | 44 
 3 files changed, 79 insertions(+), 2 deletions(-)
--
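
[Editor's note: a sketch of how the new non-deprecated overload is meant to be called. Obtaining the plan via PhoenixStatement.optimizeQuery and the table name are assumptions for illustration; the patch itself only adds the overload and the index-name translation.]

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.List;

    import org.apache.hadoop.hbase.util.Pair;
    import org.apache.phoenix.compile.QueryPlan;
    import org.apache.phoenix.jdbc.PhoenixStatement;
    import org.apache.phoenix.util.PhoenixRuntime;

    public class PkColsSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
                PhoenixStatement stmt =
                    conn.createStatement().unwrap(PhoenixStatement.class);
                // Assumed way to obtain a QueryPlan for illustration.
                QueryPlan plan = stmt.optimizeQuery("SELECT * FROM SOME_TABLE");
                // Family names may be null; both parts come back double-quoted.
                List<Pair<String, String>> pkCols =
                    PhoenixRuntime.getPkColsForSql(conn, plan);
                for (Pair<String, String> col : pkCols) {
                    System.out.println(col.getFirst() + " / " + col.getSecond());
                }
            }
        }
    }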


http://git-wip-us.apache.org/repos/asf/phoenix/blob/fc3af300/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java 
b/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java
index 6027b95..b2f9ffc 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java
@@ -804,6 +804,33 @@ public class PhoenixRuntime {
 private static String addQuotes(String str) {
 return str == null ? str : "\"" + str + "\"";
 }
+
+/**
+* Get the column family, column name pairs that make up the row key of the 
table that will be queried.
+* @param conn - connection used to generate the query plan. Caller should 
take care of closing the connection appropriately.
+* @param plan - query plan to get info for.
+* @return the pairs of column family name and column name columns in the 
data table that make up the row key for
+* the table used in the query plan. Column family names are optional and 
hence the first part of the pair is nullable.
+* Column names and family names are enclosed in double quotes to allow for 
case sensitivity and for presence of 
+* special characters. Salting column and view index id column are not 
included. If the connection is tenant specific 
+* and the table used by the query plan is multi-tenant, then the tenant id 
column is not included as well.
+* @throws SQLException
+*/
+public static List<Pair<String, String>> getPkColsForSql(Connection conn, QueryPlan plan) throws SQLException {
+checkNotNull(plan);
+checkNotNull(conn);
+List<PColumn> pkColumns = getPkColumns(plan.getTableRef().getTable(), conn, true);
+List<Pair<String, String>> columns = Lists.newArrayListWithExpectedSize(pkColumns.size());
+String columnName;
+String familyName;
+for (PColumn pCol : pkColumns ) {
+columnName = addQuotes(pCol.getName().getString());
+familyName = pCol.getFamilyName() != null ? 
addQuotes(pCol.getFamilyName().getString()) : null;
+columns.add(new Pair(familyName, columnName));
+}
+return columns;
+}
+
 /**
  *
  * @param columns - Initialized empty list to be filled with the pairs of 
column family name and column name for columns that are used 
@@ -818,6 +845,7 @@ public class PhoenixRuntime {
  * names correspond to the index table.
  * @throws SQLException
  */
+@Deprecated
public static void getPkColsForSql(List<Pair<String, String>> columns, QueryPlan plan, Connection conn, boolean forDataTable) throws SQLException {
 checkNotNull(columns);
 checkNotNull(plan);
@@ -846,6 +874,7 @@ public class PhoenixRuntime {
  * types correspond to the index table.
  * @throws SQLException
  */
+@Deprecated
public static void getPkColsDataTypesForSql(List<Pair<String, String>> columns, List<String> dataTypes, QueryPlan plan, Connection conn, boolean forDataTable) throws SQLException {
 checkNotNull(columns);
 checkNotNull(dataTypes);
@@ -1025,6 +1054,11 @@ public class PhoenixRuntime {
 // normalize and remove quotes from family and column names before 
looking up.
 familyName = SchemaUtil.normalizeIdentifier(familyName);
 columnName = SchemaUtil.normalizeIdentifier(columnName);
+// Column names are always for the data table, so we must translate 
them if
+// we're dealing with an index table.
+if (table.getType() == PTableType.INDEX) {
+columnName = IndexUtil.getIndexColumnName(familyName, columnName);
+}
 PColumn pColumn = null;
 if (familyName != null) {
 PColumnFamily family = table.getColumnFamily(familyName);

http://git-wip-us.apache.org/repos/asf/phoenix/blob/fc3af300/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryOpt

[34/50] [abbrv] phoenix git commit: PHOENIX-3435 Upgrade will fail for future releases because of use of timestamp as value for upgrade mutex

2016-11-04 Thread samarth
PHOENIX-3435 Upgrade will fail for future releases because of use of timestamp 
as value for upgrade mutex


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/83b0ebee
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/83b0ebee
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/83b0ebee

Branch: refs/heads/encodecolumns2
Commit: 83b0ebee10577129aa06d0335e7b37d7c48ddc26
Parents: c1c78b2
Author: Samarth 
Authored: Wed Nov 2 13:20:39 2016 -0700
Committer: Samarth 
Committed: Wed Nov 2 13:21:01 2016 -0700

--
 .../phoenix/query/ConnectionQueryServicesImpl.java  | 16 
 1 file changed, 4 insertions(+), 12 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/83b0ebee/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index b1b7bab..356f0b8 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -20,7 +20,6 @@ package org.apache.phoenix.query;
 import static java.util.concurrent.TimeUnit.MILLISECONDS;
 import static org.apache.hadoop.hbase.HColumnDescriptor.TTL;
 import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP;
-import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_9_0;
 import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_MAJOR_VERSION;
 import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_MINOR_VERSION;
 import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.PHOENIX_PATCH_NUMBER;
@@ -281,6 +280,7 @@ public class ConnectionQueryServicesImpl extends 
DelegateQueryServices implement
 private final boolean isAutoUpgradeEnabled;
 private final AtomicBoolean upgradeRequired = new AtomicBoolean(false);
 private static final byte[] UPGRADE_MUTEX = "UPGRADE_MUTEX".getBytes();
+private static final byte[] UPGRADE_MUTEX_VALUE = UPGRADE_MUTEX; 
 
 private static interface FeatureSupported {
 boolean isSupported(ConnectionQueryServices services);
@@ -2971,13 +2971,6 @@ public class ConnectionQueryServicesImpl extends 
DelegateQueryServices implement
 /**
  * Acquire distributed mutex of sorts to make sure only one JVM is able to 
run the upgrade code by
  * making use of HBase's checkAndPut api.
- * 
- * This method was added as part of 4.9.0 release. For clients upgrading 
to 4.9.0, the old value in the
- * cell will be null i.e. the {@value #UPGRADE_MUTEX} column will be 
non-existent. For client's
- * upgrading to a release newer than 4.9.0 the existing cell value will be 
non-null. The client which
- * wins the race will end up setting the cell value to the {@value 
MetaDataProtocol#MIN_SYSTEM_TABLE_TIMESTAMP}
- * for the release.
- * 
  * 
  * @return true if client won the race, false otherwise
  * @throws IOException
@@ -3003,9 +2996,8 @@ public class ConnectionQueryServicesImpl extends 
DelegateQueryServices implement
 try (HTableInterface sysMutexTable = 
getTable(PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME_BYTES)) {
 byte[] family = 
PhoenixDatabaseMetaData.SYSTEM_MUTEX_FAMILY_NAME_BYTES;
 byte[] qualifier = UPGRADE_MUTEX;
-byte[] oldValue = currentServerSideTableTimestamp < 
MIN_SYSTEM_TABLE_TIMESTAMP_4_9_0 ? null
-: PLong.INSTANCE.toBytes(currentServerSideTableTimestamp);
-byte[] newValue = 
PLong.INSTANCE.toBytes(MIN_SYSTEM_TABLE_TIMESTAMP);
+byte[] oldValue = null;
+byte[] newValue = UPGRADE_MUTEX_VALUE;
 Put put = new Put(rowToLock);
 put.add(family, qualifier, newValue);
 boolean acquired = sysMutexTable.checkAndPut(rowToLock, family, 
qualifier, oldValue, put);
@@ -3021,7 +3013,7 @@ public class ConnectionQueryServicesImpl extends 
DelegateQueryServices implement
 try (HTableInterface sysMutexTable = 
getTable(PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME_BYTES)) {
 byte[] family = 
PhoenixDatabaseMetaData.SYSTEM_MUTEX_FAMILY_NAME_BYTES;
 byte[] qualifier = UPGRADE_MUTEX;
-byte[] expectedValue = 
PLong.INSTANCE.toBytes(MIN_SYSTEM_TABLE_TIMESTAMP);
+byte[] expectedValue = UPGRADE_MUTEX_VALUE;
 Delete delete = new Delete(mutexRowKey);
 RowMutations mutations = new RowMutations(mutexRowKey);
 mutations.add(delete);



[42/50] [abbrv] phoenix git commit: Fail-fast iterators for EncodedColumnQualifierCellsList. Use list iterators instead of get(index) for navigating lists. Use HBase bytes utility for encoded column n

2016-11-04 Thread samarth
http://git-wip-us.apache.org/repos/asf/phoenix/blob/ede568e9/phoenix-core/src/test/java/org/apache/phoenix/query/EncodedColumnQualifierCellsListTest.java
--
diff --git 
a/phoenix-core/src/test/java/org/apache/phoenix/query/EncodedColumnQualifierCellsListTest.java
 
b/phoenix-core/src/test/java/org/apache/phoenix/query/EncodedColumnQualifierCellsListTest.java
new file mode 100644
index 000..564e75e
--- /dev/null
+++ 
b/phoenix-core/src/test/java/org/apache/phoenix/query/EncodedColumnQualifierCellsListTest.java
@@ -0,0 +1,608 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.query;
+
+import static 
org.apache.phoenix.util.EncodedColumnsUtil.getEncodedColumnQualifier;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.ConcurrentModificationException;
+import java.util.Iterator;
+import java.util.List;
+import java.util.ListIterator;
+import java.util.NoSuchElementException;
+
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.schema.tuple.EncodedColumnQualiferCellsList;
+import org.junit.Test;
+
+public class EncodedColumnQualifierCellsListTest {
+
+private static final byte[] row = Bytes.toBytes("row");
+private static final byte[] cf = Bytes.toBytes("cf");
+
+
+@Test
+public void testIterator() {
+EncodedColumnQualiferCellsList list = new 
EncodedColumnQualiferCellsList(11, 16);
+Cell[] cells = new Cell[7];
+int i = 0;
+populateListAndArray(list, cells);
+Iterator itr = list.iterator();
+assertTrue(itr.hasNext());
+
+// test itr.next()
+i = 0;
+while (itr.hasNext()) {
+assertEquals(cells[i++], itr.next());
+}
+
+assertEquals(7, list.size());
+
+// test itr.remove()
+itr = list.iterator();
+i = 0;
+int numRemoved = 0;
+try {
+itr.remove();
+fail("Remove not allowed till next() is called");
+} catch (IllegalStateException expected) {}
+
+while (itr.hasNext()) {
+assertEquals(cells[i++], itr.next());
+itr.remove();
+numRemoved++;
+}
+assertEquals("Number of elements removed should have been the size of 
the list", 7, numRemoved);
+}
+
+@Test
+public void testSize() {
+EncodedColumnQualiferCellsList list = new 
EncodedColumnQualiferCellsList(11, 16);
+assertEquals(0, list.size());
+
+populateList(list);
+
+assertEquals(7, list.size());
+int originalSize = list.size();
+
+Iterator itr = list.iterator();
+while (itr.hasNext()) {
+itr.next();
+itr.remove();
+assertEquals(--originalSize, list.size());
+}
+}
+
+@Test
+public void testIsEmpty() throws Exception {
+EncodedColumnQualiferCellsList list = new 
EncodedColumnQualiferCellsList(11, 16);
+assertTrue(list.isEmpty());
+populateList(list);
+assertFalse(list.isEmpty());
+Iterator itr = list.iterator();
+while (itr.hasNext()) {
+itr.next();
+itr.remove();
+if (itr.hasNext()) {
+assertFalse(list.isEmpty());
+}
+}
+assertTrue(list.isEmpty());
+}
+
+@Test
+public void testContains() throws Exception {
+EncodedColumnQualiferCellsList list = new 
EncodedColumnQualiferCellsList(11, 16);
+Cell[] cells = new Cell[7];
+populateListAndArray(list, cells);
+
+for (Cell c : cells) {
+assertTrue(list.contains(c));
+}
+assertFalse(list.contains(KeyValue.createFirstOnRow(row, cf, getEncodedColumnQualifier(13))));
+}
+
+@Test
+  

[44/50] [abbrv] phoenix git commit: Fail-fast iterators for EncodedColumnQualifierCellsList. Use list iterators instead of get(index) for navigating lists. Use HBase bytes utility for encoded column n

2016-11-04 Thread samarth
http://git-wip-us.apache.org/repos/asf/phoenix/blob/ede568e9/phoenix-core/src/main/java/org/apache/phoenix/schema/PMetaDataImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/PMetaDataImpl.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/PMetaDataImpl.java
index 064137e..515e428 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/PMetaDataImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/PMetaDataImpl.java
@@ -223,7 +223,7 @@ public class PMetaDataImpl implements PMetaData {
 if (familyName == null) {
 column = 
table.getPKColumn(columnToRemove.getName().getString());
 } else {
-column = 
table.getColumnFamily(familyName).getColumn(columnToRemove.getName().getString());
+column = 
table.getColumnFamily(familyName).getPColumnForColumnName(columnToRemove.getName().getString());
 }
 int positionOffset = 0;
 int position = column.getPosition();
@@ -238,7 +238,7 @@ public class PMetaDataImpl implements PMetaData {
 // Update position of columns that follow removed column
 for (int i = position+1; i < oldColumns.size(); i++) {
 PColumn oldColumn = oldColumns.get(i);
-PColumn newColumn = new PColumnImpl(oldColumn.getName(), 
oldColumn.getFamilyName(), oldColumn.getDataType(), oldColumn.getMaxLength(), 
oldColumn.getScale(), oldColumn.isNullable(), i-1+positionOffset, 
oldColumn.getSortOrder(), oldColumn.getArraySize(), 
oldColumn.getViewConstant(), oldColumn.isViewReferenced(), 
oldColumn.getExpressionStr(), oldColumn.isRowTimestamp(), 
oldColumn.isDynamic());
+PColumn newColumn = new PColumnImpl(oldColumn.getName(), 
oldColumn.getFamilyName(), oldColumn.getDataType(), oldColumn.getMaxLength(), 
oldColumn.getScale(), oldColumn.isNullable(), i-1+positionOffset, 
oldColumn.getSortOrder(), oldColumn.getArraySize(), 
oldColumn.getViewConstant(), oldColumn.isViewReferenced(), 
oldColumn.getExpressionStr(), oldColumn.isRowTimestamp(), 
oldColumn.isDynamic(), oldColumn.getEncodedColumnQualifier());
 columns.add(newColumn);
 }
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/ede568e9/phoenix-core/src/main/java/org/apache/phoenix/schema/PName.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/PName.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/PName.java
index 0e1337c..8df6a95 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/PName.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/PName.java
@@ -83,6 +83,32 @@ public interface PName {
 return 0;
 }
 };
+public static PName ENCODED_EMPTY_COLUMN_NAME = new PName() {
+@Override
+public String getString() {
+return String.valueOf(QueryConstants.ENCODED_EMPTY_COLUMN_NAME);
+}
+
+@Override
+public byte[] getBytes() {
+return QueryConstants.ENCODED_EMPTY_COLUMN_BYTES;
+}
+
+@Override
+public String toString() {
+return getString();
+}
+
+@Override
+public ImmutableBytesPtr getBytesPtr() {
+return QueryConstants.ENCODED_EMPTY_COLUMN_BYTES_PTR;
+}
+
+@Override
+public int getEstimatedSize() {
+return 0;
+}
+};
 /**
  * Get the client-side, normalized name as referenced
  * in a SQL statement.

http://git-wip-us.apache.org/repos/asf/phoenix/blob/ede568e9/phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java
index 01e8afe..d3b11b2 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java
@@ -17,7 +17,15 @@
  */
 package org.apache.phoenix.schema;
 
+import static 
org.apache.phoenix.query.QueryConstants.ENCODED_CQ_COUNTER_INITIAL_VALUE;
+
+import java.util.Collections;
+import java.util.HashMap;
 import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+
+import javax.annotation.Nullable;
 
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.util.Bytes;
@@ -129,7 +137,7 @@ public interface PTable extends PMetaDataEntity {
  * Link from a view to its parent table
  */
 PARENT_TABLE((byte)3);
-
+
 private final byte[] byteValue;
 private final byte serializedValue;
 
@@ -153,6 +161,35 @@ public interface PTable extends PMetaDataEntity {
   

[30/50] [abbrv] phoenix git commit: PHOENIX-3408 arithmetic/mathematical operations with Decimal columns failed in Hive with PhoenixStorageHandler.

2016-11-04 Thread samarth
PHOENIX-3408 arithmetic/mathematical operations with Decimal columns failed in 
Hive with PhoenixStorageHandler.

Signed-off-by: Sergey Soldatov 


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/bebcc552
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/bebcc552
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/bebcc552

Branch: refs/heads/encodecolumns2
Commit: bebcc552fb2bc40b8fda79dc82be7fc4c61945bc
Parents: 46d4bb4
Author: Jeongdae Kim 
Authored: Wed Oct 26 19:26:06 2016 +0900
Committer: Sergey Soldatov 
Committed: Wed Nov 2 12:58:36 2016 -0700

--
 .../PhoenixDecimalObjectInspector.java   | 19 ++-
 .../PhoenixObjectInspectorFactory.java   |  2 +-
 2 files changed, 15 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/bebcc552/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixDecimalObjectInspector.java
--
diff --git 
a/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixDecimalObjectInspector.java
 
b/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixDecimalObjectInspector.java
index 8afe10f..3853c18 100644
--- 
a/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixDecimalObjectInspector.java
+++ 
b/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixDecimalObjectInspector.java
@@ -21,6 +21,9 @@ import org.apache.hadoop.hive.common.type.HiveDecimal;
 import org.apache.hadoop.hive.metastore.api.Decimal;
 import org.apache.hadoop.hive.serde2.io.HiveDecimalWritable;
 import 
org.apache.hadoop.hive.serde2.objectinspector.primitive.HiveDecimalObjectInspector;
+import org.apache.hadoop.hive.serde2.typeinfo.DecimalTypeInfo;
+import org.apache.hadoop.hive.serde2.typeinfo.HiveDecimalUtils;
+import org.apache.hadoop.hive.serde2.typeinfo.PrimitiveTypeInfo;
 import org.apache.hadoop.hive.serde2.typeinfo.TypeInfoFactory;
 
 import java.math.BigDecimal;
@@ -30,17 +33,25 @@ public class PhoenixDecimalObjectInspector extends
 implements HiveDecimalObjectInspector {
 
 public PhoenixDecimalObjectInspector() {
-super(TypeInfoFactory.decimalTypeInfo);
+this(TypeInfoFactory.decimalTypeInfo);
+}
+
+public PhoenixDecimalObjectInspector(PrimitiveTypeInfo typeInfo) {
+super(typeInfo);
 }
 
 @Override
 public Object copyObject(Object o) {
-return o == null ? null : new BigDecimal(((BigDecimal)o).byteValue());
+return o == null ? null : new BigDecimal(o.toString());
 }
 
 @Override
 public HiveDecimal getPrimitiveJavaObject(Object o) {
-return HiveDecimal.create((BigDecimal) o);
+if (o == null) {
+return null;
+}
+
+return 
HiveDecimalUtils.enforcePrecisionScale(HiveDecimal.create((BigDecimal) 
o),(DecimalTypeInfo)typeInfo);
 }
 
 @Override
@@ -56,8 +67,6 @@ public class PhoenixDecimalObjectInspector extends
 }
 
 return value;
-
-// return super.getPrimitiveWritableObject(o);
 }
 
 }
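The copyObject change above matters because BigDecimal.byteValue() narrows to a signed 8-bit value, discarding both scale and magnitude; round-tripping through toString() preserves the exact decimal. A quick plain-JDK illustration (no Phoenix or Hive classes assumed):

    import java.math.BigDecimal;

    public class ByteValueDemo {
        public static void main(String[] args) {
            BigDecimal d = new BigDecimal("123.45");
            System.out.println(new BigDecimal(d.byteValue()));       // 123: scale lost
            System.out.println(new BigDecimal("300.5").byteValue()); // 44: overflow wraps
            System.out.println(new BigDecimal(d.toString()));        // 123.45: the committed fix
        }
    }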

http://git-wip-us.apache.org/repos/asf/phoenix/blob/bebcc552/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixObjectInspectorFactory.java
--
diff --git 
a/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixObjectInspectorFactory.java
 
b/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixObjectInspectorFactory.java
index 928dede..22be0fc 100644
--- 
a/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixObjectInspectorFactory.java
+++ 
b/phoenix-hive/src/main/java/org/apache/phoenix/hive/objectinspector/PhoenixObjectInspectorFactory.java
@@ -111,7 +111,7 @@ public class PhoenixObjectInspectorFactory {
 oi = new PhoenixTimestampObjectInspector();
 break;
 case DECIMAL:
-oi = new PhoenixDecimalObjectInspector();
+oi = new 
PhoenixDecimalObjectInspector((PrimitiveTypeInfo) type);
 break;
 case BINARY:
 oi = new PhoenixBinaryObjectInspector();



[22/50] [abbrv] phoenix git commit: PHOENIX-3004 Allow configuration in hbase-site to define realms other than the server's

2016-11-04 Thread samarth
PHOENIX-3004 Allow configuration in hbase-site to define realms other than the 
server's

By default, PQS is only going to allow in the realm which the principal
belongs. Need to create the ability for them to define extra realms (for
example to support MIT kerberos with AD).
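A hedged configuration sketch (the property name and the split logic come from the diff below; the realm values are hypothetical):

    // hbase-site.xml equivalent: set phoenix.queryserver.kerberos.allowed.realms
    // to a comma-separated list, e.g. "EXAMPLE.COM,CORP.EXAMPLE.COM"
    Configuration conf = HBaseConfiguration.create();
    conf.set(QueryServices.QUERY_SERVER_KERBEROS_ALLOWED_REALMS, "EXAMPLE.COM,CORP.EXAMPLE.COM");
    String[] realms = StringUtils.split(conf.get(QueryServices.QUERY_SERVER_KERBEROS_ALLOWED_REALMS), ',');
    // realms is then handed to builder.withSpnego(ugi.getUserName(), realms)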


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/29c2c0a3
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/29c2c0a3
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/29c2c0a3

Branch: refs/heads/encodecolumns2
Commit: 29c2c0a3033bab67e36f1a4cf7f8962427c1bceb
Parents: 4b85920
Author: Josh Elser 
Authored: Mon Oct 31 10:56:41 2016 -0400
Committer: Josh Elser 
Committed: Mon Oct 31 11:29:02 2016 -0400

--
 .../main/java/org/apache/phoenix/query/QueryServices.java   | 1 +
 .../org/apache/phoenix/queryserver/server/QueryServer.java  | 9 -
 2 files changed, 9 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/29c2c0a3/phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java 
b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
index 28844e1..f5ee612 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
@@ -207,6 +207,7 @@ public interface QueryServices extends SQLCloseable {
 public static final String QUERY_SERVER_UGI_CACHE_MAX_SIZE = 
"phoenix.queryserver.ugi.cache.max.size";
 public static final String QUERY_SERVER_UGI_CACHE_INITIAL_SIZE = 
"phoenix.queryserver.ugi.cache.initial.size";
 public static final String QUERY_SERVER_UGI_CACHE_CONCURRENCY = 
"phoenix.queryserver.ugi.cache.concurrency";
+public static final String QUERY_SERVER_KERBEROS_ALLOWED_REALMS = 
"phoenix.queryserver.kerberos.allowed.realms";
 
 public static final String RENEW_LEASE_ENABLED = 
"phoenix.scanner.lease.renew.enabled";
 public static final String RUN_RENEW_LEASE_FREQUENCY_INTERVAL_MILLISECONDS 
= "phoenix.scanner.lease.renew.interval";

http://git-wip-us.apache.org/repos/asf/phoenix/blob/29c2c0a3/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/QueryServer.java
--
diff --git 
a/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/QueryServer.java
 
b/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/QueryServer.java
index d6b7b93..8c44938 100644
--- 
a/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/QueryServer.java
+++ 
b/phoenix-queryserver/src/main/java/org/apache/phoenix/queryserver/server/QueryServer.java
@@ -38,6 +38,7 @@ import org.apache.hadoop.net.DNS;
 import org.apache.hadoop.security.SecurityUtil;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.authorize.ProxyUsers;
+import org.apache.hadoop.util.StringUtils;
 import org.apache.hadoop.util.Tool;
 import org.apache.hadoop.util.ToolRunner;
 import org.apache.phoenix.query.QueryServices;
@@ -214,8 +215,14 @@ public final class QueryServer extends Configured 
implements Tool, Runnable {
 String keytabPath = 
getConf().get(QueryServices.QUERY_SERVER_KEYTAB_FILENAME_ATTRIB);
 File keytab = new File(keytabPath);
 
+String realmsString = 
getConf().get(QueryServices.QUERY_SERVER_KERBEROS_ALLOWED_REALMS, null);
+String[] additionalAllowedRealms = null;
+if (null != realmsString) {
+additionalAllowedRealms = StringUtils.split(realmsString, ',');
+}
+
 // Enable SPNEGO and impersonation (through standard Hadoop 
configuration means)
-builder.withSpnego(ugi.getUserName())
+builder.withSpnego(ugi.getUserName(), additionalAllowedRealms)
 .withAutomaticLogin(keytab)
 .withImpersonation(new PhoenixDoAsCallback(ugi, getConf()));
   }



[47/50] [abbrv] phoenix git commit: Fail-fast iterators for EncodedColumnQualifierCellsList. Use list iterators instead of get(index) for navigating lists. Use HBase bytes utility for encoded column n

2016-11-04 Thread samarth
http://git-wip-us.apache.org/repos/asf/phoenix/blob/ede568e9/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/PTableProtos.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/PTableProtos.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/PTableProtos.java
index b8b8b2f..2f0c00b 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/PTableProtos.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/PTableProtos.java
@@ -269,6 +269,16 @@ public final class PTableProtos {
  * optional bool isDynamic = 14;
  */
 boolean getIsDynamic();
+
+// optional int32 columnQualifier = 15;
+/**
+ * optional int32 columnQualifier = 15;
+ */
+boolean hasColumnQualifier();
+/**
+ * optional int32 columnQualifier = 15;
+ */
+int getColumnQualifier();
   }
   /**
* Protobuf type {@code PColumn}
@@ -391,6 +401,11 @@ public final class PTableProtos {
   isDynamic_ = input.readBool();
   break;
 }
+case 120: {
+  bitField0_ |= 0x4000;
+  columnQualifier_ = input.readInt32();
+  break;
+}
   }
 }
   } catch (com.google.protobuf.InvalidProtocolBufferException e) {
@@ -709,6 +724,22 @@ public final class PTableProtos {
   return isDynamic_;
 }
 
+// optional int32 columnQualifier = 15;
+public static final int COLUMNQUALIFIER_FIELD_NUMBER = 15;
+private int columnQualifier_;
+/**
+ * optional int32 columnQualifier = 15;
+ */
+public boolean hasColumnQualifier() {
+  return ((bitField0_ & 0x4000) == 0x4000);
+}
+/**
+ * optional int32 columnQualifier = 15;
+ */
+public int getColumnQualifier() {
+  return columnQualifier_;
+}
+
 private void initFields() {
   columnNameBytes_ = com.google.protobuf.ByteString.EMPTY;
   familyNameBytes_ = com.google.protobuf.ByteString.EMPTY;
@@ -724,6 +755,7 @@ public final class PTableProtos {
   expression_ = "";
   isRowTimestamp_ = false;
   isDynamic_ = false;
+  columnQualifier_ = 0;
 }
 private byte memoizedIsInitialized = -1;
 public final boolean isInitialized() {
@@ -799,6 +831,9 @@ public final class PTableProtos {
   if (((bitField0_ & 0x2000) == 0x2000)) {
 output.writeBool(14, isDynamic_);
   }
+  if (((bitField0_ & 0x4000) == 0x4000)) {
+output.writeInt32(15, columnQualifier_);
+  }
   getUnknownFields().writeTo(output);
 }
 
@@ -864,6 +899,10 @@ public final class PTableProtos {
 size += com.google.protobuf.CodedOutputStream
   .computeBoolSize(14, isDynamic_);
   }
+  if (((bitField0_ & 0x4000) == 0x4000)) {
+size += com.google.protobuf.CodedOutputStream
+  .computeInt32Size(15, columnQualifier_);
+  }
   size += getUnknownFields().getSerializedSize();
   memoizedSerializedSize = size;
   return size;
@@ -957,6 +996,11 @@ public final class PTableProtos {
 result = result && (getIsDynamic()
 == other.getIsDynamic());
   }
+  result = result && (hasColumnQualifier() == other.hasColumnQualifier());
+  if (hasColumnQualifier()) {
+result = result && (getColumnQualifier()
+== other.getColumnQualifier());
+  }
   result = result &&
   getUnknownFields().equals(other.getUnknownFields());
   return result;
@@ -1026,6 +1070,10 @@ public final class PTableProtos {
 hash = (37 * hash) + ISDYNAMIC_FIELD_NUMBER;
 hash = (53 * hash) + hashBoolean(getIsDynamic());
   }
+  if (hasColumnQualifier()) {
+hash = (37 * hash) + COLUMNQUALIFIER_FIELD_NUMBER;
+hash = (53 * hash) + getColumnQualifier();
+  }
   hash = (29 * hash) + getUnknownFields().hashCode();
   memoizedHashCode = hash;
   return hash;
@@ -1163,6 +1211,8 @@ public final class PTableProtos {
 bitField0_ = (bitField0_ & ~0x1000);
 isDynamic_ = false;
 bitField0_ = (bitField0_ & ~0x2000);
+columnQualifier_ = 0;
+bitField0_ = (bitField0_ & ~0x4000);
 return this;
   }
 
@@ -1247,6 +1297,10 @@ public final class PTableProtos {
   to_bitField0_ |= 0x2000;
 }
 result.isDynamic_ = isDynamic_;
+if (((from_bitField0_ & 0x4000) == 0x4000)) {
+  to_bitField0_ |= 0x4000;
+}
+result.columnQualifier_ = columnQualifier_;
 result.bitField0_ = to_bitField0_;
 onBuilt();
 return result;
@@ -1309,6 +1363,9 @@ public final class PTableProtos {
 if (other.hasIsDynamic()) {
   setIsDynamic(other.getIsDynamic());
 }
+if (other.hasColumnQualifier()
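A hedged sketch of exercising the new optional field through the generated API (accessor names are from the diff above; buildPartial() is used because PColumn declares required fields this sketch leaves unset, and the qualifier value is hypothetical):

    PTableProtos.PColumn column = PTableProtos.PColumn.newBuilder()
            .setColumnQualifier(42)
            .buildPartial();
    if (column.hasColumnQualifier()) {
        int cq = column.getColumnQualifier(); // 42; unset fields read as 0 per initFields()
    }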

[31/50] [abbrv] phoenix git commit: PHOENIX-3386 PhoenixStorageHandler throws NPE if local tasks executed via child

2016-11-04 Thread samarth
PHOENIX-3386 PhoenixStorageHandler throws NPE if local tasks executed via child

Signed-off-by: Sergey Soldatov 


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/cf70820b
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/cf70820b
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/cf70820b

Branch: refs/heads/encodecolumns2
Commit: cf70820b9dee6968ac26c66c5c98079158a48ac1
Parents: bebcc55
Author: Sergey Soldatov 
Authored: Mon Oct 24 22:11:52 2016 -0700
Committer: Sergey Soldatov 
Committed: Wed Nov 2 12:58:40 2016 -0700

--
 .../apache/phoenix/mapreduce/util/PhoenixConfigurationUtil.java | 2 ++
 .../java/org/apache/phoenix/hive/PhoenixStorageHandler.java | 4 
 .../org/apache/phoenix/hive/util/PhoenixStorageHandlerUtil.java | 5 ++---
 3 files changed, 8 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/cf70820b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/PhoenixConfigurationUtil.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/PhoenixConfigurationUtil.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/PhoenixConfigurationUtil.java
index b1879d1..2264acd 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/PhoenixConfigurationUtil.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/PhoenixConfigurationUtil.java
@@ -55,6 +55,8 @@ import com.google.common.collect.Lists;
 public final class PhoenixConfigurationUtil {
 
 private static final Log LOG = LogFactory.getLog(PhoenixInputFormat.class);
+
+public static final String SESSION_ID = "phoenix.sessionid";
 
 public static final String UPSERT_STATEMENT = "phoenix.upsert.stmt";
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/cf70820b/phoenix-hive/src/main/java/org/apache/phoenix/hive/PhoenixStorageHandler.java
--
diff --git 
a/phoenix-hive/src/main/java/org/apache/phoenix/hive/PhoenixStorageHandler.java 
b/phoenix-hive/src/main/java/org/apache/phoenix/hive/PhoenixStorageHandler.java
index 2bc8ace..bda2282 100644
--- 
a/phoenix-hive/src/main/java/org/apache/phoenix/hive/PhoenixStorageHandler.java
+++ 
b/phoenix-hive/src/main/java/org/apache/phoenix/hive/PhoenixStorageHandler.java
@@ -31,6 +31,7 @@ import 
org.apache.hadoop.hive.ql.metadata.HiveStoragePredicateHandler;
 import org.apache.hadoop.hive.ql.metadata.InputEstimator;
 import org.apache.hadoop.hive.ql.plan.ExprNodeDesc;
 import org.apache.hadoop.hive.ql.plan.TableDesc;
+import org.apache.hadoop.hive.ql.session.SessionState;
 import org.apache.hadoop.hive.serde2.Deserializer;
 import org.apache.hadoop.hive.serde2.SerDe;
 import org.apache.hadoop.mapred.InputFormat;
@@ -142,7 +143,10 @@ public class PhoenixStorageHandler extends 
DefaultStorageHandler implements
 
tableProperties.setProperty(PhoenixStorageHandlerConstants.PHOENIX_TABLE_NAME,
 tableName);
 }
+SessionState sessionState = SessionState.get();
 
+String sessionId = sessionState.getSessionId();
+jobProperties.put(PhoenixConfigurationUtil.SESSION_ID, sessionId);
 jobProperties.put(PhoenixConfigurationUtil.INPUT_TABLE_NAME, 
tableName);
 jobProperties.put(PhoenixStorageHandlerConstants.ZOOKEEPER_QUORUM, 
tableProperties
 .getProperty(PhoenixStorageHandlerConstants.ZOOKEEPER_QUORUM,
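The companion change in PhoenixStorageHandlerUtil (diff below) reads the id back from the job configuration instead of calling SessionState.get(), which returns null inside a spawned child task and caused the NPE. A hedged sketch of the read side (the constant is from this commit; jobConf stands for any Hadoop JobConf):

    // In the child task the Hive session is unavailable, so recover the id
    // that PhoenixStorageHandler stashed into the job properties above:
    String sessionId = jobConf.get(PhoenixConfigurationUtil.SESSION_ID);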

http://git-wip-us.apache.org/repos/asf/phoenix/blob/cf70820b/phoenix-hive/src/main/java/org/apache/phoenix/hive/util/PhoenixStorageHandlerUtil.java
--
diff --git 
a/phoenix-hive/src/main/java/org/apache/phoenix/hive/util/PhoenixStorageHandlerUtil.java
 
b/phoenix-hive/src/main/java/org/apache/phoenix/hive/util/PhoenixStorageHandlerUtil.java
index eb5fd24..1313fdb 100644
--- 
a/phoenix-hive/src/main/java/org/apache/phoenix/hive/util/PhoenixStorageHandlerUtil.java
+++ 
b/phoenix-hive/src/main/java/org/apache/phoenix/hive/util/PhoenixStorageHandlerUtil.java
@@ -33,6 +33,7 @@ import org.apache.hadoop.mapred.JobConf;
 import org.apache.hadoop.net.DNS;
 import org.apache.phoenix.hive.constants.PhoenixStorageHandlerConstants;
 import org.apache.phoenix.hive.ql.index.IndexSearchCondition;
+import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
 
 import javax.naming.NamingException;
 import java.io.ByteArrayInputStream;
@@ -182,10 +183,8 @@ public class PhoenixStorageHandlerUtil {
 }
 
 public static String getTableKeyOfSession(JobConf jobConf, String 
tableName) {
-SessionState sessionState = SessionState.get();
-
-String sessionId = sessionS

[08/50] [abbrv] phoenix git commit: PHOENIX-3420 Upgrade to sqlline 1.2.0

2016-11-04 Thread samarth
PHOENIX-3420 Upgrade to sqlline 1.2.0


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/613a5b79
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/613a5b79
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/613a5b79

Branch: refs/heads/encodecolumns2
Commit: 613a5b79349740e2f64b0e9ffe2629c84f12eb4a
Parents: 70979ab
Author: James Taylor 
Authored: Thu Oct 27 13:08:32 2016 -0700
Committer: James Taylor 
Committed: Thu Oct 27 14:00:40 2016 -0700

--
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/613a5b79/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 81f239a..f7db2d7 100644
--- a/pom.xml
+++ b/pom.xml
@@ -82,7 +82,7 @@
 2.5
 1.2
 1.0
-    <sqlline.version>1.1.9</sqlline.version>
+    <sqlline.version>1.2.0</sqlline.version>
 13.0.1
 1.4.0
 1.3.9-1



[26/50] [abbrv] phoenix git commit: PHOENIX-3433 Local or view indexes cannot be created after PHOENIX-3254 if namespaces enabled(Rajeshbabu)

2016-11-04 Thread samarth
PHOENIX-3433 Local or view indexes cannot be created after PHOENIX-3254 if 
namespaces enabled(Rajeshbabu)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/c5fed780
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/c5fed780
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/c5fed780

Branch: refs/heads/encodecolumns2
Commit: c5fed780e80b10d70676e5b57b9e32d864fc0cb2
Parents: d45feae
Author: Rajeshbabu Chintaguntla 
Authored: Wed Nov 2 22:11:05 2016 +0530
Committer: Rajeshbabu Chintaguntla 
Committed: Wed Nov 2 22:11:05 2016 +0530

--
 .../phoenix/end2end/index/BaseLocalIndexIT.java   | 14 ++
 .../phoenix/coprocessor/MetaDataEndpointImpl.java |  7 ++-
 2 files changed, 16 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/c5fed780/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/BaseLocalIndexIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/BaseLocalIndexIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/BaseLocalIndexIT.java
index 5c8670d..e818665 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/BaseLocalIndexIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/BaseLocalIndexIT.java
@@ -22,18 +22,23 @@ import java.sql.DriverManager;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.Collection;
+import java.util.Map;
 import java.util.Properties;
 
 import org.apache.phoenix.end2end.ParallelStatsDisabledIT;
 import org.apache.phoenix.query.BaseTest;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
 import org.apache.phoenix.util.TestUtil;
 import org.junit.Before;
+import org.junit.BeforeClass;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
 import org.junit.runners.Parameterized.Parameters;
 
+import com.google.common.collect.Maps;
+
 @RunWith(Parameterized.class)
 public abstract class BaseLocalIndexIT extends ParallelStatsDisabledIT {
 protected boolean isNamespaceMapped;
@@ -48,6 +53,15 @@ public abstract class BaseLocalIndexIT extends 
ParallelStatsDisabledIT {
 schemaName = BaseTest.generateUniqueName();
 }
 
+@BeforeClass
+public static void doSetup() throws Exception {
+Map<String, String> serverProps = Maps.newHashMapWithExpectedSize(7);
+serverProps.put(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, "true");
+Map<String, String> clientProps = Maps.newHashMapWithExpectedSize(1);
+clientProps.put(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, "true");
+setUpTestDriver(new ReadOnlyProps(serverProps.entrySet().iterator()), 
new ReadOnlyProps(clientProps.entrySet().iterator()));
+}
+
 protected Connection getConnection() throws SQLException{
 Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
 props.setProperty(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, 
Boolean.toString(isNamespaceMapped));

http://git-wip-us.apache.org/repos/asf/phoenix/blob/c5fed780/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 1c41d54..9a7b9e3 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -1419,8 +1419,7 @@ public class MetaDataEndpointImpl extends 
MetaDataProtocol implements Coprocesso
 if (parentTable!=null && 
parentTable.getAutoPartitionSeqName()!=null) {
 long autoPartitionNum = 1;
 final Properties props = new Properties();
-UpgradeUtil.doNotUpgradeOnFirstConnection(props);
-try (PhoenixConnection connection = 
DriverManager.getConnection(MetaDataUtil.getJdbcUrl(env), 
props).unwrap(PhoenixConnection.class);
+try (PhoenixConnection connection = 
QueryUtil.getConnectionOnServer(env.getConfiguration()).unwrap(PhoenixConnection.class);
 Statement stmt = connection.createStatement()) {
 String seqName = parentTable.getAutoPartitionSeqName();
 // Not going through the standard route of using 
statement.execute() as that code path
@@ -1487,9 +1486,7 @@ public class MetaDataEndpointImpl extends 
MetaDataProtocol implements Coprocesso

[38/50] [abbrv] phoenix git commit: PHOENIX-3439 Query using an RVC based on the base table PK is incorrectly using an index and doing a full scan instead of a point query

2016-11-04 Thread samarth
PHOENIX-3439 Query using an RVC based on the base table PK is incorrectly using 
an index and doing a full scan instead of a point query


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/d737ed3a
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/d737ed3a
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/d737ed3a

Branch: refs/heads/encodecolumns2
Commit: d737ed3a3a3c1272af5a488b72228bcb7c1233f4
Parents: 5909249
Author: James Taylor 
Authored: Thu Nov 3 16:45:22 2016 -0700
Committer: James Taylor 
Committed: Thu Nov 3 16:50:07 2016 -0700

--
 .../main/java/org/apache/phoenix/compile/ScanRanges.java  | 10 +-
 .../org/apache/phoenix/compile/QueryOptimizerTest.java| 10 ++
 2 files changed, 19 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/d737ed3a/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
index 95eee60..19a4692 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
@@ -567,9 +567,17 @@ public class ScanRanges {
 }
 
 public int getBoundPkColumnCount() {
-return this.useSkipScanFilter ? ScanUtil.getRowKeyPosition(slotSpan, 
ranges.size()) : getBoundPkSpan(ranges, slotSpan);
+return this.useSkipScanFilter ? ScanUtil.getRowKeyPosition(slotSpan, 
ranges.size()) : Math.max(getBoundPkSpan(ranges, slotSpan), 
getBoundMinMaxSlotCount());
 }
 
+public int getBoundMinMaxSlotCount() {
+if (minMaxRange == KeyRange.EMPTY_RANGE || minMaxRange == 
KeyRange.EVERYTHING_RANGE) {
+return 0;
+}
+// The minMaxRange is always a single key
+return 1 + slotSpan[0];
+}
+
 public int getBoundSlotCount() {
 int count = 0;
 boolean hasUnbound = false;
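A worked illustration of the new accounting, assuming an RVC over a four-column PK compiles into a single minMaxRange key spanning all four columns (so slotSpan[0] == 3):

    // getBoundMinMaxSlotCount(): minMaxRange is always a single key -> 1 + slotSpan[0]
    int boundByMinMax = 1 + 3; // all four PK columns are bound
    // getBoundPkColumnCount() now takes the max of this and getBoundPkSpan(...),
    // so the fully bound base-table PK outranks the index and compiles to a point query.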

http://git-wip-us.apache.org/repos/asf/phoenix/blob/d737ed3a/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryOptimizerTest.java
--
diff --git 
a/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryOptimizerTest.java 
b/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryOptimizerTest.java
index b3a845c..e81d68a 100644
--- 
a/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryOptimizerTest.java
+++ 
b/phoenix-core/src/test/java/org/apache/phoenix/compile/QueryOptimizerTest.java
@@ -637,6 +637,16 @@ public class QueryOptimizerTest extends 
BaseConnectionlessQueryTest {
 assertEquals("IDX", 
plan.getTableRef().getTable().getTableName().getString());
 }
 
+@Test
+public void testTableUsedWithQueryMore() throws Exception {
+Connection conn = DriverManager.getConnection(getUrl());
+conn.createStatement().execute("CREATE TABLE t (k1 CHAR(3) NOT NULL, 
k2 CHAR(15) NOT NULL, k3 DATE NOT NULL, k4 CHAR(15) NOT NULL, CONSTRAINT pk 
PRIMARY KEY (k1,k2,k3,k4))");
+conn.createStatement().execute("CREATE INDEX idx ON t(k1,k3,k2,k4)");
+PhoenixStatement stmt = 
conn.createStatement().unwrap(PhoenixStatement.class);
+QueryPlan plan = stmt.optimizeQuery("SELECT * FROM t WHERE 
(k1,k2,k3,k4) > ('001','001xx03DHml',to_date('2015-10-21 
09:50:55.0'),'017xx022FuI')");
+assertEquals("T", 
plan.getTableRef().getTable().getTableName().getString());
+}
+
 private void assertPlanDetails(PreparedStatement stmt, String 
expectedPkCols, String expectedPkColsDataTypes, boolean expectedHasOrderBy, int 
expectedLimit) throws SQLException {
 Connection conn = stmt.getConnection();
 QueryPlan plan = PhoenixRuntime.getOptimizedQueryPlan(stmt);



[39/50] [abbrv] phoenix git commit: PHOENIX-3421 Column name lookups fail when on an indexed table

2016-11-04 Thread samarth
PHOENIX-3421 Column name lookups fail when on an indexed table


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/59092491
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/59092491
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/59092491

Branch: refs/heads/encodecolumns2
Commit: 5909249136494fc273491536b2dbf72dd9687863
Parents: eedb2b4
Author: James Taylor 
Authored: Thu Nov 3 16:21:28 2016 -0700
Committer: James Taylor 
Committed: Thu Nov 3 16:50:07 2016 -0700

--
 .../org/apache/phoenix/end2end/QueryMoreIT.java |   4 +-
 .../org/apache/phoenix/util/PhoenixRuntime.java | 132 ++-
 .../phoenix/util/PhoenixEncodeDecodeTest.java   |   4 +-
 .../apache/phoenix/util/PhoenixRuntimeTest.java |   8 +-
 4 files changed, 137 insertions(+), 11 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/59092491/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryMoreIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryMoreIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryMoreIT.java
index b9162de..2b27f00 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryMoreIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryMoreIT.java
@@ -275,7 +275,7 @@ public class QueryMoreIT extends ParallelStatsDisabledIT {
 values[i] = rs.getObject(i + 1);
 }
 conn = getTenantSpecificConnection(tenantId);
-pkIds.add(Base64.encodeBytes(PhoenixRuntime.encodeValues(conn, 
tableOrViewName.toUpperCase(), values, columns)));
+
pkIds.add(Base64.encodeBytes(PhoenixRuntime.encodeColumnValues(conn, 
tableOrViewName.toUpperCase(), values, columns)));
 }
 return pkIds.toArray(new String[pkIds.size()]);
 }
@@ -293,7 +293,7 @@ public class QueryMoreIT extends ParallelStatsDisabledIT {
 PreparedStatement stmt = conn.prepareStatement(query);
 int bindCounter = 1;
 for (int i = 0; i < cursorIds.length; i++) {
-Object[] pkParts = PhoenixRuntime.decodeValues(conn, 
tableName.toUpperCase(), Base64.decode(cursorIds[i]), columns);
+Object[] pkParts = PhoenixRuntime.decodeColumnValues(conn, 
tableName.toUpperCase(), Base64.decode(cursorIds[i]), columns);
 for (int j = 0; j < pkParts.length; j++) {
 stmt.setObject(bindCounter++, pkParts[j]);
 }
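The renamed API pair above round-trips a row's PK values through a byte[]; a hedged usage sketch (method names are from the diff; the table name is hypothetical, and values/columns are as in the test):

    byte[] cursorId = PhoenixRuntime.encodeColumnValues(conn, "MY_TABLE", values, columns);
    Object[] decoded = PhoenixRuntime.decodeColumnValues(conn, "MY_TABLE", cursorId, columns);
    // decoded mirrors values, ready to bind into the query-more statement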

http://git-wip-us.apache.org/repos/asf/phoenix/blob/59092491/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java 
b/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java
index b2f9ffc..0c74b84 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java
@@ -819,7 +819,7 @@ public class PhoenixRuntime {
 public static List> getPkColsForSql(Connection conn, 
QueryPlan plan) throws SQLException {
 checkNotNull(plan);
 checkNotNull(conn);
-List pkColumns = getPkColumns(plan.getTableRef().getTable(), 
conn, true);
+List pkColumns = getPkColumns(plan.getTableRef().getTable(), 
conn);
 List> columns = 
Lists.newArrayListWithExpectedSize(pkColumns.size());
 String columnName;
 String familyName;
@@ -924,6 +924,7 @@ public class PhoenixRuntime {
 return sqlTypeName;
 }
 
+@Deprecated
 private static List getPkColumns(PTable ptable, Connection conn, 
boolean forDataTable) throws SQLException {
 PhoenixConnection pConn = conn.unwrap(PhoenixConnection.class);
 List pkColumns = ptable.getPKColumns();
@@ -946,6 +947,28 @@ public class PhoenixRuntime {
 return pkColumns;
 }
 
+private static List getPkColumns(PTable ptable, Connection conn) 
throws SQLException {
+PhoenixConnection pConn = conn.unwrap(PhoenixConnection.class);
+List pkColumns = ptable.getPKColumns();
+
+// Skip the salting column and the view index id column if present.
+// Skip the tenant id column too if the connection is tenant specific 
and the table used by the query plan is multi-tenant
+int offset = (ptable.getBucketNum() == null ? 0 : 1) + 
(ptable.isMultiTenant() && pConn.getTenantId() != null ? 1 : 0) + 
(ptable.getViewIndexId() == null ? 0 : 1);
+
+// get a sublist of pkColumns by skipping the offset columns.
+pkColumns = pkColumns.subList(offset, pkColumns.size());
+
+if (ptable.getType() == PT

[03/50] [abbrv] phoenix git commit: PHOENIX-3370 VIEW derived from another VIEW with WHERE on a TABLE doesn't use parent VIEW indexes

2016-11-04 Thread samarth
PHOENIX-3370 VIEW derived from another VIEW with WHERE on a TABLE doesn't use 
parent VIEW indexes


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/9ebd0921
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/9ebd0921
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/9ebd0921

Branch: refs/heads/encodecolumns2
Commit: 9ebd0921952501618f3e87fc02fba09ba779d1ef
Parents: 0dc0d79
Author: James Taylor 
Authored: Tue Oct 25 21:25:03 2016 -0700
Committer: James Taylor 
Committed: Tue Oct 25 21:26:49 2016 -0700

--
 .../phoenix/end2end/index/ViewIndexIT.java  | 39 +---
 1 file changed, 34 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/9ebd0921/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ViewIndexIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ViewIndexIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ViewIndexIT.java
index 99c8d2b..46aefff 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ViewIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ViewIndexIT.java
@@ -23,6 +23,7 @@ import static 
org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 import java.sql.Connection;
 import java.sql.Date;
@@ -44,6 +45,7 @@ import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.PNameFactory;
 import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.schema.TableNotFoundException;
 import org.apache.phoenix.util.MetaDataUtil;
 import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
@@ -296,11 +298,10 @@ public class ViewIndexIT extends ParallelStatsDisabledIT {
 assertEquals(expectedCount, rs.getInt(1));
 // Ensure that index is being used
 rs = stmt.executeQuery("EXPLAIN SELECT COUNT(*) FROM " + 
fullTableName);
-// Uses index and finds correct number of rows
-assertEquals("CLIENT PARALLEL 1-WAY RANGE SCAN OVER " + 
Bytes.toString(MetaDataUtil.getViewIndexPhysicalName(Bytes.toBytes(fullBaseName)))
 + " [-32768,'123451234512345']\n" + 
-"SERVER FILTER BY FIRST KEY ONLY\n" + 
-"SERVER AGGREGATE INTO SINGLE ROW",
-QueryUtil.getExplainPlan(rs));
+if (fullBaseName != null) {
+// Uses index and finds correct number of rows
+assertTrue(QueryUtil.getExplainPlan(rs).startsWith("CLIENT PARALLEL 1-WAY RANGE SCAN OVER " + Bytes.toString(MetaDataUtil.getViewIndexPhysicalName(Bytes.toBytes(fullBaseName)))));
 
+}
 
 // Force it not to use index and still finds correct number of rows
 rs = stmt.executeQuery("SELECT /*+ NO_INDEX */ * FROM " + 
fullTableName);
@@ -369,13 +370,41 @@ public class ViewIndexIT extends ParallelStatsDisabledIT {
 tsConn.commit();
 assertRowCount(tsConn, tsViewFullName, baseFullName, 8);
 
+// Use different connection for delete
 Connection tsConn2 = DriverManager.getConnection(getUrl(), 
tsProps);
 tsConn2.createStatement().execute("DELETE FROM " + tsViewFullName 
+ " WHERE DOUBLE1 > 7.5 AND DOUBLE1 < 9.5");
 tsConn2.commit();
 assertRowCount(tsConn2, tsViewFullName, baseFullName, 6);
 
+tsConn2.createStatement().execute("DROP VIEW " + tsViewFullName);
+// Should drop view and index and remove index data
+conn.createStatement().execute("DROP VIEW " + viewFullName);
+// Deletes table data (but wouldn't update index)
+conn.setAutoCommit(true);
+conn.createStatement().execute("DELETE FROM " + baseFullName);
+Connection tsConn3 = DriverManager.getConnection(getUrl(), 
tsProps);
+try {
+tsConn3.createStatement().execute("SELECT * FROM " + 
tsViewFullName + " LIMIT 1");
+fail("Expected table not to be found");
+} catch (TableNotFoundException e) {
+
+}
+conn.createStatement().execute(
+"CREATE VIEW " + viewFullName + " (\n" + 
+"INT1 BIGINT NOT NULL,\n" + 
+"DOUBLE1 DECIMAL(12, 3),\n" +
+"IS_BOOLEAN BOOLEAN,\n" + 
+"TEXT1 VARCHAR,\n" + "CONSTRAINT PKVIEW PRIMARY 
KEY\n" + "(\n" +
+  

Build failed in Jenkins: Phoenix-encode-columns #19

2016-11-04 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-1 (Ubuntu ubuntu1 yahoo-not-h2 ubuntu docker) in 
workspace 
Cloning the remote Git repository
Cloning repository https://git-wip-us.apache.org/repos/asf/phoenix.git
 > git init  # 
 > timeout=10
Fetching upstream changes from 
https://git-wip-us.apache.org/repos/asf/phoenix.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/phoenix.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/phoenix.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/phoenix.git # timeout=10
Fetching upstream changes from 
https://git-wip-us.apache.org/repos/asf/phoenix.git
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/phoenix.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse origin/encodecolumns2^{commit} # timeout=10
Checking out Revision ede568e9c4e4d35e7f4afe19637c8dd7cf5af23c 
(origin/encodecolumns2)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f ede568e9c4e4d35e7f4afe19637c8dd7cf5af23c
 > git rev-list 8c31c93ab4808473e91880747ce2295f296006f3 # timeout=10
First time build. Skipping changelog.
No emails were triggered.
[EnvInject] - Executing scripts and injecting environment variables after the 
SCM step.
[EnvInject] - Injecting as environment variables the properties content 
MAVEN_OPTS=-Xmx3G

[EnvInject] - Variables injected successfully.
[Phoenix-encode-columns] $ /bin/bash -xe /tmp/hudson374552990880096.sh
+ echo 'DELETING ~/.m2/repository/org/apache/htrace. See 
https://issues.apache.org/jira/browse/PHOENIX-1802'
DELETING ~/.m2/repository/org/apache/htrace. See 
https://issues.apache.org/jira/browse/PHOENIX-1802
+ echo 'CURRENT CONTENT:'
CURRENT CONTENT:
+ ls /home/jenkins/.m2/repository/org/apache/htrace
htrace
htrace-core
[Phoenix-encode-columns] $ /home/jenkins/tools/maven/latest3/bin/mvn -U clean 
install -Dcheckstyle.skip=true
[INFO] Scanning for projects...
[INFO] 
[INFO] Reactor Build Order:
[INFO] 
[INFO] Apache Phoenix
[INFO] Phoenix Core
[INFO] Phoenix - Flume
[INFO] Phoenix - Pig
[INFO] Phoenix Query Server Client
[INFO] Phoenix Query Server
[INFO] Phoenix - Pherf
[INFO] Phoenix - Spark
[INFO] Phoenix - Hive
[INFO] Phoenix Client
[INFO] Phoenix Server
[INFO] Phoenix Assembly
[INFO] Phoenix - Tracing Web Application
[INFO] 
[INFO] 
[INFO] Building Apache Phoenix 4.9.0-HBase-0.98
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ phoenix ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.13:check (validate) @ phoenix ---
[INFO] 
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ phoenix ---
[INFO] 
[INFO] --- maven-source-plugin:2.2.1:jar-no-fork (attach-sources) @ phoenix ---
[INFO] 
[INFO] --- maven-jar-plugin:2.4:test-jar (default) @ phoenix ---
[WARNING] JAR will be empty - no content was marked for inclusion!
[INFO] Building jar: 

[INFO] 
[INFO] --- maven-site-plugin:3.2:attach-descriptor (attach-descriptor) @ 
phoenix ---
[INFO] 
[INFO] --- maven-install-plugin:2.5.2:install (default-install) @ phoenix ---
[INFO] Installing 
 to 
/home/jenkins/.m2/repository/org/apache/phoenix/phoenix/4.9.0-HBase-0.98/phoenix-4.9.0-HBase-0.98.pom
[INFO] Installing 

 to 
/home/jenkins/.m2/repository/org/apache/phoenix/phoenix/4.9.0-HBase-0.98/phoenix-4.9.0-HBase-0.98-tests.jar
[INFO] 
[INFO] 
[INFO] Building Phoenix Core 4.9.0-HBase-0.98
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ phoenix-core ---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.13:check (validate) @ phoenix-core ---
[INFO] 
[INFO] --- build-helper-maven-plugin:1.9.1:add-test-source (add-test-source) @ 
phoenix-core ---
[INFO] Test Source director

[2/2] phoenix git commit: PHOENIX-3449 Ignore hanging IndexExtendedIT tests until they can be investigated

2016-11-04 Thread jamestaylor
PHOENIX-3449 Ignore hanging IndexExtendedIT tests until they can be investigated


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/77ab7dfa
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/77ab7dfa
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/77ab7dfa

Branch: refs/heads/master
Commit: 77ab7dfa233d57cfc4e5dcff33d2e4d0dad7959c
Parents: b157c48
Author: James Taylor 
Authored: Fri Nov 4 19:39:19 2016 -0700
Committer: James Taylor 
Committed: Fri Nov 4 19:39:19 2016 -0700

--
 .../java/org/apache/phoenix/end2end/IndexExtendedIT.java | 11 ++-
 .../phoenix/end2end/index/ReadOnlyIndexFailureIT.java|  5 +
 2 files changed, 3 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/77ab7dfa/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
index b723b01..b79e557 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
@@ -54,8 +54,8 @@ import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.TestUtil;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
-import org.junit.Ignore;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
 import org.junit.runners.Parameterized.Parameters;
@@ -67,6 +67,7 @@ import com.google.common.collect.Maps;
  * Tests for the {@link IndexTool}
  */
 @RunWith(Parameterized.class)
+@Category(NeedsOwnMiniClusterTest.class)
 public class IndexExtendedIT extends BaseTest {
 private final boolean localIndex;
 private final boolean transactional;
@@ -129,9 +130,6 @@ public class IndexExtendedIT extends BaseTest {
 if (!mutable || transactional) {
 return;
 }
-if (localIndex) { // FIXME: remove once this test works for local 
indexes
-return;
-}
 String schemaName = generateUniqueName();
 String dataTableName = generateUniqueName();
 String dataTableFullName = SchemaUtil.getTableName(schemaName, 
dataTableName);
@@ -204,9 +202,6 @@ public class IndexExtendedIT extends BaseTest {
 
 @Test
 public void testSecondaryIndex() throws Exception {
-if (localIndex) { // FIXME: remove once this test works for local 
indexes
-return;
-}
 String schemaName = generateUniqueName();
 String dataTableName = generateUniqueName();
 String dataTableFullName = SchemaUtil.getTableName(schemaName, 
dataTableName);
@@ -409,7 +404,6 @@ public class IndexExtendedIT extends BaseTest {
 }
 
 // Moved from LocalIndexIT because it was causing parallel runs to hang
-@Ignore
 @Test
 public void testLocalIndexScanAfterRegionSplit() throws Exception {
 // This test just needs be run once
@@ -512,7 +506,6 @@ public class IndexExtendedIT extends BaseTest {
 }
 
 // Moved from LocalIndexIT because it was causing parallel runs to hang
-@Ignore
 @Test
 public void testLocalIndexScanAfterRegionsMerge() throws Exception {
 // This test just needs be run once

http://git-wip-us.apache.org/repos/asf/phoenix/blob/77ab7dfa/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ReadOnlyIndexFailureIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ReadOnlyIndexFailureIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ReadOnlyIndexFailureIT.java
index a2213ea..cf3cb29 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ReadOnlyIndexFailureIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ReadOnlyIndexFailureIT.java
@@ -29,7 +29,6 @@ import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.Collection;
-import java.util.Collections;
 import java.util.Map;
 import java.util.Properties;
 import java.util.Set;
@@ -113,11 +112,9 @@ public class ReadOnlyIndexFailureIT extends 
BaseOwnClusterIT {
 serverProps.put("hbase.coprocessor.region.classes", 
FailingRegionObserver.class.getName());
 serverProps.put("hbase.coprocessor.abortonerror", "false");
 serverProps.put(Indexer.CHECK_VERSION_CONF_KEY, "false");
-Map<String, String> clientProps = 
-Collections.singletonMap(QueryServices.TRANSACTIONS_ENABLED, 
"true");
 NUM_SLAVES_BASE = 4;
 setU

[1/2] phoenix git commit: PHOENIX-3454 ON DUPLICATE KEY construct doesn't work correctly when using lower case column names

2016-11-04 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/master e4e1570b8 -> 77ab7dfa2


PHOENIX-3454 ON DUPLICATE KEY construct doesn't work correctly when using lower 
case column names


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/b157c485
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/b157c485
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/b157c485

Branch: refs/heads/master
Commit: b157c485d5fc821b38319eb3f497063c1e9f0ffa
Parents: e4e1570
Author: James Taylor 
Authored: Fri Nov 4 18:47:03 2016 -0700
Committer: James Taylor 
Committed: Fri Nov 4 19:15:54 2016 -0700

--
 .../phoenix/end2end/OnDuplicateKeyIT.java   | 37 
 .../phoenix/index/PhoenixIndexBuilder.java  | 13 +--
 2 files changed, 48 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/b157c485/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
index 9a81026..d3cb0af 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
@@ -519,5 +519,42 @@ public class OnDuplicateKeyIT extends 
ParallelStatsDisabledIT {
 conn.close();
 }
 
+@Test
+public void testDeleteOnSingleLowerCaseVarcharColumn() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+Connection conn = DriverManager.getConnection(getUrl(), props);
+conn.setAutoCommit(false);
+String tableName = generateUniqueName();
+String ddl = " create table " + tableName + "(pk varchar primary key, 
\"counter1\" varchar, \"counter2\" smallint)";
+conn.createStatement().execute(ddl);
+String dml = "UPSERT INTO " + tableName + " VALUES('a','b') ON 
DUPLICATE KEY UPDATE \"counter1\" = null";
+conn.createStatement().execute(dml);
+conn.createStatement().execute(dml);
+conn.commit();
+
+ResultSet rs = conn.createStatement().executeQuery("SELECT * FROM " + 
tableName);
+assertTrue(rs.next());
+assertEquals("a",rs.getString(1));
+assertEquals(null,rs.getString(2));
+assertFalse(rs.next());
+
+dml = "UPSERT INTO " + tableName + " VALUES('a','b',0)";
+conn.createStatement().execute(dml);
+dml = "UPSERT INTO " + tableName + " VALUES('a','b', 0) ON DUPLICATE 
KEY UPDATE \"counter1\" = null, \"counter2\" = \"counter2\" + 1";
+conn.createStatement().execute(dml);
+dml = "UPSERT INTO " + tableName + " VALUES('a','b', 0) ON DUPLICATE 
KEY UPDATE \"counter1\" = 'c', \"counter2\" = \"counter2\" + 1";
+conn.createStatement().execute(dml);
+conn.commit();
+
+rs = conn.createStatement().executeQuery("SELECT * FROM " + tableName);
+assertTrue(rs.next());
+assertEquals("a",rs.getString(1));
+assertEquals("c",rs.getString(2));
+assertEquals(2,rs.getInt(3));
+assertFalse(rs.next());
+
+conn.close();
+}
+
 }
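The quoting rule the new test exercises, distilled (standard Phoenix identifier semantics; the table name here is hypothetical):

    // Unquoted identifiers are normalized to upper case, so a lower case column
    // must stay double-quoted everywhere, including the ON DUPLICATE KEY clause:
    String dml = "UPSERT INTO t VALUES('a','b') ON DUPLICATE KEY UPDATE \"counter1\" = null";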
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/b157c485/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java 
b/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java
index ac1e2e4..ae0a19f 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java
@@ -31,6 +31,7 @@ import java.util.Map;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
 import org.apache.hadoop.hbase.KeyValue.Type;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.Get;
@@ -192,6 +193,7 @@ public class PhoenixIndexBuilder extends NonTxIndexBuilder {
 get.setFilter(new FirstKeyOnlyFilter());
 }
 MultiKeyValueTuple tuple;
+List flattenedCells = null;
List<Cell> cells = ((HRegion)this.env.getRegion()).get(get, false);
 if (cells.isEmpty()) {
 if (skipFirstOp) {
@@ -201,7 +203,8 @@ public class PhoenixIndexBuilder extends NonTxIndexBuilder {
 repeat--; // Skip first operation (if first wasn't ON 
DUPLICATE KEY IGNORE)
 }
 // Base c

[3/3] phoenix git commit: PHOENIX-3455 MutableIndexFailureIT is hanging on 4.x-HBase-1.1 branch

2016-11-04 Thread jamestaylor
PHOENIX-3455 MutableIndexFailureIT is hanging on 4.x-HBase-1.1 branch


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/adfae125
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/adfae125
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/adfae125

Branch: refs/heads/4.x-HBase-1.1
Commit: adfae125b9835c1f37befe22393f88e9421a7c39
Parents: 9f57567
Author: James Taylor 
Authored: Fri Nov 4 21:48:08 2016 -0700
Committer: James Taylor 
Committed: Fri Nov 4 21:48:08 2016 -0700

--
 .../end2end/index/MutableIndexFailureIT.java| 63 +++-
 1 file changed, 23 insertions(+), 40 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/adfae125/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
index 4263890..5ec9c24 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
@@ -43,9 +43,8 @@ import 
org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
 import org.apache.hadoop.hbase.coprocessor.SimpleRegionObserver;
 import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
 import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.phoenix.end2end.BaseOwnClusterIT;
 import org.apache.phoenix.end2end.NeedsOwnMiniClusterTest;
-import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.query.BaseTest;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.PIndexState;
@@ -56,6 +55,7 @@ import org.apache.phoenix.util.ReadOnlyProps;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.StringUtil;
 import org.apache.phoenix.util.TestUtil;
+import org.junit.AfterClass;
 import org.junit.BeforeClass;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
@@ -74,29 +74,34 @@ import com.google.common.collect.Maps;
 
 @Category(NeedsOwnMiniClusterTest.class)
 @RunWith(Parameterized.class)
-public class MutableIndexFailureIT extends BaseOwnClusterIT {
+public class MutableIndexFailureIT extends BaseTest {
 public static volatile boolean FAIL_WRITE = false;
 public static final String INDEX_NAME = "IDX";
 
 private String tableName;
 private String indexName;
-public static volatile String fullTableName;
+private String fullTableName;
 private String fullIndexName;
 
 private final boolean transactional;
 private final boolean localIndex;
 private final String tableDDLOptions;
 private final boolean isNamespaceMapped;
-private String schema = "TEST";
+private String schema = generateUniqueName();
+
+@AfterClass
+public static void doTeardown() throws Exception {
+tearDownMiniCluster();
+}
 
 public MutableIndexFailureIT(boolean transactional, boolean localIndex, 
boolean isNamespaceMapped) {
 this.transactional = transactional;
 this.localIndex = localIndex;
-this.tableDDLOptions = " SALT_BUCKETS=2 " + (transactional ? ", 
TRANSACTIONAL=true " : "");
+this.tableDDLOptions = transactional ? " TRANSACTIONAL=true " : "";
 this.tableName = (localIndex ? "L_" : "") + 
TestUtil.DEFAULT_DATA_TABLE_NAME + (transactional ? "_TXN" : "")
 + (isNamespaceMapped ? "_NM" : "");
 this.indexName = INDEX_NAME;
-fullTableName = SchemaUtil.getTableName(schema, tableName);
+this.fullTableName = SchemaUtil.getTableName(schema, tableName);
 this.fullIndexName = SchemaUtil.getTableName(schema, indexName);
 this.isNamespaceMapped = isNamespaceMapped;
 }
@@ -151,11 +156,6 @@ public class MutableIndexFailureIT extends 
BaseOwnClusterIT {
 FAIL_WRITE = false;
 conn.createStatement().execute(
 "CREATE " + (localIndex ? "LOCAL " : "") + "INDEX " + 
indexName + " ON " + fullTableName + " (v1) INCLUDE (v2)");
-// Create other index which should be local/global if the other 
index is global/local to
-// check the drop index.
-conn.createStatement().execute(
-"CREATE " + (!localIndex ? "LOCAL " : "") + "INDEX " + 
indexName + "_3" + " ON "
-+ fullTableName + " (v2) INCLUDE (v1)");
 conn.createStatement().execute(
 "CREATE " + (localIndex ? "LOCAL " : "") + "INDEX " + 
secondIndexName + " ON " + second

[1/3] phoenix git commit: PHOENIX-3449 Ignore hanging IndexExtendedIT tests until they can be investigated

2016-11-04 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 3de5f8027 -> adfae125b


PHOENIX-3449 Ignore hanging IndexExtendedIT tests until they can be investigated


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/9f57567d
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/9f57567d
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/9f57567d

Branch: refs/heads/4.x-HBase-1.1
Commit: 9f57567dc8026a50130f2fb9121f6814474ff0f5
Parents: 9795683
Author: James Taylor 
Authored: Fri Nov 4 19:39:19 2016 -0700
Committer: James Taylor 
Committed: Fri Nov 4 19:40:45 2016 -0700

--
 .../java/org/apache/phoenix/end2end/IndexExtendedIT.java | 11 ++-
 .../phoenix/end2end/index/ReadOnlyIndexFailureIT.java|  5 +
 2 files changed, 3 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/9f57567d/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
index b723b01..b79e557 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
@@ -54,8 +54,8 @@ import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.TestUtil;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
-import org.junit.Ignore;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
 import org.junit.runners.Parameterized.Parameters;
@@ -67,6 +67,7 @@ import com.google.common.collect.Maps;
  * Tests for the {@link IndexTool}
  */
 @RunWith(Parameterized.class)
+@Category(NeedsOwnMiniClusterTest.class)
 public class IndexExtendedIT extends BaseTest {
 private final boolean localIndex;
 private final boolean transactional;
@@ -129,9 +130,6 @@ public class IndexExtendedIT extends BaseTest {
 if (!mutable || transactional) {
 return;
 }
-if (localIndex) { // FIXME: remove once this test works for local indexes
-return;
-}
 String schemaName = generateUniqueName();
 String dataTableName = generateUniqueName();
 String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
@@ -204,9 +202,6 @@ public class IndexExtendedIT extends BaseTest {
 
 @Test
 public void testSecondaryIndex() throws Exception {
-if (localIndex) { // FIXME: remove once this test works for local indexes
-return;
-}
 String schemaName = generateUniqueName();
 String dataTableName = generateUniqueName();
 String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
@@ -409,7 +404,6 @@ public class IndexExtendedIT extends BaseTest {
 }
 
 // Moved from LocalIndexIT because it was causing parallel runs to hang
-@Ignore
 @Test
 public void testLocalIndexScanAfterRegionSplit() throws Exception {
 // This test just needs be run once
@@ -512,7 +506,6 @@ public class IndexExtendedIT extends BaseTest {
 }
 
 // Moved from LocalIndexIT because it was causing parallel runs to hang
-@Ignore
 @Test
 public void testLocalIndexScanAfterRegionsMerge() throws Exception {
 // This test just needs be run once
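
The net effect on this branch is to trade per-method @Ignore markers for a class-level JUnit 4 category: @Category(NeedsOwnMiniClusterTest.class) lets the build route the whole class to its own mini cluster instead of skipping individual tests. A minimal sketch of those two JUnit mechanisms, with a stand-in marker interface in place of Phoenix's NeedsOwnMiniClusterTest:

    import org.junit.Ignore;
    import org.junit.Test;
    import org.junit.experimental.categories.Category;

    // A JUnit category is just a marker type referenced from @Category.
    interface NeedsOwnMiniClusterTest {}

    @Category(NeedsOwnMiniClusterTest.class)
    public class CategorizedIT {
        @Ignore // excluded from every run until investigated
        @Test
        public void hangingScenario() {}

        @Test // runs, but only where the category is scheduled
        public void normalScenario() {}
    }

Build tooling (for example a Surefire or Failsafe groups setting) can then include or exclude all NeedsOwnMiniClusterTest classes as a unit.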

http://git-wip-us.apache.org/repos/asf/phoenix/blob/9f57567d/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ReadOnlyIndexFailureIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ReadOnlyIndexFailureIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ReadOnlyIndexFailureIT.java
index a2213ea..cf3cb29 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ReadOnlyIndexFailureIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ReadOnlyIndexFailureIT.java
@@ -29,7 +29,6 @@ import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.Collection;
-import java.util.Collections;
 import java.util.Map;
 import java.util.Properties;
 import java.util.Set;
@@ -113,11 +112,9 @@ public class ReadOnlyIndexFailureIT extends BaseOwnClusterIT {
 serverProps.put("hbase.coprocessor.region.classes", FailingRegionObserver.class.getName());
 serverProps.put("hbase.coprocessor.abortonerror", "false");
 serverProps.put(Indexer.CHECK_VERSION_CONF_KEY, "false");
-Map<String, String> clientProps = Collections.singletonMap(QueryServices.TRANSACTIONS_ENABLED, "true");

[2/3] phoenix git commit: PHOENIX-3454 ON DUPLICATE KEY construct doesn't work correctly when using lower case column names

2016-11-04 Thread jamestaylor
PHOENIX-3454 ON DUPLICATE KEY construct doesn't work correctly when using lower case column names


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/9795683a
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/9795683a
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/9795683a

Branch: refs/heads/4.x-HBase-1.1
Commit: 9795683a0c2db03a36a146873ec03a6ecd4a0796
Parents: 3de5f80
Author: James Taylor 
Authored: Fri Nov 4 18:47:03 2016 -0700
Committer: James Taylor 
Committed: Fri Nov 4 19:40:45 2016 -0700

--
 .../phoenix/end2end/OnDuplicateKeyIT.java   | 37 
 .../phoenix/index/PhoenixIndexBuilder.java  | 13 +--
 2 files changed, 48 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/9795683a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
index 9a81026..d3cb0af 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
@@ -519,5 +519,42 @@ public class OnDuplicateKeyIT extends ParallelStatsDisabledIT {
 conn.close();
 }
 
+@Test
+public void testDeleteOnSingleLowerCaseVarcharColumn() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+Connection conn = DriverManager.getConnection(getUrl(), props);
+conn.setAutoCommit(false);
+String tableName = generateUniqueName();
+String ddl = " create table " + tableName + "(pk varchar primary key, \"counter1\" varchar, \"counter2\" smallint)";
+conn.createStatement().execute(ddl);
+String dml = "UPSERT INTO " + tableName + " VALUES('a','b') ON DUPLICATE KEY UPDATE \"counter1\" = null";
+conn.createStatement().execute(dml);
+conn.createStatement().execute(dml);
+conn.commit();
+
+ResultSet rs = conn.createStatement().executeQuery("SELECT * FROM " + tableName);
+assertTrue(rs.next());
+assertEquals("a",rs.getString(1));
+assertEquals(null,rs.getString(2));
+assertFalse(rs.next());
+
+dml = "UPSERT INTO " + tableName + " VALUES('a','b',0)";
+conn.createStatement().execute(dml);
+dml = "UPSERT INTO " + tableName + " VALUES('a','b', 0) ON DUPLICATE KEY UPDATE \"counter1\" = null, \"counter2\" = \"counter2\" + 1";
+conn.createStatement().execute(dml);
+dml = "UPSERT INTO " + tableName + " VALUES('a','b', 0) ON DUPLICATE KEY UPDATE \"counter1\" = 'c', \"counter2\" = \"counter2\" + 1";
+conn.createStatement().execute(dml);
+conn.commit();
+
+rs = conn.createStatement().executeQuery("SELECT * FROM " + tableName);
+assertTrue(rs.next());
+assertEquals("a",rs.getString(1));
+assertEquals("c",rs.getString(2));
+assertEquals(2,rs.getInt(3));
+assertFalse(rs.next());
+
+conn.close();
+}
+
 }
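
The subject line points at the root cause: unquoted identifiers in Phoenix SQL are normalized to upper case, while double-quoted ones keep their case, and the atomic ON DUPLICATE KEY path previously mishandled the quoted lower-case form. A minimal JDBC sketch of the semantics the new test pins down; "jdbc:phoenix:localhost" and the table name are placeholders, not part of the patch:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class OnDuplicateKeySketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
                conn.setAutoCommit(true);
                // "counter1" (quoted) keeps its case; unquoted counter1 would be
                // normalized to COUNTER1 and name a different column.
                conn.createStatement().execute(
                    "CREATE TABLE T1 (pk VARCHAR PRIMARY KEY, \"counter1\" VARCHAR)");
                String dml = "UPSERT INTO T1 VALUES('a','b') "
                    + "ON DUPLICATE KEY UPDATE \"counter1\" = null";
                conn.createStatement().execute(dml); // row absent: plain insert
                conn.createStatement().execute(dml); // duplicate key: nulls "counter1"
            }
        }
    }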
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/9795683a/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java b/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java
index ac1e2e4..ae0a19f 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java
@@ -31,6 +31,7 @@ import java.util.Map;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
 import org.apache.hadoop.hbase.KeyValue.Type;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.Get;
@@ -192,6 +193,7 @@ public class PhoenixIndexBuilder extends NonTxIndexBuilder {
 get.setFilter(new FirstKeyOnlyFilter());
 }
 MultiKeyValueTuple tuple;
+List<Cell> flattenedCells = null;
 List<Cell> cells = ((HRegion)this.env.getRegion()).get(get, false);
 if (cells.isEmpty()) {
 if (skipFirstOp) {
@@ -201,7 +203,8 @@ public class PhoenixIndexBuilder extends NonTxIndexBuilder {
 repeat--; // Skip first operation (if first wasn't ON DUPLICATE KEY IGNORE)
 }
 // Base current state off of new row
-tuple = new MultiKeyValueTuple(flat

[3/4] phoenix git commit: PHOENIX-3449 Ignore hanging IndexExtendedIT tests until they can be investigated

2016-11-04 Thread jamestaylor
PHOENIX-3449 Ignore hanging IndexExtendedIT tests until they can be investigated


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/62aaaf23
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/62aaaf23
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/62aaaf23

Branch: refs/heads/4.x-HBase-0.98
Commit: 62aaaf2371911ac8f79f228df96a938ddfc1a1c6
Parents: 2cfc29d
Author: James Taylor 
Authored: Thu Nov 3 19:05:45 2016 -0700
Committer: James Taylor 
Committed: Fri Nov 4 22:53:31 2016 -0700

--
 .../it/java/org/apache/phoenix/end2end/IndexExtendedIT.java | 9 +
 1 file changed, 9 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/62aaaf23/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
index 5c037ed..bab1ae1 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
@@ -55,6 +55,7 @@ import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.TestUtil;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
+import org.junit.Ignore;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 import org.junit.runner.RunWith;
@@ -131,6 +132,9 @@ public class IndexExtendedIT extends BaseTest {
 if (!mutable || transactional) {
 return;
 }
+if (localIndex) { // FIXME: remove once this test works for local indexes
+return;
+}
 String schemaName = generateUniqueName();
 String dataTableName = generateUniqueName();
 String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
@@ -203,6 +207,9 @@ public class IndexExtendedIT extends BaseTest {
 
 @Test
 public void testSecondaryIndex() throws Exception {
+if (localIndex) { // FIXME: remove once this test works for local indexes
+return;
+}
 String schemaName = generateUniqueName();
 String dataTableName = generateUniqueName();
 String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
@@ -405,6 +412,7 @@ public class IndexExtendedIT extends BaseTest {
 }
 
 // Moved from LocalIndexIT because it was causing parallel runs to hang
+@Ignore
 @Test
 public void testLocalIndexScanAfterRegionSplit() throws Exception {
 // This test just needs be run once
@@ -506,6 +514,7 @@ public class IndexExtendedIT extends BaseTest {
 }
 
 // Moved from LocalIndexIT because it was causing parallel runs to hang
+@Ignore
 @Test
 public void testLocalIndexScanAfterRegionsMerge() throws Exception {
 // This test just needs be run once



[1/4] phoenix git commit: PHOENIX-3454 ON DUPLICATE KEY construct doesn't work correctly when using lower case column names

2016-11-04 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 87421ede3 -> 57805e603


PHOENIX-3454 ON DUPLICATE KEY construct doesn't work correctly when using lower case column names


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/d4f8cc67
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/d4f8cc67
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/d4f8cc67

Branch: refs/heads/4.x-HBase-0.98
Commit: d4f8cc671be24219ff679e6f8f03749491c96fa7
Parents: 87421ed
Author: James Taylor 
Authored: Fri Nov 4 18:47:03 2016 -0700
Committer: James Taylor 
Committed: Fri Nov 4 21:54:26 2016 -0700

--
 .../phoenix/end2end/OnDuplicateKeyIT.java   | 37 
 .../phoenix/index/PhoenixIndexBuilder.java  | 13 +--
 2 files changed, 48 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/d4f8cc67/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
index 9a81026..d3cb0af 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OnDuplicateKeyIT.java
@@ -519,5 +519,42 @@ public class OnDuplicateKeyIT extends ParallelStatsDisabledIT {
 conn.close();
 }
 
+@Test
+public void testDeleteOnSingleLowerCaseVarcharColumn() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+Connection conn = DriverManager.getConnection(getUrl(), props);
+conn.setAutoCommit(false);
+String tableName = generateUniqueName();
+String ddl = " create table " + tableName + "(pk varchar primary key, \"counter1\" varchar, \"counter2\" smallint)";
+conn.createStatement().execute(ddl);
+String dml = "UPSERT INTO " + tableName + " VALUES('a','b') ON DUPLICATE KEY UPDATE \"counter1\" = null";
+conn.createStatement().execute(dml);
+conn.createStatement().execute(dml);
+conn.commit();
+
+ResultSet rs = conn.createStatement().executeQuery("SELECT * FROM " + tableName);
+assertTrue(rs.next());
+assertEquals("a",rs.getString(1));
+assertEquals(null,rs.getString(2));
+assertFalse(rs.next());
+
+dml = "UPSERT INTO " + tableName + " VALUES('a','b',0)";
+conn.createStatement().execute(dml);
+dml = "UPSERT INTO " + tableName + " VALUES('a','b', 0) ON DUPLICATE KEY UPDATE \"counter1\" = null, \"counter2\" = \"counter2\" + 1";
+conn.createStatement().execute(dml);
+dml = "UPSERT INTO " + tableName + " VALUES('a','b', 0) ON DUPLICATE KEY UPDATE \"counter1\" = 'c', \"counter2\" = \"counter2\" + 1";
+conn.createStatement().execute(dml);
+conn.commit();
+
+rs = conn.createStatement().executeQuery("SELECT * FROM " + tableName);
+assertTrue(rs.next());
+assertEquals("a",rs.getString(1));
+assertEquals("c",rs.getString(2));
+assertEquals(2,rs.getInt(3));
+assertFalse(rs.next());
+
+conn.close();
+}
+
 }
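
The counter half of the same test is worth spelling out: the expression "counter2" = "counter2" + 1 is evaluated server-side against the current row state, so repeated upserts accumulate rather than overwrite. A hedged standalone sketch of that behavior, again with a placeholder JDBC URL and table name:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;

    public class AtomicCounterSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
                conn.setAutoCommit(true);
                conn.createStatement().execute(
                    "CREATE TABLE T2 (pk VARCHAR PRIMARY KEY, \"counter2\" SMALLINT)");
                String dml = "UPSERT INTO T2 VALUES('a', 0) "
                    + "ON DUPLICATE KEY UPDATE \"counter2\" = \"counter2\" + 1";
                conn.createStatement().execute(dml); // row absent: inserts 0
                conn.createStatement().execute(dml); // increments to 1
                conn.createStatement().execute(dml); // increments to 2
                ResultSet rs = conn.createStatement()
                        .executeQuery("SELECT \"counter2\" FROM T2");
                rs.next();
                System.out.println(rs.getInt(1)); // 2, matching the test's assertion
            }
        }
    }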
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/d4f8cc67/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java b/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java
index 5e06f89..bf1d0fb 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java
@@ -31,6 +31,7 @@ import java.util.Map;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.KeyValue;
 import org.apache.hadoop.hbase.KeyValue.Type;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.Get;
@@ -191,6 +192,7 @@ public class PhoenixIndexBuilder extends NonTxIndexBuilder {
 get.setFilter(new FirstKeyOnlyFilter());
 }
 MultiKeyValueTuple tuple;
+List<Cell> flattenedCells = null;
 List<Cell> cells = this.env.getRegion().get(get, false);
 if (cells.isEmpty()) {
 if (skipFirstOp) {
@@ -200,7 +202,8 @@ public class PhoenixIndexBuilder extends NonTxIndexBuilder {
 repeat--; // Skip first operation (if first wasn't ON DUPLICATE KEY IGNORE)
 }
 // B

[2/4] phoenix git commit: PHOENIX-3449 Ignore hanging IndexExtendedIT tests until they can be investigated

2016-11-04 Thread jamestaylor
PHOENIX-3449 Ignore hanging IndexExtendedIT tests until they can be investigated


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/2cfc29d6
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/2cfc29d6
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/2cfc29d6

Branch: refs/heads/4.x-HBase-0.98
Commit: 2cfc29d6620567962ead3b877c47f6c0cbfe360f
Parents: d4f8cc6
Author: James Taylor 
Authored: Fri Nov 4 19:39:19 2016 -0700
Committer: James Taylor 
Committed: Fri Nov 4 21:54:48 2016 -0700

--
 .../java/org/apache/phoenix/end2end/IndexExtendedIT.java | 11 ++-
 .../phoenix/end2end/index/ReadOnlyIndexFailureIT.java |  5 +
 2 files changed, 3 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/2cfc29d6/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
index 161dcb8..5c037ed 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
@@ -55,8 +55,8 @@ import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.TestUtil;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
-import org.junit.Ignore;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
 import org.junit.runners.Parameterized.Parameters;
@@ -68,6 +68,7 @@ import com.google.common.collect.Maps;
  * Tests for the {@link IndexTool}
  */
 @RunWith(Parameterized.class)
+@Category(NeedsOwnMiniClusterTest.class)
 public class IndexExtendedIT extends BaseTest {
 private final boolean localIndex;
 private final boolean transactional;
@@ -130,9 +131,6 @@ public class IndexExtendedIT extends BaseTest {
 if (!mutable || transactional) {
 return;
 }
-if (localIndex) { // FIXME: remove once this test works for local indexes
-return;
-}
 String schemaName = generateUniqueName();
 String dataTableName = generateUniqueName();
 String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
@@ -205,9 +203,6 @@ public class IndexExtendedIT extends BaseTest {
 
 @Test
 public void testSecondaryIndex() throws Exception {
-if (localIndex) { // FIXME: remove once this test works for local indexes
-return;
-}
 String schemaName = generateUniqueName();
 String dataTableName = generateUniqueName();
 String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
@@ -410,7 +405,6 @@ public class IndexExtendedIT extends BaseTest {
 }
 
 // Moved from LocalIndexIT because it was causing parallel runs to hang
-@Ignore
 @Test
 public void testLocalIndexScanAfterRegionSplit() throws Exception {
 // This test just needs be run once
@@ -512,7 +506,6 @@ public class IndexExtendedIT extends BaseTest {
 }
 
 // Moved from LocalIndexIT because it was causing parallel runs to hang
-@Ignore
 @Test
 public void testLocalIndexScanAfterRegionsMerge() throws Exception {
 // This test just needs be run once

http://git-wip-us.apache.org/repos/asf/phoenix/blob/2cfc29d6/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ReadOnlyIndexFailureIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ReadOnlyIndexFailureIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ReadOnlyIndexFailureIT.java
index 9d3d4f0..18d1744 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ReadOnlyIndexFailureIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ReadOnlyIndexFailureIT.java
@@ -29,7 +29,6 @@ import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.Collection;
-import java.util.Collections;
 import java.util.Map;
 import java.util.Properties;
 import java.util.Set;
@@ -113,11 +112,9 @@ public class ReadOnlyIndexFailureIT extends BaseOwnClusterIT {
 serverProps.put("hbase.coprocessor.region.classes", FailingRegionObserver.class.getName());
 serverProps.put("hbase.coprocessor.abortonerror", "false");
 serverProps.put(Indexer.CHECK_VERSION_CONF_KEY, "false");
-Map<String, String> clientProps = Collections.singletonMap(QueryServices.TRANSACTIONS_ENABLED, "true");
 NUM_SLAVES_BASE = 4;
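
For orientation, the serverProps entries in this hunk are plain HBase configuration keys. A hedged sketch of what they mean at the Configuration level; the observer class name below is a stand-in for the test's FailingRegionObserver:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;

    public class CoprocessorConfigSketch {
        public static void main(String[] args) {
            Configuration conf = HBaseConfiguration.create();
            // Load a write-failing RegionObserver on every region...
            conf.set("hbase.coprocessor.region.classes", "com.example.FailingRegionObserver");
            // ...but keep the region server alive when that coprocessor throws.
            conf.setBoolean("hbase.coprocessor.abortonerror", false);
            System.out.println(conf.get("hbase.coprocessor.region.classes"));
        }
    }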
 

[4/4] phoenix git commit: PHOENIX-3456 Use unique table names for MutableIndexFailureIT

2016-11-04 Thread jamestaylor
PHOENIX-3456 Use unique table names for MutableIndexFailureIT


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/57805e60
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/57805e60
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/57805e60

Branch: refs/heads/4.x-HBase-0.98
Commit: 57805e603fb8d5f51d9ed73bd2beeb7b8b64be0c
Parents: 62aaaf2
Author: James Taylor 
Authored: Fri Nov 4 22:58:44 2016 -0700
Committer: James Taylor 
Committed: Fri Nov 4 22:58:44 2016 -0700

--
 .../end2end/index/MutableIndexFailureIT.java   | 17 -
 1 file changed, 12 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/57805e60/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
index b968c76..9817f95 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
@@ -43,9 +43,9 @@ import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
 import org.apache.hadoop.hbase.coprocessor.SimpleRegionObserver;
 import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
 import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.phoenix.end2end.BaseOwnClusterIT;
 import org.apache.phoenix.end2end.NeedsOwnMiniClusterTest;
 import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.query.BaseTest;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.PIndexState;
@@ -56,6 +56,7 @@ import org.apache.phoenix.util.ReadOnlyProps;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.StringUtil;
 import org.apache.phoenix.util.TestUtil;
+import org.junit.AfterClass;
 import org.junit.BeforeClass;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
@@ -74,20 +75,26 @@ import com.google.common.collect.Maps;
 
 @Category(NeedsOwnMiniClusterTest.class)
 @RunWith(Parameterized.class)
-public class MutableIndexFailureIT extends BaseOwnClusterIT {
-public static volatile boolean FAIL_WRITE = false;
+public class MutableIndexFailureIT extends BaseTest {
 public static final String INDEX_NAME = "IDX";
+
+public static volatile boolean FAIL_WRITE = false;
+public static volatile String fullTableName;
 
 private String tableName;
 private String indexName;
-public static volatile String fullTableName;
 private String fullIndexName;
 
 private final boolean transactional;
 private final boolean localIndex;
 private final String tableDDLOptions;
 private final boolean isNamespaceMapped;
-private String schema = "TEST";
+private String schema = generateUniqueName();
+
+@AfterClass
+public static void doTeardown() throws Exception {
+tearDownMiniCluster();
+}
 
 public MutableIndexFailureIT(boolean transactional, boolean localIndex, boolean isNamespaceMapped) {
 this.transactional = transactional;
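
The point of PHOENIX-3456 is visible in the hunk above: the fixed "TEST" schema becomes generateUniqueName(), so parameterized runs stop colliding on shared tables and the class can tear down its mini cluster cleanly in @AfterClass. A hedged sketch of a unique-name helper in the spirit of BaseTest's generateUniqueName(); the counter scheme is illustrative, not Phoenix's exact implementation:

    import java.util.concurrent.atomic.AtomicInteger;

    public class UniqueNameSketch {
        private static final AtomicInteger COUNTER = new AtomicInteger();

        // The letter prefix keeps the result a valid unquoted SQL identifier.
        static String generateUniqueName() {
            return "N" + COUNTER.incrementAndGet();
        }

        public static void main(String[] args) {
            System.out.println(generateUniqueName() + "." + generateUniqueName()); // e.g. N1.N2
        }
    }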


