Jenkins build is back to normal : Phoenix-4.8-HBase-0.98 #43

2016-11-05 Thread Apache Jenkins Server
See 



phoenix git commit: PHOENIX-3457 Adjust build settings in pom to improve consistency

2016-11-05 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/master a31386241 -> 5cf9dc8bf


PHOENIX-3457 Adjust build settings in pom to improve consistency


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/5cf9dc8b
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/5cf9dc8b
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/5cf9dc8b

Branch: refs/heads/master
Commit: 5cf9dc8bf1f582b3e897bb3eed4664bedc4f1cd4
Parents: a313862
Author: James Taylor 
Authored: Sat Nov 5 23:46:55 2016 -0700
Committer: James Taylor 
Committed: Sat Nov 5 23:46:55 2016 -0700

--
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/5cf9dc8b/pom.xml
--
diff --git a/pom.xml b/pom.xml
index dcfa6f4..0a21e16 100644
--- a/pom.xml
+++ b/pom.xml
@@ -271,7 +271,7 @@
 at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2835)
 at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:490)
 -->

--Xmx2500m -XX:MaxPermSize=256m -Djava.security.egd=file:/dev/./urandom "-Djava.library.path=${hadoop.library.path}${path.separator}${java.library.path}" -XX:NewRatio=4 -XX:SurvivorRatio=8 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+DisableExplicitGC -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:CMSInitiatingOccupancyFraction=68 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./target/
+-Xmx3000m -XX:MaxPermSize=256m -Djava.security.egd=file:/dev/./urandom "-Djava.library.path=${hadoop.library.path}${path.separator}${java.library.path}" -XX:NewRatio=4 -XX:SurvivorRatio=8 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+DisableExplicitGC -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:CMSInitiatingOccupancyFraction=68 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./target/
 
${test.output.tofile}
 kill
 
${basedir}/src/it/java
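
For context, the changed values above are the forked-JVM arguments used by the Maven test plugins; the surrounding XML tags were stripped when this message was archived. A hedged sketch of how such an argLine property typically sits in a pom (`phoenix.test.jvm.args` is a placeholder name, not necessarily the real property in the Phoenix pom):

```xml
<!-- Sketch only: the property name below is a placeholder; the real
     Phoenix pom defines an equivalent property whose tags were lost
     when this message was archived. -->
<properties>
  <!-- PHOENIX-3457 raised the forked test JVM heap from 2500m to 3000m -->
  <phoenix.test.jvm.args>-Xmx3000m -XX:MaxPermSize=256m
    -Djava.security.egd=file:/dev/./urandom
    -XX:+UseConcMarkSweepGC -XX:+HeapDumpOnOutOfMemoryError
    -XX:HeapDumpPath=./target/</phoenix.test.jvm.args>
</properties>
```

A property like this is usually referenced from the surefire/failsafe `argLine`, so one edit changes the heap for every forked test JVM.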



Apache-Phoenix | Master | Build Successful

2016-11-05 Thread Apache Jenkins Server
Master branch build status Successful
Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/master

Last Successful Compiled Artifacts https://builds.apache.org/job/Phoenix-master/lastSuccessfulBuild/artifact/

Last Complete Test Report https://builds.apache.org/job/Phoenix-master/lastCompletedBuild/testReport/

Changes
[jamestaylor] PHOENIX-3439 Query using an RVC based on the base table PK is

[jamestaylor] PHOENIX-3457 Adjust build settings in pom to improve consistency



Build times for last couple of runs | Latest build time is the right most | Legend blue: normal, red: test failure, gray: timeout


phoenix git commit: PHOENIX-3439 Query using an RVC based on the base table PK is incorrectly using an index and doing a full scan instead of a point query

2016-11-05 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.8-HBase-0.98 a9a3416a4 -> c07053c5a


PHOENIX-3439 Query using an RVC based on the base table PK is incorrectly using an index and doing a full scan instead of a point query


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/c07053c5
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/c07053c5
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/c07053c5

Branch: refs/heads/4.8-HBase-0.98
Commit: c07053c5a6e733da6b89aa9d9809137aac9ccde5
Parents: a9a3416
Author: James Taylor 
Authored: Sat Nov 5 21:16:41 2016 -0700
Committer: James Taylor 
Committed: Sat Nov 5 22:35:54 2016 -0700

--
 .../org/apache/phoenix/compile/ScanRanges.java  | 18 +--
 .../apache/phoenix/optimize/QueryOptimizer.java | 15 --
 .../phoenix/compile/QueryOptimizerTest.java | 50 
 3 files changed, 75 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/c07053c5/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
index 19a4692..5a1fcb7 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
@@ -570,12 +570,24 @@ public class ScanRanges {
 return this.useSkipScanFilter ? ScanUtil.getRowKeyPosition(slotSpan, ranges.size()) : Math.max(getBoundPkSpan(ranges, slotSpan), getBoundMinMaxSlotCount());
 }
 
-public int getBoundMinMaxSlotCount() {
+private int getBoundMinMaxSlotCount() {
 if (minMaxRange == KeyRange.EMPTY_RANGE || minMaxRange == KeyRange.EVERYTHING_RANGE) {
 return 0;
 }
-// The minMaxRange is always a single key
-return 1 + slotSpan[0];
+ImmutableBytesWritable ptr = new ImmutableBytesWritable();
+// We don't track how many slots are bound for the minMaxRange, so we need
+// to traverse the upper and lower range key and count the slots.
+int lowerCount = 0;
+int maxOffset = schema.iterator(minMaxRange.getLowerRange(), ptr);
+for (int pos = 0; Boolean.TRUE.equals(schema.next(ptr, pos, maxOffset)); pos++) {
+lowerCount++;
+}
+int upperCount = 0;
+maxOffset = schema.iterator(minMaxRange.getUpperRange(), ptr);
+for (int pos = 0; Boolean.TRUE.equals(schema.next(ptr, pos, maxOffset)); pos++) {
+upperCount++;
+}
+return Math.max(lowerCount, upperCount);
 }
 
 public int getBoundSlotCount() {
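
The patched logic above no longer assumes the min/max range is a single key: it walks both endpoints of the range and counts how many row-key slots each one binds, taking the maximum. A toy illustration of that idea (not Phoenix's actual `RowKeySchema` API; `countSlots` is a hypothetical stand-in that treats a zero byte as the separator between variable-length row-key fields, as Phoenix's encoding does for VARCHAR columns):

```java
import java.nio.charset.StandardCharsets;

public class SlotCountSketch {
    // Count the row-key fields (slots) present in one key, assuming a
    // 0x00 separator between variable-length fields. An empty key is an
    // unbound range endpoint and binds no slots.
    static int countSlots(byte[] key) {
        if (key.length == 0) {
            return 0;
        }
        int slots = 1;
        for (byte b : key) {
            if (b == 0) {
                slots++;
            }
        }
        return slots;
    }

    // Mirrors the shape of the patch: traverse both endpoints of the
    // min/max range and take the larger bound-slot count.
    static int boundMinMaxSlotCount(byte[] lower, byte[] upper) {
        return Math.max(countSlots(lower), countSlots(upper));
    }

    public static void main(String[] args) {
        byte[] lower = "a\u0000b".getBytes(StandardCharsets.UTF_8);        // binds 2 slots
        byte[] upper = "a\u0000b\u0000c".getBytes(StandardCharsets.UTF_8); // binds 3 slots
        System.out.println(boundMinMaxSlotCount(lower, upper)); // prints 3
    }
}
```

The old code returned `1 + slotSpan[0]`, which undercounts when an RVC binds several leading PK columns; counting per endpoint, as sketched, gives the optimizer a fair bound-slot number to compare against index plans.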

http://git-wip-us.apache.org/repos/asf/phoenix/blob/c07053c5/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java 
b/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
index bd9c811..d77b14b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
@@ -309,13 +309,16 @@ public class QueryOptimizer {
 /**
  * Order the plans among all the possible ones from best to worst.
  * Since we don't keep stats yet, we use the following simple algorithm:
- * 1) If the query is a point lookup (i.e. we have a set of exact row keys), choose among those.
+ * 1) If the query is a point lookup (i.e. we have a set of exact row keys), choose that one immediately.
  * 2) If the query has an ORDER BY and a LIMIT, choose the plan that has all the ORDER BY expression
  * in the same order as the row key columns.
  * 3) If there are more than one plan that meets (1&2), choose the plan with:
- *a) the most row key columns that may be used to form the start/stop scan key.
+ *a) the most row key columns that may be used to form the start/stop scan key (i.e. bound slots).
  *b) the plan that preserves ordering for a group by.
- *c) the data table plan
+ *c) the non local index table plan
+ * TODO: We should make more of a cost based choice: The largest number of bound slots does not necessarily
+ * correspond to the least bytes scanned. We could consider the slots bound for upper and lower ranges separately, or we could calculate the bytes scanned between the start and stop row of each table.
  * @param plans the list of candidate plans
  * @return list of plans ordered from best to worst.
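
The heuristic in this javadoc amounts to a lexicographic comparison over a few plan properties, and can be sketched as a plain comparator. Everything below (`Plan`, its fields, `order`) is a hypothetical simplification for illustration, not Phoenix's real `QueryPlan` API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class PlanOrderingSketch {
    // Hypothetical stand-in for a query plan with just the properties
    // the heuristic looks at.
    static final class Plan {
        final String name;
        final boolean pointLookup;       // step 1: exact row keys
        final int boundSlots;            // step 3a: bound start/stop key slots
        final boolean preservesGroupBy;  // step 3b
        final boolean localIndex;        // step 3c: prefer non-local-index plans
        Plan(String name, boolean pointLookup, int boundSlots,
             boolean preservesGroupBy, boolean localIndex) {
            this.name = name;
            this.pointLookup = pointLookup;
            this.boundSlots = boundSlots;
            this.preservesGroupBy = preservesGroupBy;
            this.localIndex = localIndex;
        }
    }

    // Order candidate plans best-first following the javadoc's steps:
    // point lookups win outright, then more bound slots, then group-by
    // preservation, then non-local-index plans.
    static List<Plan> order(List<Plan> plans) {
        List<Plan> sorted = new ArrayList<>(plans);
        sorted.sort(Comparator
                .comparing((Plan p) -> !p.pointLookup)   // false sorts first
                .thenComparing(p -> -p.boundSlots)       // larger counts first
                .thenComparing(p -> !p.preservesGroupBy)
                .thenComparing(p -> p.localIndex));      // non-local first
        return sorted;
    }

    public static void main(String[] args) {
        List<Plan> plans = Arrays.asList(
                new Plan("localIdx", false, 2, false, true),
                new Plan("dataTable", false, 2, false, false),
                new Plan("globalIdx", false, 1, false, false));
        for (Plan p : order(plans)) {
            System.out.println(p.name);
        }
        // prints: dataTable, localIdx, globalIdx
    }
}
```

The TODO in the patch points at the weakness of this scheme: a lexicographic comparator ignores actual bytes scanned, which is why a cost-based choice is suggested as future work.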
  

phoenix git commit: PHOENIX-3439 Query using an RVC based on the base table PK is incorrectly using an index and doing a full scan instead of a point query

2016-11-05 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.8-HBase-1.1 5f6929edc -> 40e7b032f


PHOENIX-3439 Query using an RVC based on the base table PK is incorrectly using an index and doing a full scan instead of a point query


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/40e7b032
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/40e7b032
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/40e7b032

Branch: refs/heads/4.8-HBase-1.1
Commit: 40e7b032fba2ef4f482eeff4ff224f1e7cbe8540
Parents: 5f6929e
Author: James Taylor 
Authored: Sat Nov 5 21:16:41 2016 -0700
Committer: James Taylor 
Committed: Sat Nov 5 22:33:26 2016 -0700

--
 .../org/apache/phoenix/compile/ScanRanges.java  | 18 +--
 .../apache/phoenix/optimize/QueryOptimizer.java | 15 --
 .../phoenix/compile/QueryOptimizerTest.java | 50 
 3 files changed, 75 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/40e7b032/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
index 19a4692..5a1fcb7 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
@@ -570,12 +570,24 @@ public class ScanRanges {
 return this.useSkipScanFilter ? ScanUtil.getRowKeyPosition(slotSpan, ranges.size()) : Math.max(getBoundPkSpan(ranges, slotSpan), getBoundMinMaxSlotCount());
 }
 
-public int getBoundMinMaxSlotCount() {
+private int getBoundMinMaxSlotCount() {
 if (minMaxRange == KeyRange.EMPTY_RANGE || minMaxRange == KeyRange.EVERYTHING_RANGE) {
 return 0;
 }
-// The minMaxRange is always a single key
-return 1 + slotSpan[0];
+ImmutableBytesWritable ptr = new ImmutableBytesWritable();
+// We don't track how many slots are bound for the minMaxRange, so we need
+// to traverse the upper and lower range key and count the slots.
+int lowerCount = 0;
+int maxOffset = schema.iterator(minMaxRange.getLowerRange(), ptr);
+for (int pos = 0; Boolean.TRUE.equals(schema.next(ptr, pos, maxOffset)); pos++) {
+lowerCount++;
+}
+int upperCount = 0;
+maxOffset = schema.iterator(minMaxRange.getUpperRange(), ptr);
+for (int pos = 0; Boolean.TRUE.equals(schema.next(ptr, pos, maxOffset)); pos++) {
+upperCount++;
+}
+return Math.max(lowerCount, upperCount);
 }
 
 public int getBoundSlotCount() {

http://git-wip-us.apache.org/repos/asf/phoenix/blob/40e7b032/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java 
b/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
index bd9c811..d77b14b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
@@ -309,13 +309,16 @@ public class QueryOptimizer {
 /**
  * Order the plans among all the possible ones from best to worst.
  * Since we don't keep stats yet, we use the following simple algorithm:
- * 1) If the query is a point lookup (i.e. we have a set of exact row keys), choose among those.
+ * 1) If the query is a point lookup (i.e. we have a set of exact row keys), choose that one immediately.
  * 2) If the query has an ORDER BY and a LIMIT, choose the plan that has all the ORDER BY expression
  * in the same order as the row key columns.
  * 3) If there are more than one plan that meets (1&2), choose the plan with:
- *a) the most row key columns that may be used to form the start/stop scan key.
+ *a) the most row key columns that may be used to form the start/stop scan key (i.e. bound slots).
  *b) the plan that preserves ordering for a group by.
- *c) the data table plan
+ *c) the non local index table plan
+ * TODO: We should make more of a cost based choice: The largest number of bound slots does not necessarily
+ * correspond to the least bytes scanned. We could consider the slots bound for upper and lower ranges separately, or we could calculate the bytes scanned between the start and stop row of each table.
  * @param plans the list of candidate plans
  * @return list of plans ordered from best to worst.

phoenix git commit: PHOENIX-3439 Query using an RVC based on the base table PK is incorrectly using an index and doing a full scan instead of a point query

2016-11-05 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.8-HBase-1.2 192de60cf -> 9b313cbd2


PHOENIX-3439 Query using an RVC based on the base table PK is incorrectly using an index and doing a full scan instead of a point query


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/9b313cbd
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/9b313cbd
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/9b313cbd

Branch: refs/heads/4.8-HBase-1.2
Commit: 9b313cbd2c10357244b643bbf6bfd470359dbf34
Parents: 192de60
Author: James Taylor 
Authored: Sat Nov 5 21:16:41 2016 -0700
Committer: James Taylor 
Committed: Sat Nov 5 22:32:07 2016 -0700

--
 .../org/apache/phoenix/compile/ScanRanges.java  | 18 +--
 .../apache/phoenix/optimize/QueryOptimizer.java | 15 --
 .../phoenix/compile/QueryOptimizerTest.java | 50 
 3 files changed, 75 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/9b313cbd/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
index 19a4692..5a1fcb7 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
@@ -570,12 +570,24 @@ public class ScanRanges {
 return this.useSkipScanFilter ? ScanUtil.getRowKeyPosition(slotSpan, ranges.size()) : Math.max(getBoundPkSpan(ranges, slotSpan), getBoundMinMaxSlotCount());
 }
 
-public int getBoundMinMaxSlotCount() {
+private int getBoundMinMaxSlotCount() {
 if (minMaxRange == KeyRange.EMPTY_RANGE || minMaxRange == KeyRange.EVERYTHING_RANGE) {
 return 0;
 }
-// The minMaxRange is always a single key
-return 1 + slotSpan[0];
+ImmutableBytesWritable ptr = new ImmutableBytesWritable();
+// We don't track how many slots are bound for the minMaxRange, so we need
+// to traverse the upper and lower range key and count the slots.
+int lowerCount = 0;
+int maxOffset = schema.iterator(minMaxRange.getLowerRange(), ptr);
+for (int pos = 0; Boolean.TRUE.equals(schema.next(ptr, pos, maxOffset)); pos++) {
+lowerCount++;
+}
+int upperCount = 0;
+maxOffset = schema.iterator(minMaxRange.getUpperRange(), ptr);
+for (int pos = 0; Boolean.TRUE.equals(schema.next(ptr, pos, maxOffset)); pos++) {
+upperCount++;
+}
+return Math.max(lowerCount, upperCount);
 }
 
 public int getBoundSlotCount() {

http://git-wip-us.apache.org/repos/asf/phoenix/blob/9b313cbd/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java 
b/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
index bd9c811..d77b14b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
@@ -309,13 +309,16 @@ public class QueryOptimizer {
 /**
  * Order the plans among all the possible ones from best to worst.
  * Since we don't keep stats yet, we use the following simple algorithm:
- * 1) If the query is a point lookup (i.e. we have a set of exact row keys), choose among those.
+ * 1) If the query is a point lookup (i.e. we have a set of exact row keys), choose that one immediately.
  * 2) If the query has an ORDER BY and a LIMIT, choose the plan that has all the ORDER BY expression
  * in the same order as the row key columns.
  * 3) If there are more than one plan that meets (1&2), choose the plan with:
- *a) the most row key columns that may be used to form the start/stop scan key.
+ *a) the most row key columns that may be used to form the start/stop scan key (i.e. bound slots).
  *b) the plan that preserves ordering for a group by.
- *c) the data table plan
+ *c) the non local index table plan
+ * TODO: We should make more of a cost based choice: The largest number of bound slots does not necessarily
+ * correspond to the least bytes scanned. We could consider the slots bound for upper and lower ranges separately, or we could calculate the bytes scanned between the start and stop row of each table.
  * @param plans the list of candidate plans
  * @return list of plans ordered from best to worst.

Build failed in Jenkins: Phoenix-4.x-HBase-1.1 #259

2016-11-05 Thread Apache Jenkins Server
See 

Changes:

[jamestaylor] PHOENIX-3439 Query using an RVC based on the base table PK is

[jamestaylor] PHOENIX-3457 Adjust build settings in pom to improve consistency

--
[...truncated 724 lines...]
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.78 sec - in org.apache.phoenix.end2end.index.GlobalIndexOptimizationIT
Running org.apache.phoenix.end2end.index.IndexExpressionIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 62.468 sec - in org.apache.phoenix.end2end.index.DropMetadataIT
Running org.apache.phoenix.end2end.index.IndexIT
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0007ce3db000, 212709376, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 212709376 bytes for committing reserved memory.
# An error report file with more information is saved as:
# 

Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 142.943 sec - in org.apache.phoenix.end2end.UpgradeIT
Running org.apache.phoenix.end2end.index.IndexMetadataIT
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.474 sec - in org.apache.phoenix.end2end.index.IndexMetadataIT
Running org.apache.phoenix.end2end.index.LocalIndexIT
Tests run: 99, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 844.17 sec - in org.apache.phoenix.end2end.HashJoinIT
Running org.apache.phoenix.end2end.index.MutableIndexIT
Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 203.607 sec - in org.apache.phoenix.end2end.index.LocalIndexIT
Running org.apache.phoenix.end2end.index.SaltedIndexIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.287 sec - in org.apache.phoenix.end2end.index.SaltedIndexIT
Running org.apache.phoenix.end2end.index.ViewIndexIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.677 sec - in org.apache.phoenix.end2end.index.ViewIndexIT
Running org.apache.phoenix.end2end.index.txn.MutableRollbackIT
Tests run: 66, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 359.77 sec - in org.apache.phoenix.end2end.index.IndexExpressionIT
Running org.apache.phoenix.end2end.index.txn.RollbackIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.14 sec - in org.apache.phoenix.end2end.index.txn.RollbackIT
Running org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.776 sec - in org.apache.phoenix.end2end.index.txn.MutableRollbackIT
Running org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.258 sec - in org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
Running org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.762 sec - in org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
Running org.apache.phoenix.iterate.RoundRobinResultIteratorIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.097 sec - in org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
Running org.apache.phoenix.rpc.UpdateCacheIT
Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 883.392 sec - in org.apache.phoenix.end2end.SortMergeJoinIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.674 sec - in org.apache.phoenix.rpc.UpdateCacheIT
Running org.apache.phoenix.trace.PhoenixTraceReaderIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.618 sec - in org.apache.phoenix.trace.PhoenixTraceReaderIT
Running org.apache.phoenix.trace.PhoenixTracingEndToEndIT
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.953 sec - in org.apache.phoenix.iterate.RoundRobinResultIteratorIT
Running org.apache.phoenix.tx.FlappingTransactionIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.197 sec - in org.apache.phoenix.tx.FlappingTransactionIT
Running org.apache.phoenix.tx.TransactionIT
Running org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.517 sec - in org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
Running org.apache.phoenix.tx.TxCheckpointIT
Tests run: 19, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 74.807 sec - in org.apache.phoenix.tx.TransactionIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 88.637 sec - in org.apache.phoenix.trace.PhoenixTracingEndToEndIT
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 113.496 sec - in org.apache.phoenix.tx.TxCheckpointIT
Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 475.755 sec - in org.apache.phoenix.end2end.index.Mutabl

phoenix git commit: PHOENIX-3457 Adjust build settings in pom to improve consistency

2016-11-05 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 f15c7465c -> 9582d0eaf


PHOENIX-3457 Adjust build settings in pom to improve consistency


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/9582d0ea
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/9582d0ea
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/9582d0ea

Branch: refs/heads/4.x-HBase-0.98
Commit: 9582d0eaf993d3808f98939f8a3ca00b61233691
Parents: f15c746
Author: James Taylor 
Authored: Sat Nov 5 22:03:43 2016 -0700
Committer: James Taylor 
Committed: Sat Nov 5 22:03:43 2016 -0700

--
 pom.xml | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/9582d0ea/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 87857b9..4959463 100644
--- a/pom.xml
+++ b/pom.xml
@@ -117,8 +117,8 @@
 2.5.2
 
 
-6
-6
+5
+5
 
 
 UTF-8
@@ -271,7 +271,7 @@
 at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2835)
 at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:490)
 -->

--Xmx2500m -XX:MaxPermSize=256m -Djava.security.egd=file:/dev/./urandom "-Djava.library.path=${hadoop.library.path}${path.separator}${java.library.path}" -XX:NewRatio=4 -XX:SurvivorRatio=8 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+DisableExplicitGC -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:CMSInitiatingOccupancyFraction=68 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./target/
+-Xmx3000m -XX:MaxPermSize=256m -Djava.security.egd=file:/dev/./urandom "-Djava.library.path=${hadoop.library.path}${path.separator}${java.library.path}" -XX:NewRatio=4 -XX:SurvivorRatio=8 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+DisableExplicitGC -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:CMSInitiatingOccupancyFraction=68 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./target/
 
${test.output.tofile}
 kill
 
${basedir}/src/it/java



Apache Phoenix - Timeout crawler - Build https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1370/

2016-11-05 Thread Apache Jenkins Server
[...truncated 21 lines...]
Looking at the log, list of test(s) that timed-out:

Build:
https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1370/


Affected test class(es):
Set(['org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT', 'org.apache.phoenix.end2end.ArrayIT', 'org.apache.phoenix.end2end.AggregateQueryIT', 'org.apache.phoenix.end2end.CastAndCoerceIT'])


Build step 'Execute shell' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


Build failed in Jenkins: Phoenix | 4.x-HBase-0.98 #1370

2016-11-05 Thread Apache Jenkins Server
See 

Changes:

[jamestaylor] PHOENIX-3439 Query using an RVC based on the base table PK is

[jamestaylor] PHOENIX-3457 Adjust build settings in pom to improve consistency

--
[...truncated 317 lines...]
Running org.apache.phoenix.schema.stats.StatisticsScannerTest
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.957 sec - in org.apache.phoenix.mapreduce.FormatToBytesWritableMapperTest
Running org.apache.phoenix.schema.RowKeyValueAccessorTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.359 sec - in org.apache.phoenix.filter.SkipScanBigFilterTest
Running org.apache.phoenix.schema.PCharPadTest
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.017 sec - in org.apache.phoenix.schema.PCharPadTest
Running org.apache.phoenix.schema.SortOrderTest
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.018 sec - in org.apache.phoenix.schema.SortOrderTest
Running org.apache.phoenix.schema.types.PrimitiveIntPhoenixArrayToStringTest
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.027 sec - in org.apache.phoenix.schema.types.PrimitiveIntPhoenixArrayToStringTest
Running org.apache.phoenix.schema.types.PrimitiveShortPhoenixArrayToStringTest
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.034 sec - in org.apache.phoenix.schema.types.PrimitiveShortPhoenixArrayToStringTest
Running org.apache.phoenix.schema.types.PDataTypeForArraysTest
Tests run: 68, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.32 sec - in org.apache.phoenix.schema.types.PDataTypeForArraysTest
Running org.apache.phoenix.schema.types.PVarcharArrayToStringTest
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.003 sec - in org.apache.phoenix.schema.types.PVarcharArrayToStringTest
Running org.apache.phoenix.schema.types.PrimitiveBytePhoenixArrayToStringTest
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.024 sec - in org.apache.phoenix.schema.types.PrimitiveBytePhoenixArrayToStringTest
Running org.apache.phoenix.schema.types.PDateArrayToStringTest
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.03 sec - in org.apache.phoenix.schema.types.PDateArrayToStringTest
Running org.apache.phoenix.schema.types.PrimitiveLongPhoenixArrayToStringTest
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.015 sec - in org.apache.phoenix.schema.types.PrimitiveLongPhoenixArrayToStringTest
Running org.apache.phoenix.schema.types.PrimitiveFloatPhoenixArrayToStringTest
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.003 sec - in org.apache.phoenix.schema.types.PrimitiveFloatPhoenixArrayToStringTest
Running org.apache.phoenix.schema.types.PrimitiveDoublePhoenixArrayToStringTest
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.038 sec - in org.apache.phoenix.schema.types.PrimitiveDoublePhoenixArrayToStringTest
Running org.apache.phoenix.schema.types.PrimitiveBooleanPhoenixArrayToStringTest
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.027 sec - in org.apache.phoenix.schema.types.PrimitiveBooleanPhoenixArrayToStringTest
Running org.apache.phoenix.schema.types.PDataTypeTest
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.532 sec - in org.apache.phoenix.schema.RowKeySchemaTest
Running org.apache.phoenix.schema.SchemaUtilTest
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.019 sec - in org.apache.phoenix.schema.SchemaUtilTest
Running org.apache.phoenix.schema.ValueBitSetTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.009 sec - in org.apache.phoenix.schema.ValueBitSetTest
Running org.apache.phoenix.schema.PMetaDataImplTest
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.02 sec - in org.apache.phoenix.schema.PMetaDataImplTest
Running org.apache.phoenix.schema.SequenceAllocationTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.018 sec - in org.apache.phoenix.schema.SequenceAllocationTest
Running org.apache.phoenix.schema.MutationTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.17 sec - in org.apache.phoenix.schema.RowKeyValueAccessorTest
Running org.apache.phoenix.schema.SaltingUtilTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.012 sec - in org.apache.phoenix.schema.SaltingUtilTest
Running org.apache.phoenix.memory.MemoryManagerTest
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.492 sec - in org.apache.phoenix.schema.types.PDataTypeTest
Running org.apache.phoenix.index.IndexMaintainerTest
Tests run: 6, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 0.247 sec - in org.apache.phoenix.memory.MemoryManagerTest
Running org.apache.phoenix.index.automated.MRJobSubmitterTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.82 sec - in org.apache.phoenix.schema.stats.S

[1/2] phoenix git commit: PHOENIX-3439 Query using an RVC based on the base table PK is incorrectly using an index and doing a full scan instead of a point query

2016-11-05 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 291624f11 -> f15c7465c


PHOENIX-3439 Query using an RVC based on the base table PK is incorrectly using an index and doing a full scan instead of a point query


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/1dcac346
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/1dcac346
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/1dcac346

Branch: refs/heads/4.x-HBase-0.98
Commit: 1dcac3463408059c5f2136e202dd3ae4ccd02803
Parents: 291624f
Author: James Taylor 
Authored: Sat Nov 5 21:16:41 2016 -0700
Committer: James Taylor 
Committed: Sat Nov 5 21:21:20 2016 -0700

--
 .../org/apache/phoenix/compile/ScanRanges.java  | 18 +--
 .../apache/phoenix/optimize/QueryOptimizer.java | 15 --
 .../phoenix/compile/QueryOptimizerTest.java | 50 
 3 files changed, 75 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/1dcac346/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
index 19a4692..5a1fcb7 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
@@ -570,12 +570,24 @@ public class ScanRanges {
         return this.useSkipScanFilter ? ScanUtil.getRowKeyPosition(slotSpan, ranges.size()) : Math.max(getBoundPkSpan(ranges, slotSpan), getBoundMinMaxSlotCount());
     }
 
-    public int getBoundMinMaxSlotCount() {
+    private int getBoundMinMaxSlotCount() {
         if (minMaxRange == KeyRange.EMPTY_RANGE || minMaxRange == KeyRange.EVERYTHING_RANGE) {
             return 0;
         }
-        // The minMaxRange is always a single key
-        return 1 + slotSpan[0];
+        ImmutableBytesWritable ptr = new ImmutableBytesWritable();
+        // We don't track how many slots are bound for the minMaxRange, so we need
+        // to traverse the upper and lower range key and count the slots.
+        int lowerCount = 0;
+        int maxOffset = schema.iterator(minMaxRange.getLowerRange(), ptr);
+        for (int pos = 0; Boolean.TRUE.equals(schema.next(ptr, pos, maxOffset)); pos++) {
+            lowerCount++;
+        }
+        int upperCount = 0;
+        maxOffset = schema.iterator(minMaxRange.getUpperRange(), ptr);
+        for (int pos = 0; Boolean.TRUE.equals(schema.next(ptr, pos, maxOffset)); pos++) {
+            upperCount++;
+        }
+        return Math.max(lowerCount, upperCount);
    }
 
     public int getBoundSlotCount() {
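The slot-counting change above can be illustrated with a small standalone sketch. This is not Phoenix's RowKeySchema API — the separator-based countSlots below is a hypothetical stand-in for the schema.iterator/schema.next traversal — but it shows the idea: count how many PK slots the lower and upper range keys each bind, and take the larger of the two.

```java
// Simplified sketch of the getBoundMinMaxSlotCount() idea: walk the lower
// and upper range keys, count the slots each one binds, and take the max.
// Here a "slot" is a zero-byte-separated field, a hypothetical stand-in
// for Phoenix's RowKeySchema traversal (not the real API).
public class SlotCount {
    static int countSlots(byte[] rowKey) {
        if (rowKey.length == 0) {
            return 0;               // an unbound side binds no slots
        }
        int slots = 1;
        for (byte b : rowKey) {
            if (b == 0) {
                slots++;            // a separator byte starts a new slot
            }
        }
        return slots;
    }

    static int boundMinMaxSlotCount(byte[] lower, byte[] upper) {
        return Math.max(countSlots(lower), countSlots(upper));
    }

    public static void main(String[] args) {
        byte[] lower = new byte[] {'a', 0, 'b'};   // two slots bound
        byte[] upper = new byte[] {'a'};           // one slot bound
        System.out.println(boundMinMaxSlotCount(lower, upper)); // prints 2
    }
}
```

The max of the two counts is what feeds into getBoundPkSlotCount() above, so a range whose lower key binds more slots than its upper key still reports the tighter bound.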

http://git-wip-us.apache.org/repos/asf/phoenix/blob/1dcac346/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java b/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
index bd9c811..d77b14b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
@@ -309,13 +309,16 @@ public class QueryOptimizer {
     /**
      * Order the plans among all the possible ones from best to worst.
      * Since we don't keep stats yet, we use the following simple algorithm:
-     * 1) If the query is a point lookup (i.e. we have a set of exact row keys), choose among those.
+     * 1) If the query is a point lookup (i.e. we have a set of exact row keys), choose that one immediately.
      * 2) If the query has an ORDER BY and a LIMIT, choose the plan that has all the ORDER BY expression
      * in the same order as the row key columns.
      * 3) If there are more than one plan that meets (1&2), choose the plan with:
-     *    a) the most row key columns that may be used to form the start/stop scan key.
+     *    a) the most row key columns that may be used to form the start/stop scan key (i.e. bound slots).
      *    b) the plan that preserves ordering for a group by.
-     *    c) the data table plan
+     *    c) the non local index table plan
+     * TODO: We should make more of a cost based choice: The largest number of bound slots does not necessarily
+     * correspond to the least bytes scanned. We could consider the slots bound for upper and lower ranges
+     * separately, or we could calculate the bytes scanned between the start and stop row of each table.
      * @param plans the list of candidate plans
      * @return list of plans ordered from best to worst.
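The ordering heuristic described in that javadoc can be sketched as a comparator. QueryPlan here is a hypothetical stand-in, not the real org.apache.phoenix.compile.QueryPlan interface, and the ranking only models criteria 1 and 3a–3c from the comment:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Hedged sketch of the plan-ordering heuristic from the javadoc: point
// lookups first, then the plan with more bound slots, then order-preserving
// plans, then non-local-index plans. QueryPlan is a hypothetical stand-in.
public class PlanOrdering {
    static class QueryPlan {
        final String name;
        final boolean pointLookup;      // criterion 1
        final int boundSlots;           // criterion 3a
        final boolean preservesOrder;   // criterion 3b
        final boolean localIndex;       // criterion 3c: prefer non-local-index
        QueryPlan(String name, boolean pointLookup, int boundSlots,
                  boolean preservesOrder, boolean localIndex) {
            this.name = name;
            this.pointLookup = pointLookup;
            this.boundSlots = boundSlots;
            this.preservesOrder = preservesOrder;
            this.localIndex = localIndex;
        }
    }

    static List<QueryPlan> orderPlans(List<QueryPlan> plans) {
        List<QueryPlan> sorted = new ArrayList<>(plans);
        sorted.sort(Comparator
                .comparing((QueryPlan p) -> !p.pointLookup)   // point lookups sort first
                .thenComparing(p -> -p.boundSlots)            // more bound slots first
                .thenComparing(p -> !p.preservesOrder)        // order-preserving first
                .thenComparing(p -> p.localIndex));           // non-local-index first
        return sorted;
    }

    public static void main(String[] args) {
        List<QueryPlan> candidates = List.of(
                new QueryPlan("localIndexPlan", false, 3, false, true),
                new QueryPlan("dataTablePlan", false, 2, true, false),
                new QueryPlan("pointLookupPlan", true, 1, false, false));
        for (QueryPlan p : orderPlans(candidates)) {
            System.out.println(p.name);
        }
    }
}
```

As the TODO in the javadoc notes, this greedy ranking is not cost based: the plan with the most bound slots can still scan more bytes than a rival, which is exactly the gap the comment proposes to close.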
  

[2/2] phoenix git commit: PHOENIX-3457 Adjust build settings in pom to improve consistency

2016-11-05 Thread jamestaylor
PHOENIX-3457 Adjust build settings in pom to improve consistency


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/f15c7465
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/f15c7465
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/f15c7465

Branch: refs/heads/4.x-HBase-0.98
Commit: f15c7465c6911cd11b474ef9f043ec08cc41f245
Parents: 1dcac34
Author: James Taylor 
Authored: Sat Nov 5 21:22:29 2016 -0700
Committer: James Taylor 
Committed: Sat Nov 5 21:22:29 2016 -0700

--
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/f15c7465/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 5a2e542..87857b9 100644
--- a/pom.xml
+++ b/pom.xml
@@ -271,7 +271,7 @@
 at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2835)
 at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:490)
 -->

--Xmx3000m -XX:MaxPermSize=256m -Djava.security.egd=file:/dev/./urandom "-Djava.library.path=${hadoop.library.path}${path.separator}${java.library.path}" -XX:NewRatio=4 -XX:SurvivorRatio=8 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+DisableExplicitGC -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:CMSInitiatingOccupancyFraction=68 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./target/
+-Xmx2500m -XX:MaxPermSize=256m -Djava.security.egd=file:/dev/./urandom "-Djava.library.path=${hadoop.library.path}${path.separator}${java.library.path}" -XX:NewRatio=4 -XX:SurvivorRatio=8 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+DisableExplicitGC -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:CMSInitiatingOccupancyFraction=68 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./target/
 
${test.output.tofile}
 kill
 
${basedir}/src/it/java



[1/2] phoenix git commit: PHOENIX-3439 Query using an RVC based on the base table PK is incorrectly using an index and doing a full scan instead of a point query

2016-11-05 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 c6c7181fe -> 8b6eeed62


PHOENIX-3439 Query using an RVC based on the base table PK is incorrectly using 
an index and doing a full scan instead of a point query


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/0ad18ed2
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/0ad18ed2
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/0ad18ed2

Branch: refs/heads/4.x-HBase-1.1
Commit: 0ad18ed2419ab3db6e49919b290cb88b1c92ce73
Parents: c6c7181
Author: James Taylor 
Authored: Sat Nov 5 21:16:41 2016 -0700
Committer: James Taylor 
Committed: Sat Nov 5 21:19:04 2016 -0700

--
 .../org/apache/phoenix/compile/ScanRanges.java  | 18 +--
 .../apache/phoenix/optimize/QueryOptimizer.java | 15 --
 .../phoenix/compile/QueryOptimizerTest.java | 50 
 3 files changed, 75 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/0ad18ed2/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
index 19a4692..5a1fcb7 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
@@ -570,12 +570,24 @@ public class ScanRanges {
         return this.useSkipScanFilter ? ScanUtil.getRowKeyPosition(slotSpan, ranges.size()) : Math.max(getBoundPkSpan(ranges, slotSpan), getBoundMinMaxSlotCount());
     }
 
-    public int getBoundMinMaxSlotCount() {
+    private int getBoundMinMaxSlotCount() {
         if (minMaxRange == KeyRange.EMPTY_RANGE || minMaxRange == KeyRange.EVERYTHING_RANGE) {
             return 0;
         }
-        // The minMaxRange is always a single key
-        return 1 + slotSpan[0];
+        ImmutableBytesWritable ptr = new ImmutableBytesWritable();
+        // We don't track how many slots are bound for the minMaxRange, so we need
+        // to traverse the upper and lower range key and count the slots.
+        int lowerCount = 0;
+        int maxOffset = schema.iterator(minMaxRange.getLowerRange(), ptr);
+        for (int pos = 0; Boolean.TRUE.equals(schema.next(ptr, pos, maxOffset)); pos++) {
+            lowerCount++;
+        }
+        int upperCount = 0;
+        maxOffset = schema.iterator(minMaxRange.getUpperRange(), ptr);
+        for (int pos = 0; Boolean.TRUE.equals(schema.next(ptr, pos, maxOffset)); pos++) {
+            upperCount++;
+        }
+        return Math.max(lowerCount, upperCount);
    }
 
     public int getBoundSlotCount() {

http://git-wip-us.apache.org/repos/asf/phoenix/blob/0ad18ed2/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java b/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
index bd9c811..d77b14b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
@@ -309,13 +309,16 @@ public class QueryOptimizer {
     /**
      * Order the plans among all the possible ones from best to worst.
      * Since we don't keep stats yet, we use the following simple algorithm:
-     * 1) If the query is a point lookup (i.e. we have a set of exact row keys), choose among those.
+     * 1) If the query is a point lookup (i.e. we have a set of exact row keys), choose that one immediately.
      * 2) If the query has an ORDER BY and a LIMIT, choose the plan that has all the ORDER BY expression
      * in the same order as the row key columns.
      * 3) If there are more than one plan that meets (1&2), choose the plan with:
-     *    a) the most row key columns that may be used to form the start/stop scan key.
+     *    a) the most row key columns that may be used to form the start/stop scan key (i.e. bound slots).
      *    b) the plan that preserves ordering for a group by.
-     *    c) the data table plan
+     *    c) the non local index table plan
+     * TODO: We should make more of a cost based choice: The largest number of bound slots does not necessarily
+     * correspond to the least bytes scanned. We could consider the slots bound for upper and lower ranges
+     * separately, or we could calculate the bytes scanned between the start and stop row of each table.
      * @param plans the list of candidate plans
      * @return list of plans ordered from best to worst.

[2/2] phoenix git commit: PHOENIX-3457 Adjust build settings in pom to improve consistency

2016-11-05 Thread jamestaylor
PHOENIX-3457 Adjust build settings in pom to improve consistency


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/8b6eeed6
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/8b6eeed6
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/8b6eeed6

Branch: refs/heads/4.x-HBase-1.1
Commit: 8b6eeed6290b6be26f91e9a7250da7d30bfebddb
Parents: 0ad18ed
Author: James Taylor 
Authored: Sat Nov 5 21:20:33 2016 -0700
Committer: James Taylor 
Committed: Sat Nov 5 21:20:33 2016 -0700

--
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/8b6eeed6/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 6b1ec23..53854e9 100644
--- a/pom.xml
+++ b/pom.xml
@@ -271,7 +271,7 @@
 at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2835)
 at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:490)
 -->

--Xmx3000m -XX:MaxPermSize=256m -Djava.security.egd=file:/dev/./urandom "-Djava.library.path=${hadoop.library.path}${path.separator}${java.library.path}" -XX:NewRatio=4 -XX:SurvivorRatio=8 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+DisableExplicitGC -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:CMSInitiatingOccupancyFraction=68 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./target/
+-Xmx2500m -XX:MaxPermSize=256m -Djava.security.egd=file:/dev/./urandom "-Djava.library.path=${hadoop.library.path}${path.separator}${java.library.path}" -XX:NewRatio=4 -XX:SurvivorRatio=8 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+DisableExplicitGC -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:CMSInitiatingOccupancyFraction=68 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./target/
 
${test.output.tofile}
 kill
 
${basedir}/src/it/java



phoenix git commit: PHOENIX-3457 Adjust build settings in pom to improve consistency

2016-11-05 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/master 397f60609 -> a31386241


PHOENIX-3457 Adjust build settings in pom to improve consistency


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/a3138624
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/a3138624
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/a3138624

Branch: refs/heads/master
Commit: a313862411b4f6243a0a6e84e35f13b2bddd41de
Parents: 397f606
Author: James Taylor 
Authored: Sat Nov 5 21:17:56 2016 -0700
Committer: James Taylor 
Committed: Sat Nov 5 21:17:56 2016 -0700

--
 pom.xml | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/a3138624/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 2ec1460..dcfa6f4 100644
--- a/pom.xml
+++ b/pom.xml
@@ -117,8 +117,8 @@
 2.5.2
 
 
-6
-6
+8
+8
 
 
 UTF-8
@@ -271,7 +271,7 @@
 at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:2835)
 at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:490)
 -->

--Xmx3000m -XX:MaxPermSize=256m -Djava.security.egd=file:/dev/./urandom "-Djava.library.path=${hadoop.library.path}${path.separator}${java.library.path}" -XX:NewRatio=4 -XX:SurvivorRatio=8 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+DisableExplicitGC -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:CMSInitiatingOccupancyFraction=68 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./target/
+-Xmx2500m -XX:MaxPermSize=256m -Djava.security.egd=file:/dev/./urandom "-Djava.library.path=${hadoop.library.path}${path.separator}${java.library.path}" -XX:NewRatio=4 -XX:SurvivorRatio=8 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+DisableExplicitGC -XX:+UseCMSInitiatingOccupancyOnly -XX:+CMSClassUnloadingEnabled -XX:+CMSScavengeBeforeRemark -XX:CMSInitiatingOccupancyFraction=68 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=./target/
 
${test.output.tofile}
 kill
 
${basedir}/src/it/java



phoenix git commit: PHOENIX-3439 Query using an RVC based on the base table PK is incorrectly using an index and doing a full scan instead of a point query

2016-11-05 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/master d2b4c4c71 -> 397f60609


PHOENIX-3439 Query using an RVC based on the base table PK is incorrectly using 
an index and doing a full scan instead of a point query


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/397f6060
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/397f6060
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/397f6060

Branch: refs/heads/master
Commit: 397f60609b87317f42470790167421eca74db2de
Parents: d2b4c4c
Author: James Taylor 
Authored: Sat Nov 5 21:16:41 2016 -0700
Committer: James Taylor 
Committed: Sat Nov 5 21:16:41 2016 -0700

--
 .../org/apache/phoenix/compile/ScanRanges.java  | 18 +--
 .../apache/phoenix/optimize/QueryOptimizer.java | 15 --
 .../phoenix/compile/QueryOptimizerTest.java | 50 
 3 files changed, 75 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/397f6060/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
index 19a4692..5a1fcb7 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java
@@ -570,12 +570,24 @@ public class ScanRanges {
         return this.useSkipScanFilter ? ScanUtil.getRowKeyPosition(slotSpan, ranges.size()) : Math.max(getBoundPkSpan(ranges, slotSpan), getBoundMinMaxSlotCount());
     }
 
-    public int getBoundMinMaxSlotCount() {
+    private int getBoundMinMaxSlotCount() {
         if (minMaxRange == KeyRange.EMPTY_RANGE || minMaxRange == KeyRange.EVERYTHING_RANGE) {
             return 0;
         }
-        // The minMaxRange is always a single key
-        return 1 + slotSpan[0];
+        ImmutableBytesWritable ptr = new ImmutableBytesWritable();
+        // We don't track how many slots are bound for the minMaxRange, so we need
+        // to traverse the upper and lower range key and count the slots.
+        int lowerCount = 0;
+        int maxOffset = schema.iterator(minMaxRange.getLowerRange(), ptr);
+        for (int pos = 0; Boolean.TRUE.equals(schema.next(ptr, pos, maxOffset)); pos++) {
+            lowerCount++;
+        }
+        int upperCount = 0;
+        maxOffset = schema.iterator(minMaxRange.getUpperRange(), ptr);
+        for (int pos = 0; Boolean.TRUE.equals(schema.next(ptr, pos, maxOffset)); pos++) {
+            upperCount++;
+        }
+        return Math.max(lowerCount, upperCount);
    }
 
     public int getBoundSlotCount() {

http://git-wip-us.apache.org/repos/asf/phoenix/blob/397f6060/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java b/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
index bd9c811..d77b14b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/optimize/QueryOptimizer.java
@@ -309,13 +309,16 @@ public class QueryOptimizer {
     /**
      * Order the plans among all the possible ones from best to worst.
      * Since we don't keep stats yet, we use the following simple algorithm:
-     * 1) If the query is a point lookup (i.e. we have a set of exact row keys), choose among those.
+     * 1) If the query is a point lookup (i.e. we have a set of exact row keys), choose that one immediately.
      * 2) If the query has an ORDER BY and a LIMIT, choose the plan that has all the ORDER BY expression
      * in the same order as the row key columns.
      * 3) If there are more than one plan that meets (1&2), choose the plan with:
-     *    a) the most row key columns that may be used to form the start/stop scan key.
+     *    a) the most row key columns that may be used to form the start/stop scan key (i.e. bound slots).
      *    b) the plan that preserves ordering for a group by.
-     *    c) the data table plan
+     *    c) the non local index table plan
+     * TODO: We should make more of a cost based choice: The largest number of bound slots does not necessarily
+     * correspond to the least bytes scanned. We could consider the slots bound for upper and lower ranges
+     * separately, or we could calculate the bytes scanned between the start and stop row of each table.
      * @param plans the list of candidate plans
      * @return list of plans ordered from best to worst.
      */
@@ -380,1

Build failed in Jenkins: Phoenix | 4.x-HBase-0.98 #1369

2016-11-05 Thread Apache Jenkins Server
See 

Changes:

[jamestaylor] PHOENIX-3449 Ignore hanging IndexExtendedIT tests until they can 
be

[jamestaylor] PHOENIX-3457 Adjust build settings in pom to improve consistency

--
[...truncated 1242 lines...]
at 
org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:3804)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3766)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3830)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.acquireLock(MetaDataEndpointImpl.java:1568)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1710)
at 
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
at 
org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6041)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3520)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3502)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31194)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)


testImportOneLocalIndexTable(org.apache.phoenix.end2end.CsvBulkLoadToolIT)  
Time elapsed: 2,406.296 sec  <<< ERROR!
org.apache.phoenix.exception.PhoenixIOException: callTimeout=120, 
callDuration=1202390: row '  TABLE5_IDX' on table 'SYSTEM.CATALOG' at 
region=SYSTEM.CATALOG,,1478375043796.a060877d4e5fc5f1f88f29a6ed2826fc., 
hostname=penates.apache.org,47021,1478375036900, seqNum=1
at 
org.apache.phoenix.end2end.CsvBulkLoadToolIT.testImportOneIndexTable(CsvBulkLoadToolIT.java:309)
at 
org.apache.phoenix.end2end.CsvBulkLoadToolIT.testImportOneLocalIndexTable(CsvBulkLoadToolIT.java:297)
Caused by: java.net.SocketTimeoutException: callTimeout=120, 
callDuration=1202390: row '  TABLE5_IDX' on table 'SYSTEM.CATALOG' at 
region=SYSTEM.CATALOG,,1478375043796.a060877d4e5fc5f1f88f29a6ed2826fc., 
hostname=penates.apache.org,47021,1478375036900, seqNum=1
Caused by: java.net.SocketTimeoutException: Call to 
penates.apache.org/67.195.81.186:47021 failed because 
java.net.SocketTimeoutException: 120 millis timeout while waiting for 
channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
local=/67.195.81.186:41574 remote=penates.apache.org/67.195.81.186:47021]
Caused by: java.net.SocketTimeoutException: 120 millis timeout while 
waiting for channel to be ready for read. ch : 
java.nio.channels.SocketChannel[connected local=/67.195.81.186:41574 
remote=penates.apache.org/67.195.81.186:47021]

testImportOneLocalIndexTable(org.apache.phoenix.end2end.CsvBulkLoadToolIT)  
Time elapsed: 2,406.297 sec  <<< ERROR!
org.apache.phoenix.exception.PhoenixIOException: callTimeout=120, 
callDuration=1221676: row '  TABLE5' on table 'SYSTEM.CATALOG' at 
region=SYSTEM.CATALOG,,1478375043796.a060877d4e5fc5f1f88f29a6ed2826fc., 
hostname=penates.apache.org,47021,1478375036900, seqNum=1
Caused by: java.net.SocketTimeoutException: callTimeout=120, 
callDuration=1221676: row '  TABLE5' on table 'SYSTEM.CATALOG' at 
region=SYSTEM.CATALOG,,1478375043796.a060877d4e5fc5f1f88f29a6ed2826fc., 
hostname=penates.apache.org,47021,1478375036900, seqNum=1
Caused by: java.io.IOException: 
java.io.IOException: Timed out waiting for lock for row: \x00\x00TABLE5
at 
org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:3804)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3766)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3830)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.acquireLock(MetaDataEndpointImpl.java:1568)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1710)
at 
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
at 
org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6041)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3520)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3502)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31194)
at org.apache.hadoop.hbase.ipc.RpcSe

Apache-Phoenix | Master | Build Successful

2016-11-05 Thread Apache Jenkins Server
Master branch build status Successful
Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/master

Last Successful Compiled Artifacts https://builds.apache.org/job/Phoenix-master/lastSuccessfulBuild/artifact/

Last Complete Test Report https://builds.apache.org/job/Phoenix-master/lastCompletedBuild/testReport/

Changes
[jamestaylor] PHOENIX-3457 Adjust build settings in pom to improve consistency



Build times for last couple of runs. Latest build time is the rightmost | Legend blue: normal, red: test failure, gray: timeout


Build failed in Jenkins: Phoenix | 4.x-HBase-0.98 #1368

2016-11-05 Thread Apache Jenkins Server
See 

Changes:

[jamestaylor] PHOENIX-3457 Disable parallel run of tests and increase memory

--
[...truncated 1064 lines...]
java.io.IOException: Timed out waiting for lock for row: \x00\x00TABLE6
at 
org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:3804)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3766)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3830)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.acquireLock(MetaDataEndpointImpl.java:1568)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1710)
at 
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
at 
org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6041)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3520)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3502)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31194)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)


testImportOneLocalIndexTable(org.apache.phoenix.end2end.CsvBulkLoadToolIT)  
Time elapsed: 2,406.387 sec  <<< ERROR!
org.apache.phoenix.exception.PhoenixIOException: callTimeout=120, 
callDuration=1202403: row '  TABLE5_IDX' on table 'SYSTEM.CATALOG' at 
region=SYSTEM.CATALOG,,1478360224688.24f660085dbfa1bfdcfc04e030b4943c., 
hostname=penates.apache.org,60463,1478360215108, seqNum=1
at 
org.apache.phoenix.end2end.CsvBulkLoadToolIT.testImportOneIndexTable(CsvBulkLoadToolIT.java:309)
at 
org.apache.phoenix.end2end.CsvBulkLoadToolIT.testImportOneLocalIndexTable(CsvBulkLoadToolIT.java:297)
Caused by: java.net.SocketTimeoutException: callTimeout=120, 
callDuration=1202403: row '  TABLE5_IDX' on table 'SYSTEM.CATALOG' at 
region=SYSTEM.CATALOG,,1478360224688.24f660085dbfa1bfdcfc04e030b4943c., 
hostname=penates.apache.org,60463,1478360215108, seqNum=1
Caused by: java.net.SocketTimeoutException: Call to 
penates.apache.org/67.195.81.186:60463 failed because 
java.net.SocketTimeoutException: 120 millis timeout while waiting for 
channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
local=/67.195.81.186:37741 remote=penates.apache.org/67.195.81.186:60463]
Caused by: java.net.SocketTimeoutException: 120 millis timeout while 
waiting for channel to be ready for read. ch : 
java.nio.channels.SocketChannel[connected local=/67.195.81.186:37741 
remote=penates.apache.org/67.195.81.186:60463]

testImportOneLocalIndexTable(org.apache.phoenix.end2end.CsvBulkLoadToolIT)  
Time elapsed: 2,406.387 sec  <<< ERROR!
org.apache.phoenix.exception.PhoenixIOException: callTimeout=120, 
callDuration=1222085: row '  TABLE5' on table 'SYSTEM.CATALOG' at 
region=SYSTEM.CATALOG,,1478360224688.24f660085dbfa1bfdcfc04e030b4943c., 
hostname=penates.apache.org,60463,1478360215108, seqNum=1
Caused by: java.net.SocketTimeoutException: callTimeout=120, 
callDuration=1222085: row '  TABLE5' on table 'SYSTEM.CATALOG' at 
region=SYSTEM.CATALOG,,1478360224688.24f660085dbfa1bfdcfc04e030b4943c., 
hostname=penates.apache.org,60463,1478360215108, seqNum=1
Caused by: java.io.IOException: 
java.io.IOException: Timed out waiting for lock for row: \x00\x00TABLE5
at 
org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:3804)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3766)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3830)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.acquireLock(MetaDataEndpointImpl.java:1568)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1710)
at 
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
at 
org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6041)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3520)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3502)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31194)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcS

Build failed in Jenkins: Phoenix-4.x-HBase-1.1 #258

2016-11-05 Thread Apache Jenkins Server
See 

Changes:

[jamestaylor] PHOENIX-3457 Adjust build settings in pom to improve consistency

--
[...truncated 393 lines...]
Running org.apache.phoenix.expression.RoundFloorCeilExpressionsTest
Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.09 sec - in 
org.apache.phoenix.expression.ArrayToStringFunctionTest
Running org.apache.phoenix.expression.RegexpSubstrFunctionTest
Tests run: 37, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.539 sec - in 
org.apache.phoenix.expression.ArrayPrependFunctionTest
Running org.apache.phoenix.expression.OctetLengthFunctionTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.871 sec - in 
org.apache.phoenix.expression.NullValueTest
Running org.apache.phoenix.expression.CoerceExpressionTest
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.014 sec - in 
org.apache.phoenix.expression.CoerceExpressionTest
Running org.apache.phoenix.expression.SortOrderExpressionTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.12 sec - in 
org.apache.phoenix.expression.OctetLengthFunctionTest
Running org.apache.phoenix.expression.ArrayConstructorExpressionTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.002 sec - in 
org.apache.phoenix.expression.ArrayConstructorExpressionTest
Running org.apache.phoenix.expression.PowerFunctionTest
Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.09 sec - in 
org.apache.phoenix.expression.SortOrderExpressionTest
Running org.apache.phoenix.expression.function.ExternalSqlTypeIdFunctionTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.085 sec - in 
org.apache.phoenix.expression.PowerFunctionTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.356 sec - in org.apache.phoenix.expression.RegexpSubstrFunctionTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.007 sec - in org.apache.phoenix.expression.function.ExternalSqlTypeIdFunctionTest
Running org.apache.phoenix.expression.function.InstrFunctionTest
Running org.apache.phoenix.expression.ArrayFillFunctionTest
Running org.apache.phoenix.expression.function.BuiltinFunctionConstructorTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.011 sec - in org.apache.phoenix.expression.function.InstrFunctionTest
Running org.apache.phoenix.expression.RegexpReplaceFunctionTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.046 sec - in org.apache.phoenix.expression.RegexpReplaceFunctionTest
Running org.apache.phoenix.expression.SqrtFunctionTest
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.069 sec - in org.apache.phoenix.expression.ArrayFillFunctionTest
Running org.apache.phoenix.expression.CbrtFunctionTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.021 sec - in org.apache.phoenix.expression.SqrtFunctionTest
Running org.apache.phoenix.expression.LnLogFunctionTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.029 sec - in org.apache.phoenix.expression.CbrtFunctionTest
Running org.apache.phoenix.expression.ColumnExpressionTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.013 sec - in org.apache.phoenix.expression.ColumnExpressionTest
Running org.apache.phoenix.expression.AbsFunctionTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.051 sec - in org.apache.phoenix.expression.AbsFunctionTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.17 sec - in org.apache.phoenix.expression.function.BuiltinFunctionConstructorTest
Running org.apache.phoenix.query.OrderByTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.093 sec - in org.apache.phoenix.expression.LnLogFunctionTest
Running org.apache.phoenix.expression.StringToArrayFunctionTest
Running org.apache.phoenix.query.ConnectionlessTest
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.092 sec - in org.apache.phoenix.expression.StringToArrayFunctionTest
Running org.apache.phoenix.query.KeyRangeIntersectTest
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.155 sec - in org.apache.phoenix.query.KeyRangeIntersectTest
Running org.apache.phoenix.query.KeyRangeUnionTest
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.119 sec - in org.apache.phoenix.query.KeyRangeUnionTest
Running org.apache.phoenix.query.HBaseFactoryProviderTest
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.01 sec - in org.apache.phoenix.query.HBaseFactoryProviderTest
Running org.apache.phoenix.query.ScannerLeaseRenewalTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.775 sec - in org.apache.phoenix.query.OrderByTest
Running org.apache.phoenix.query.ParallelIteratorsSplitTest
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.609 sec - in org.apac

phoenix git commit: PHOENIX-3457 Adjust build settings in pom to improve consistency

2016-11-05 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/master f0295e5ca -> d2b4c4c71


PHOENIX-3457 Adjust build settings in pom to improve consistency


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/d2b4c4c7
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/d2b4c4c7
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/d2b4c4c7

Branch: refs/heads/master
Commit: d2b4c4c71905e21241d6276a3cb1ae3a20bb5079
Parents: f0295e5
Author: James Taylor 
Authored: Sat Nov 5 11:07:47 2016 -0700
Committer: James Taylor 
Committed: Sat Nov 5 11:07:47 2016 -0700

--
 pom.xml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/d2b4c4c7/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 02dc865..2ec1460 100644
--- a/pom.xml
+++ b/pom.xml
@@ -117,8 +117,8 @@
 2.5.2
 
 
-4
-4
+6
+6
 
 
 UTF-8
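Note: the archive's HTML rendering stripped the XML tags from the hunk above, so only the property values (4 changed to 6 on each of two lines) survive. In a Maven pom such numeric properties typically feed the surefire/failsafe fork count, which is what "adjust build settings to improve consistency" is tuning here. A hypothetical sketch of the shape of such a change; the property names below are illustrative, not necessarily the actual tags that were stripped:

```xml
<!-- Hypothetical property names; the real tags were lost in the archive. -->
<properties>
  <!-- number of forked JVMs for unit tests (surefire) -->
  <numForkedUT>6</numForkedUT>
  <!-- number of forked JVMs for integration tests (failsafe) -->
  <numForkedIT>6</numForkedIT>
</properties>
<!-- ... elsewhere in the pom, the property is consumed like this: -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <forkCount>${numForkedUT}</forkCount>
    <reuseForks>true</reuseForks>
  </configuration>
</plugin>
```

With this layout, raising the property from 4 to 6 increases test parallelism (and total memory demand) without touching the plugin blocks themselves.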



phoenix git commit: PHOENIX-3457 Adjust build settings in pom to improve consistency

2016-11-05 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 2c8b70a4c -> c6c7181fe


PHOENIX-3457 Adjust build settings in pom to improve consistency


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/c6c7181f
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/c6c7181f
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/c6c7181f

Branch: refs/heads/4.x-HBase-1.1
Commit: c6c7181fe2b9f416396ba180ecdc6af48cca66fe
Parents: 2c8b70a
Author: James Taylor 
Authored: Sat Nov 5 11:06:44 2016 -0700
Committer: James Taylor 
Committed: Sat Nov 5 11:06:44 2016 -0700

--
 pom.xml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/c6c7181f/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 7c5a53d..6b1ec23 100644
--- a/pom.xml
+++ b/pom.xml
@@ -117,8 +117,8 @@
 2.5.2
 
 
-4
-4
+6
+6
 
 
 UTF-8



[2/2] phoenix git commit: PHOENIX-3457 Adjust build settings in pom to improve consistency

2016-11-05 Thread jamestaylor
PHOENIX-3457 Adjust build settings in pom to improve consistency


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/291624f1
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/291624f1
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/291624f1

Branch: refs/heads/4.x-HBase-0.98
Commit: 291624f114e7ae91180df5a12d7fe6954358774e
Parents: 0ec5774
Author: James Taylor 
Authored: Sat Nov 5 11:05:25 2016 -0700
Committer: James Taylor 
Committed: Sat Nov 5 11:05:25 2016 -0700

--
 pom.xml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/291624f1/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 7de9d82..5a2e542 100644
--- a/pom.xml
+++ b/pom.xml
@@ -117,8 +117,8 @@
 2.5.2
 
 
-4
-4
+6
+6
 
 
 UTF-8



[1/2] phoenix git commit: PHOENIX-3449 Ignore hanging IndexExtendedIT tests until they can be investigated

2016-11-05 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 7bcf5bac4 -> 291624f11


PHOENIX-3449 Ignore hanging IndexExtendedIT tests until they can be investigated


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/0ec57748
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/0ec57748
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/0ec57748

Branch: refs/heads/4.x-HBase-0.98
Commit: 0ec57748bee09fd17c76ddc4e0221a3e701cdd18
Parents: 7bcf5ba
Author: James Taylor 
Authored: Sat Nov 5 11:04:33 2016 -0700
Committer: James Taylor 
Committed: Sat Nov 5 11:04:33 2016 -0700

--
 .../src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java  | 4 
 1 file changed, 4 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/0ec57748/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
index bab1ae1..6195fa5 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexExtendedIT.java
@@ -106,7 +106,9 @@ public class IndexExtendedIT extends BaseTest {
 Map serverProps = Maps.newHashMapWithExpectedSize(2);
 serverProps.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
 Map clientProps = Maps.newHashMapWithExpectedSize(2);
+/* Commenting out due to potential issue in PHOENIX-3448 and general flappiness
 clientProps.put(QueryServices.TRANSACTIONS_ENABLED, Boolean.TRUE.toString());
+*/
 clientProps.put(QueryServices.FORCE_ROW_KEY_ORDER_ATTRIB, Boolean.TRUE.toString());
 setUpTestDriver(new ReadOnlyProps(serverProps.entrySet().iterator()), new ReadOnlyProps(clientProps.entrySet().iterator()));
@@ -117,8 +119,10 @@ public class IndexExtendedIT extends BaseTest {
 return Arrays.asList(new Boolean[][] {
  { false, false, false, false }, { false, false, false, true }, { false, false, true, false }, { false, false, true, true },
  { false, true, false, false }, { false, true, false, true }, { false, true, true, false }, { false, true, true, true },
+ /* Commenting out due to potential issue in PHOENIX-3448 and general flappiness
  { true, false, false, false }, { true, false, false, true }, { true, false, true, false }, { true, false, true, true },
  { true, true, false, false }, { true, true, false, true }, { true, true, true, false }, { true, true, true, true }
+ */
 });
 }
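The diff above disables the transactional half of a JUnit parameterized test matrix by commenting rows out of the data() provider. A minimal, self-contained sketch of that pattern (class and flag names here are illustrative, not the actual IndexExtendedIT parameters):

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the @Parameterized data-provider pattern used by tests like
// IndexExtendedIT: each Boolean[] row is one test configuration, and the
// runner instantiates the test class once per row. Removing rows from
// data() disables a whole slice of configurations, which is what the
// PHOENIX-3449 change does for the transactional (first-flag-true) rows.
public class ParameterizedDataSketch {
    // In a real JUnit 4 test this method would carry
    // @Parameterized.Parameters and feed the test constructor.
    public static List<Boolean[]> data() {
        return Arrays.asList(new Boolean[][] {
            { false, false, false, false }, { false, false, false, true },
            { false, true, false, false }, { false, true, true, true },
            /* transactional rows disabled, mirroring the commit above:
            { true, false, false, false }, { true, true, true, true },
            */
        });
    }

    public static void main(String[] args) {
        // Only the four uncommented rows remain active.
        System.out.println(data().size()); // prints 4
    }
}
```

Because the rows live in plain code rather than annotations, commenting them out is a one-line-pair change that is trivially revertible once the underlying flakiness is fixed.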
 



Jenkins build is back to normal : Phoenix | Master #1476

2016-11-05 Thread Apache Jenkins Server
See 



svn commit: r16848 - in /dev/phoenix: apache-phoenix-4.9.0-HBase-0.98-rc3/ apache-phoenix-4.9.0-HBase-0.98-rc3/bin/ apache-phoenix-4.9.0-HBase-0.98-rc3/src/ apache-phoenix-4.9.0-HBase-1.1-rc3/ apache-

2016-11-05 Thread mujtaba
Author: mujtaba
Date: Sat Nov  5 16:50:53 2016
New Revision: 16848

Log:
Phoenix 4.9.0-RC3

Added:
dev/phoenix/apache-phoenix-4.9.0-HBase-0.98-rc3/
dev/phoenix/apache-phoenix-4.9.0-HBase-0.98-rc3/bin/

dev/phoenix/apache-phoenix-4.9.0-HBase-0.98-rc3/bin/apache-phoenix-4.9.0-HBase-0.98-bin.tar.gz
   (with props)

dev/phoenix/apache-phoenix-4.9.0-HBase-0.98-rc3/bin/apache-phoenix-4.9.0-HBase-0.98-bin.tar.gz.asc

dev/phoenix/apache-phoenix-4.9.0-HBase-0.98-rc3/bin/apache-phoenix-4.9.0-HBase-0.98-bin.tar.gz.md5

dev/phoenix/apache-phoenix-4.9.0-HBase-0.98-rc3/bin/apache-phoenix-4.9.0-HBase-0.98-bin.tar.gz.sha
dev/phoenix/apache-phoenix-4.9.0-HBase-0.98-rc3/src/

dev/phoenix/apache-phoenix-4.9.0-HBase-0.98-rc3/src/apache-phoenix-4.9.0-HBase-0.98-src.tar.gz
   (with props)

dev/phoenix/apache-phoenix-4.9.0-HBase-0.98-rc3/src/apache-phoenix-4.9.0-HBase-0.98-src.tar.gz.asc

dev/phoenix/apache-phoenix-4.9.0-HBase-0.98-rc3/src/apache-phoenix-4.9.0-HBase-0.98-src.tar.gz.md5

dev/phoenix/apache-phoenix-4.9.0-HBase-0.98-rc3/src/apache-phoenix-4.9.0-HBase-0.98-src.tar.gz.sha
dev/phoenix/apache-phoenix-4.9.0-HBase-1.1-rc3/
dev/phoenix/apache-phoenix-4.9.0-HBase-1.1-rc3/bin/

dev/phoenix/apache-phoenix-4.9.0-HBase-1.1-rc3/bin/apache-phoenix-4.9.0-HBase-1.1-bin.tar.gz
   (with props)

dev/phoenix/apache-phoenix-4.9.0-HBase-1.1-rc3/bin/apache-phoenix-4.9.0-HBase-1.1-bin.tar.gz.asc

dev/phoenix/apache-phoenix-4.9.0-HBase-1.1-rc3/bin/apache-phoenix-4.9.0-HBase-1.1-bin.tar.gz.md5

dev/phoenix/apache-phoenix-4.9.0-HBase-1.1-rc3/bin/apache-phoenix-4.9.0-HBase-1.1-bin.tar.gz.sha
dev/phoenix/apache-phoenix-4.9.0-HBase-1.1-rc3/src/

dev/phoenix/apache-phoenix-4.9.0-HBase-1.1-rc3/src/apache-phoenix-4.9.0-HBase-1.1-src.tar.gz
   (with props)

dev/phoenix/apache-phoenix-4.9.0-HBase-1.1-rc3/src/apache-phoenix-4.9.0-HBase-1.1-src.tar.gz.asc

dev/phoenix/apache-phoenix-4.9.0-HBase-1.1-rc3/src/apache-phoenix-4.9.0-HBase-1.1-src.tar.gz.md5

dev/phoenix/apache-phoenix-4.9.0-HBase-1.1-rc3/src/apache-phoenix-4.9.0-HBase-1.1-src.tar.gz.sha
dev/phoenix/apache-phoenix-4.9.0-HBase-1.2-rc3/
dev/phoenix/apache-phoenix-4.9.0-HBase-1.2-rc3/bin/

dev/phoenix/apache-phoenix-4.9.0-HBase-1.2-rc3/bin/apache-phoenix-4.9.0-HBase-1.2-bin.tar.gz
   (with props)

dev/phoenix/apache-phoenix-4.9.0-HBase-1.2-rc3/bin/apache-phoenix-4.9.0-HBase-1.2-bin.tar.gz.asc

dev/phoenix/apache-phoenix-4.9.0-HBase-1.2-rc3/bin/apache-phoenix-4.9.0-HBase-1.2-bin.tar.gz.md5

dev/phoenix/apache-phoenix-4.9.0-HBase-1.2-rc3/bin/apache-phoenix-4.9.0-HBase-1.2-bin.tar.gz.sha
dev/phoenix/apache-phoenix-4.9.0-HBase-1.2-rc3/src/

dev/phoenix/apache-phoenix-4.9.0-HBase-1.2-rc3/src/apache-phoenix-4.9.0-HBase-1.2-src.tar.gz
   (with props)

dev/phoenix/apache-phoenix-4.9.0-HBase-1.2-rc3/src/apache-phoenix-4.9.0-HBase-1.2-src.tar.gz.asc

dev/phoenix/apache-phoenix-4.9.0-HBase-1.2-rc3/src/apache-phoenix-4.9.0-HBase-1.2-src.tar.gz.md5

dev/phoenix/apache-phoenix-4.9.0-HBase-1.2-rc3/src/apache-phoenix-4.9.0-HBase-1.2-src.tar.gz.sha

Added: dev/phoenix/apache-phoenix-4.9.0-HBase-0.98-rc3/bin/apache-phoenix-4.9.0-HBase-0.98-bin.tar.gz
==============================================================================
Binary file - no diff available.

Propchange: dev/phoenix/apache-phoenix-4.9.0-HBase-0.98-rc3/bin/apache-phoenix-4.9.0-HBase-0.98-bin.tar.gz
------------------------------------------------------------------------------
    svn:mime-type = application/octet-stream

Added: dev/phoenix/apache-phoenix-4.9.0-HBase-0.98-rc3/bin/apache-phoenix-4.9.0-HBase-0.98-bin.tar.gz.asc
==============================================================================
--- dev/phoenix/apache-phoenix-4.9.0-HBase-0.98-rc3/bin/apache-phoenix-4.9.0-HBase-0.98-bin.tar.gz.asc (added)
+++ dev/phoenix/apache-phoenix-4.9.0-HBase-0.98-rc3/bin/apache-phoenix-4.9.0-HBase-0.98-bin.tar.gz.asc Sat Nov  5 16:50:53 2016
@@ -0,0 +1,17 @@
+-BEGIN PGP SIGNATURE-
+Version: GnuPG v1.4.11 (GNU/Linux)
+
+iQIcBAABAgAGBQJYHf/IAAoJEDv8s5KUYReOmVoQAKP8TUhs2tILvJF/WbrgV7hY
+52lLtg7cvmwOxVFu/ow3xdQdY4Dq1HhoKWNeonIDGQOoqWLAdmdtIo0D/qVjxWUU
+q3m4jfkSpLApYa+5ldwAxe5wjdtXLv0El7I5uAnsCyWoqh13cKt2S2ItJK4krRZr
+hqPQruFfULQGWrJLvTma6vnDJGXmKqLsMc9GwPFM5cybTtVKSUrySc6Fp4kjg4Wj
+HXik/Jh0Fm/jrQbIbBP5mO/3cqNu6HOptw4nHmhRtxtwRzWwI5QWv1kG6xHhy5kJ
+XfJY/kLFsBVaJAQdAovoNXa53rzeTbCajRhAuWm4HMzfJrXq/sHQ5br4pvK+1AZc
+lq5h9D3DxGopcCVwqVvcx3MvTVeGPA+5i1jaTevabBzqqqWykTqkLoqF9nSEgK0c
+lIkLZib1Yia32XCtQ/PidJNdvdnH8dPvFfPS/lWydeoxOfjBZzeeAZ4lI+uBx0wz
+/Rf2aB5K849TpLW+dIF2p+dK9F5e06uIB9QSpoyiUvglWuWD49NA46PsBzKSIAiN
+rpTqs4OMa06dlxKLwVgYHd6lvXJmSsEQtD3Hu5LVNOo0pWBTTJB7ap0K0/0E9qa0
+bYY/xYPortvhXC5ME78IqwKOMDkt1a5AIcxt+V1jech/Rh2BfswwH4bAvasX85yS
+BTs5bXTURRwDwqLf7+Kx
+=0zi6
+-END PGP SIGNATURE-

Added: dev/phoenix/apache-phoenix-4.9.0-HBase-0.98-rc3/bin/apache-phoenix-4.9.0-HBase-0.98-bin.tar.gz.md5

[phoenix] Git Push Summary

2016-11-05 Thread mujtaba
Repository: phoenix
Updated Tags:  refs/tags/v4.9.0-HBase-1.2-rc3 [created] fbb108057


[phoenix] Git Push Summary

2016-11-05 Thread mujtaba
Repository: phoenix
Updated Tags:  refs/tags/v4.9.0-HBase-1.1-rc3 [created] e5188dea0


[phoenix] Git Push Summary

2016-11-05 Thread mujtaba
Repository: phoenix
Updated Tags:  refs/tags/v4.9.0-HBase-0.98-rc3 [created] 29438f752


Build failed in Jenkins: Phoenix-4.x-HBase-1.1 #257

2016-11-05 Thread Apache Jenkins Server
See 

Changes:

[jamestaylor] PHOENIX-3457 Disable parallel run of tests and increase memory

--
[...truncated 717 lines...]
Running org.apache.phoenix.end2end.index.GlobalIndexOptimizationIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.715 sec - in org.apache.phoenix.end2end.index.GlobalIndexOptimizationIT
Running org.apache.phoenix.end2end.index.IndexExpressionIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.327 sec - in org.apache.phoenix.end2end.index.DropMetadataIT
Running org.apache.phoenix.end2end.index.IndexIT
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 272.819 sec - in org.apache.phoenix.end2end.UpgradeIT
Running org.apache.phoenix.end2end.index.IndexMetadataIT
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.771 sec - in org.apache.phoenix.end2end.index.IndexMetadataIT
Running org.apache.phoenix.end2end.index.LocalIndexIT
Tests run: 102, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 867.292 sec - in org.apache.phoenix.end2end.SortMergeJoinIT
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0007be356000, 172408832, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 172408832 bytes for committing reserved memory.
# An error report file with more information is saved as:
# 
Tests run: 66, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 377.246 sec - in org.apache.phoenix.end2end.index.IndexExpressionIT
Running org.apache.phoenix.end2end.index.MutableIndexIT
Running org.apache.phoenix.end2end.index.SaltedIndexIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.305 sec - in org.apache.phoenix.end2end.index.SaltedIndexIT
Running org.apache.phoenix.end2end.index.ViewIndexIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.742 sec - in org.apache.phoenix.end2end.index.ViewIndexIT
Running org.apache.phoenix.end2end.index.txn.MutableRollbackIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 58.108 sec - in org.apache.phoenix.end2end.index.txn.MutableRollbackIT
Running org.apache.phoenix.end2end.index.txn.RollbackIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.102 sec - in org.apache.phoenix.end2end.index.txn.RollbackIT
Running org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.881 sec - in org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
Running org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.263 sec - in org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
Running org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.705 sec - in org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
Running org.apache.phoenix.iterate.RoundRobinResultIteratorIT
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.614 sec - in org.apache.phoenix.iterate.RoundRobinResultIteratorIT
Running org.apache.phoenix.rpc.UpdateCacheIT
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.55 sec - in org.apache.phoenix.rpc.UpdateCacheIT
Running org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.542 sec - in org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
Running org.apache.phoenix.trace.PhoenixTraceReaderIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.577 sec - in org.apache.phoenix.trace.PhoenixTraceReaderIT
Running org.apache.phoenix.trace.PhoenixTracingEndToEndIT
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 86.73 sec - in org.apache.phoenix.trace.PhoenixTracingEndToEndIT
Running org.apache.phoenix.tx.FlappingTransactionIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.254 sec - in org.apache.phoenix.tx.FlappingTransactionIT
Running org.apache.phoenix.tx.TransactionIT
Tests run: 144, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 744.258 sec - in org.apache.phoenix.end2end.index.IndexIT
Running org.apache.phoenix.tx.TxCheckpointIT
Tests run: 19, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 71.854 sec - in org.apache.phoenix.tx.TransactionIT
Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 468.86 sec - in org.apache.phoenix.end2end.index.MutableIndexIT
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 114.32 sec - in org.apache.phoenix.tx.TxCheckpointIT

Results :

Tests run: 1632, Failures: 0, Errors: 0, Skipped: 1

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-te

Build failed in Jenkins: Phoenix | 4.x-HBase-0.98 #1367

2016-11-05 Thread Apache Jenkins Server
See 

Changes:

[jamestaylor] PHOENIX-3457 Disable parallel run of tests and increase memory

--
[...truncated 1077 lines...]
java.io.IOException: Timed out waiting for lock for row: \x00\x00TABLE6
	at org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:3804)
	at org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3766)
	at org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3830)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.acquireLock(MetaDataEndpointImpl.java:1568)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1710)
	at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
	at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6041)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3520)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3502)
	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31194)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
	at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
	at java.lang.Thread.run(Thread.java:745)


testImportOneLocalIndexTable(org.apache.phoenix.end2end.CsvBulkLoadToolIT)  Time elapsed: 2,406.615 sec  <<< ERROR!
org.apache.phoenix.exception.PhoenixIOException: callTimeout=120, callDuration=1202387: row '  TABLE5_IDX' on table 'SYSTEM.CATALOG' at region=SYSTEM.CATALOG,,1478344710729.3876bc396596d88a29737624dc0e99a2., hostname=penates.apache.org,34160,1478344703990, seqNum=1
	at org.apache.phoenix.end2end.CsvBulkLoadToolIT.testImportOneIndexTable(CsvBulkLoadToolIT.java:309)
	at org.apache.phoenix.end2end.CsvBulkLoadToolIT.testImportOneLocalIndexTable(CsvBulkLoadToolIT.java:297)
Caused by: java.net.SocketTimeoutException: callTimeout=120, callDuration=1202387: row '  TABLE5_IDX' on table 'SYSTEM.CATALOG' at region=SYSTEM.CATALOG,,1478344710729.3876bc396596d88a29737624dc0e99a2., hostname=penates.apache.org,34160,1478344703990, seqNum=1
Caused by: java.net.SocketTimeoutException: Call to penates.apache.org/67.195.81.186:34160 failed because java.net.SocketTimeoutException: 120 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/67.195.81.186:38820 remote=penates.apache.org/67.195.81.186:34160]
Caused by: java.net.SocketTimeoutException: 120 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/67.195.81.186:38820 remote=penates.apache.org/67.195.81.186:34160]

testImportOneLocalIndexTable(org.apache.phoenix.end2end.CsvBulkLoadToolIT)  Time elapsed: 2,406.616 sec  <<< ERROR!
org.apache.phoenix.exception.PhoenixIOException: callTimeout=120, callDuration=1222066: row '  TABLE5' on table 'SYSTEM.CATALOG' at region=SYSTEM.CATALOG,,1478344710729.3876bc396596d88a29737624dc0e99a2., hostname=penates.apache.org,34160,1478344703990, seqNum=1
Caused by: java.net.SocketTimeoutException: callTimeout=120, callDuration=1222066: row '  TABLE5' on table 'SYSTEM.CATALOG' at region=SYSTEM.CATALOG,,1478344710729.3876bc396596d88a29737624dc0e99a2., hostname=penates.apache.org,34160,1478344703990, seqNum=1
Caused by: java.io.IOException: 
java.io.IOException: Timed out waiting for lock for row: \x00\x00TABLE5
	at org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:3804)
	at org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3766)
	at org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3830)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.acquireLock(MetaDataEndpointImpl.java:1568)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1710)
	at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
	at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6041)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3520)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3502)
	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31194)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcS

phoenix git commit: PHOENIX-3457 Disable parallel run of tests and increase memory

2016-11-05 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 443263513 -> 7bcf5bac4


PHOENIX-3457 Disable parallel run of tests and increase memory


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/7bcf5bac
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/7bcf5bac
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/7bcf5bac

Branch: refs/heads/4.x-HBase-0.98
Commit: 7bcf5bac48b66ce777c1d9cd7b6e6ae7d8e2dbe5
Parents: 4432635
Author: James Taylor 
Authored: Sat Nov 5 07:27:17 2016 -0700
Committer: James Taylor 
Committed: Sat Nov 5 07:27:17 2016 -0700

--
 pom.xml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/7bcf5bac/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 5a2e542..7de9d82 100644
--- a/pom.xml
+++ b/pom.xml
@@ -117,8 +117,8 @@
 2.5.2
 
 
-6
-6
+4
+4
 
 
 UTF-8
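The OOM failures elsewhere in this thread and this fork-count reduction are two sides of one budget: each forked test JVM can grow to its full -Xmx, so worst-case heap demand is roughly forkCount times Xmx. A back-of-the-envelope sketch; the 2500m/3000m figures come from the surefire argLine change in this same thread, and the arithmetic is purely illustrative:

```java
public class ForkMemoryBudget {
    // Rough worst-case heap demand when Maven runs tests in forked JVMs:
    // every concurrent fork may grow to its configured -Xmx.
    public static long worstCaseHeapMb(int forkCount, long xmxMb) {
        return forkCount * xmxMb;
    }

    public static void main(String[] args) {
        // Before: 6 forks at -Xmx2500m -> 15000 MB of potential heap.
        System.out.println(worstCaseHeapMb(6, 2500));
        // After PHOENIX-3457: 4 forks at -Xmx3000m -> 12000 MB.
        System.out.println(worstCaseHeapMb(4, 3000));
    }
}
```

Dropping from 6 forks to 4 while raising per-fork heap trades parallelism for headroom, which is consistent with the commit message "Disable parallel run of tests and increase memory."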



phoenix git commit: PHOENIX-3457 Disable parallel run of tests and increase memory

2016-11-05 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 cd3f310b1 -> 2c8b70a4c


PHOENIX-3457 Disable parallel run of tests and increase memory


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/2c8b70a4
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/2c8b70a4
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/2c8b70a4

Branch: refs/heads/4.x-HBase-1.1
Commit: 2c8b70a4c9dea19761613cb40241b987f080d668
Parents: cd3f310
Author: James Taylor 
Authored: Sat Nov 5 07:26:46 2016 -0700
Committer: James Taylor 
Committed: Sat Nov 5 07:26:46 2016 -0700

--
 pom.xml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/2c8b70a4/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 6b1ec23..7c5a53d 100644
--- a/pom.xml
+++ b/pom.xml
@@ -117,8 +117,8 @@
 2.5.2
 
 
-6
-6
+4
+4
 
 
 UTF-8



phoenix git commit: PHOENIX-3457 Disable parallel run of tests and increase memory

2016-11-05 Thread jamestaylor
Repository: phoenix
Updated Branches:
  refs/heads/master 10c81438b -> f0295e5ca


PHOENIX-3457 Disable parallel run of tests and increase memory


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/f0295e5c
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/f0295e5c
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/f0295e5c

Branch: refs/heads/master
Commit: f0295e5cab015515b3e7a687949f2e332e3c752c
Parents: 10c8143
Author: James Taylor 
Authored: Sat Nov 5 07:25:41 2016 -0700
Committer: James Taylor 
Committed: Sat Nov 5 07:25:41 2016 -0700

--
 pom.xml | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/f0295e5c/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 2ec1460..02dc865 100644
--- a/pom.xml
+++ b/pom.xml
@@ -117,8 +117,8 @@
 2.5.2
 
 
-6
-6
+4
+4
 
 
 UTF-8



Build failed in Jenkins: Phoenix | 4.x-HBase-0.98 #1366

2016-11-05 Thread Apache Jenkins Server
See 

Changes:

[jamestaylor] PHOENIX-3454 ON DUPLICATE KEY construct doesn't work correctly when

[jamestaylor] PHOENIX-3449 Ignore hanging IndexExtendedIT tests until they can be

[jamestaylor] PHOENIX-3449 Ignore hanging IndexExtendedIT tests until they can be

[jamestaylor] PHOENIX-3456 Use unique table names for MutableIndexFailureIT

--
[...truncated 1206 lines...]
java.io.IOException: Timed out waiting for lock for row: \x00\x00TABLE6
	at org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:3804)
	at org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3766)
	at org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3830)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.acquireLock(MetaDataEndpointImpl.java:1568)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1710)
	at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
	at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6041)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3520)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3502)
	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31194)
	at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2149)
	at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
	at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
	at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
	at java.lang.Thread.run(Thread.java:745)


testImportOneLocalIndexTable(org.apache.phoenix.end2end.CsvBulkLoadToolIT)  Time elapsed: 2,406.933 sec  <<< ERROR!
org.apache.phoenix.exception.PhoenixIOException: callTimeout=120, callDuration=1202380: row '  TABLE5_IDX' on table 'SYSTEM.CATALOG' at region=SYSTEM.CATALOG,,1478329780172.25dfbd92164f9aad7b9bcf313800a78c., hostname=priapus.apache.org,55123,1478329770910, seqNum=1
	at org.apache.phoenix.end2end.CsvBulkLoadToolIT.testImportOneIndexTable(CsvBulkLoadToolIT.java:309)
	at org.apache.phoenix.end2end.CsvBulkLoadToolIT.testImportOneLocalIndexTable(CsvBulkLoadToolIT.java:297)
Caused by: java.net.SocketTimeoutException: callTimeout=120, callDuration=1202380: row '  TABLE5_IDX' on table 'SYSTEM.CATALOG' at region=SYSTEM.CATALOG,,1478329780172.25dfbd92164f9aad7b9bcf313800a78c., hostname=priapus.apache.org,55123,1478329770910, seqNum=1
Caused by: java.net.SocketTimeoutException: Call to priapus.apache.org/67.195.81.188:55123 failed because java.net.SocketTimeoutException: 120 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/67.195.81.188:33671 remote=priapus.apache.org/67.195.81.188:55123]
Caused by: java.net.SocketTimeoutException: 120 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/67.195.81.188:33671 remote=priapus.apache.org/67.195.81.188:55123]

testImportOneLocalIndexTable(org.apache.phoenix.end2end.CsvBulkLoadToolIT)  Time elapsed: 2,406.933 sec  <<< ERROR!
org.apache.phoenix.exception.PhoenixIOException: callTimeout=120, callDuration=115: row '  TABLE5' on table 'SYSTEM.CATALOG' at region=SYSTEM.CATALOG,,1478329780172.25dfbd92164f9aad7b9bcf313800a78c., hostname=priapus.apache.org,55123,1478329770910, seqNum=1
Caused by: java.net.SocketTimeoutException: callTimeout=120, callDuration=115: row '  TABLE5' on table 'SYSTEM.CATALOG' at region=SYSTEM.CATALOG,,1478329780172.25dfbd92164f9aad7b9bcf313800a78c., hostname=priapus.apache.org,55123,1478329770910, seqNum=1
Caused by: java.io.IOException: 
java.io.IOException: Timed out waiting for lock for row: \x00\x00TABLE5
	at org.apache.hadoop.hbase.regionserver.HRegion.getRowLockInternal(HRegion.java:3804)
	at org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3766)
	at org.apache.hadoop.hbase.regionserver.HRegion.getRowLock(HRegion.java:3830)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.acquireLock(MetaDataEndpointImpl.java:1568)
	at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:1710)
	at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16297)
	at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6041)
	at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3520)
	at org.apache.hadoop.hbase

Build failed in Jenkins: Phoenix | Master #1475

2016-11-05 Thread Apache Jenkins Server
See 

Changes:

[jamestaylor] PHOENIX-3456 Use unique table names for MutableIndexFailureIT

[jamestaylor] PHOENIX-3457 Disable parallel run of tests and increase memory

--
[...truncated 676 lines...]
Running org.apache.phoenix.end2end.RegexpSplitFunctionIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.027 sec - in org.apache.phoenix.end2end.QueryWithOffsetIT
Running org.apache.phoenix.end2end.RegexpSubstrFunctionIT
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.452 sec - in org.apache.phoenix.end2end.QueryMoreIT
Running org.apache.phoenix.end2end.ReverseFunctionIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.557 sec - in org.apache.phoenix.end2end.RegexpSubstrFunctionIT
Running org.apache.phoenix.end2end.ReverseScanIT
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.379 sec - in org.apache.phoenix.end2end.ReverseFunctionIT
Running org.apache.phoenix.end2end.RoundFloorCeilFuncIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.504 sec - in org.apache.phoenix.end2end.ReverseScanIT
Running org.apache.phoenix.end2end.SerialIteratorsIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.304 sec - in org.apache.phoenix.end2end.SerialIteratorsIT
Running org.apache.phoenix.end2end.ServerExceptionIT
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 16.754 sec - in org.apache.phoenix.end2end.RegexpSplitFunctionIT
Running org.apache.phoenix.end2end.SignFunctionEnd2EndIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.268 sec - in org.apache.phoenix.end2end.ServerExceptionIT
Running org.apache.phoenix.end2end.SkipScanAfterManualSplitIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.32 sec - in org.apache.phoenix.end2end.SignFunctionEnd2EndIT
Running org.apache.phoenix.end2end.SkipScanQueryIT
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.112 sec - in org.apache.phoenix.end2end.SkipScanAfterManualSplitIT
Running org.apache.phoenix.end2end.SortMergeJoinIT
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 20.469 sec - in org.apache.phoenix.end2end.SkipScanQueryIT
Running org.apache.phoenix.end2end.SortMergeJoinMoreIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.377 sec - in org.apache.phoenix.end2end.SortMergeJoinMoreIT
Running org.apache.phoenix.end2end.SortOrderIT
Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 55.874 sec - in org.apache.phoenix.end2end.RoundFloorCeilFuncIT
Running org.apache.phoenix.end2end.SpooledTmpFileDeleteIT
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.659 sec - in org.apache.phoenix.end2end.SpooledTmpFileDeleteIT
Running org.apache.phoenix.end2end.SqrtFunctionEnd2EndIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.264 sec - in org.apache.phoenix.end2end.SqrtFunctionEnd2EndIT
Running org.apache.phoenix.end2end.StatementHintsIT
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.294 sec - in org.apache.phoenix.end2end.StatementHintsIT
Running org.apache.phoenix.end2end.StddevIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.3 sec - in org.apache.phoenix.end2end.StddevIT
Running org.apache.phoenix.end2end.StoreNullsIT
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.58 sec - in org.apache.phoenix.end2end.StoreNullsIT
Running org.apache.phoenix.end2end.StringIT
Tests run: 45, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 108.909 sec - in org.apache.phoenix.end2end.SortOrderIT
Running org.apache.phoenix.end2end.StringToArrayFunctionIT
Java HotSpot(TM) 64-Bit Server VM warning: INFO: os::commit_memory(0x0007a3ad9000, 199868416, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 199868416 bytes for committing reserved memory.
# An error report file with more information is saved as:
# 

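A note on reading the `errno=12` warning above: this is the kernel refusing to commit already-reserved heap pages, not a Java-level OutOfMemoryError, which is why the PHOENIX-3457 change pairs a heap adjustment with disabling parallel test runs (fewer concurrent forked JVMs means less total memory demand on the build host). As a minimal sketch, the failed allocation converts to MiB like so (the numbers are taken from the log; the diagnostic commands in the comments are illustrative, Linux-specific suggestions):

```shell
# Size of the commit the OS rejected, copied from the os::commit_memory line.
bytes=199868416
# Integer MiB: divide by 1024 twice.
echo "$(( bytes / 1024 / 1024 )) MiB"   # → 190 MiB
# To diagnose on the build host one could inspect, e.g.:
#   free -m                              (available physical memory + swap)
#   cat /proc/sys/vm/overcommit_memory   (kernel overcommit policy)
```

So each forked test JVM that grows toward its -Xmx can fail in ~190 MiB increments once the host runs out of committable memory, even though the heap was "reserved" successfully at startup.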
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.189 sec - in org.apache.phoenix.end2end.StringIT
Running org.apache.phoenix.end2end.SubqueryIT
Tests run: 22, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 45.354 sec - in org.apache.phoenix.end2end.StringToArrayFunctionIT

Results :

Tests run: 817, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] --- maven-failsafe-plugin:2.19.1:integration-test (ClientManagedTimeTests) @ phoenix-core ---

---
 T E S T S
---
Running org.apache.phoenix.end2end.ArrayIT
Running org.apache.phoenix.end2end.ColumnProjectionOptimizationIT
Running org.apache.phoenix.end2end.CaseStatementIT
Running org