Apache-Phoenix | 4.x-HBase-1.3 | Build Successful

2018-11-27 Thread Apache Jenkins Server
4.x-HBase-1.3 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.x-HBase-1.3

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/lastCompletedBuild/testReport/

Changes
[tdsilva] PHOENIX-4765 Add client and server side config property to enable



Build times for the last couple of runs. Latest build time is the rightmost. | Legend: blue = normal, red = test failure, gray = timeout


Jenkins build is back to normal : Phoenix-4.x-HBase-1.3 #277

2018-11-27 Thread Apache Jenkins Server
See 



Build failed in Jenkins: Phoenix | Master #2250

2018-11-27 Thread Apache Jenkins Server
See 


Changes:

[tdsilva] PHOENIX-4765 Add client and server side config property to enable

--
[...truncated 136.49 KB...]
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 75.741 s - in org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Running org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.003 s - in org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Running org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.45 s - in org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Running org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 350.436 s - in org.apache.phoenix.end2end.join.SortMergeJoinNoIndexIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.267 s - in org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 61.498 s - in org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Running org.apache.phoenix.util.IndexScrutinyIT
[INFO] Running org.apache.phoenix.tx.TransactionIT
[INFO] Running org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.29 s - in org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 406.68 s - in org.apache.phoenix.end2end.join.SubqueryIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 910.482 s - in org.apache.phoenix.end2end.join.HashJoinLocalIndexIT
[INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 122.237 s - in org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 868.483 s - in org.apache.phoenix.end2end.join.SortMergeJoinLocalIndexIT
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 376.649 s - in org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 650.838 s - in org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Failures: 
[ERROR]   ConcurrentMutationsIT.testLockUntilMVCCAdvanced:385 Expected data table row count to match expected:<1> but was:<0>
[ERROR]   ConcurrentMutationsIT.testRowLockDuringPreBatchMutateWhenIndexed:329 Expected data table row count to match expected:<1> but was:<0>
[ERROR] Errors: 
[ERROR]   MutableIndexSplitForwardScanIT.testSplitDuringIndexScan:30->MutableIndexSplitIT.testSplitDuringIndexScan:87->MutableIndexSplitIT.splitDuringScan:152 » StaleRegionBoundaryCache
[ERROR]   MutableIndexSplitForwardScanIT.testSplitDuringIndexScan:30->MutableIndexSplitIT.testSplitDuringIndexScan:87->MutableIndexSplitIT.splitDuringScan:152 » StaleRegionBoundaryCache
[ERROR]   MutableIndexSplitReverseScanIT.testSplitDuringIndexScan:30->MutableIndexSplitIT.testSplitDuringIndexScan:87->MutableIndexSplitIT.splitDuringScan:152 » StaleRegionBoundaryCache
[ERROR]   MutableIndexSplitReverseScanIT.testSplitDuringIndexScan:30->MutableIndexSplitIT.testSplitDuringIndexScan:87->MutableIndexSplitIT.splitDuringScan:152 » StaleRegionBoundaryCache
[INFO] 
[ERROR] Tests run: 3466, Failures: 2, Errors: 4, Skipped: 10
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test (HBaseManagedTimeTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test (NeedTheirOwnClusterTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[WARNING] Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.001 s - in org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running org.apache.phoenix.end2end.ChangePermissionsIT
[INFO] Running org.apache.phoenix.end2end.ConnectionUtilIT
[INFO] Running org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT
[INFO] Running org.apache.phoenix.end2end.ColumnEncodedMutableTxStatsCollectorIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.953 s - in org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
[INFO] Running 

phoenix git commit: PHOENIX-4765 Add client and server side config property to enable rollback of splittable System Catalog if required (addendum)

2018-11-27 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/master 0a84ad6c1 -> 70d5cd9e3


PHOENIX-4765 Add client and server side config property to enable rollback of splittable System Catalog if required (addendum)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/70d5cd9e
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/70d5cd9e
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/70d5cd9e

Branch: refs/heads/master
Commit: 70d5cd9e348dcc31eeea93cc9452527666d9b6d2
Parents: 0a84ad6
Author: Thomas D'Silva 
Authored: Tue Nov 27 13:46:19 2018 -0800
Committer: Thomas D'Silva 
Committed: Tue Nov 27 13:47:23 2018 -0800

--
 .../java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java | 2 ++
 .../src/main/java/org/apache/phoenix/query/QueryServices.java | 3 +++
 2 files changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/70d5cd9e/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 5d2fb54..8790819 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -2692,6 +2692,8 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements RegionCopr
 
     private MetaDataResponse processRemoteRegionMutations(byte[] systemTableName,
             List<Mutation> remoteMutations, MetaDataProtos.MutationCode mutationCode) throws IOException {
+        if (remoteMutations.isEmpty())
+            return null;
         MetaDataResponse.Builder builder = MetaDataResponse.newBuilder();
         try (Table hTable =
                 ServerUtil.getHTableForCoprocessorScan(env,
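The two added lines above are a guard clause: when there are no remote mutations, the method returns before any server-side table handle is opened. A self-contained sketch of the same early-return pattern (the names and types here are illustrative stand-ins, not Phoenix's real ones):

```java
import java.util.Collections;
import java.util.List;

public class EarlyReturnGuard {
    // Mirrors the shape of the patch above: skip the expensive work
    // (opening a table handle, issuing RPCs) when the list is empty.
    static String process(List<String> remoteMutations) {
        if (remoteMutations.isEmpty())
            return null; // nothing to send, so no resources are acquired
        return "sent " + remoteMutations.size() + " mutations";
    }

    public static void main(String[] args) {
        System.out.println(process(Collections.emptyList())); // null
        System.out.println(process(List.of("m1", "m2")));     // sent 2 mutations
    }
}
```

The guard keeps the empty case cheap and avoids entering the try-with-resources block at all.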

http://git-wip-us.apache.org/repos/asf/phoenix/blob/70d5cd9e/phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
index 1c17da9..8e17749 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
@@ -348,6 +348,9 @@ public interface QueryServices extends SQLCloseable {
     // feature
     //
     // By default this config is false meaning that rolling back the upgrade is not possible
+    // If this config is true and you want to rollback the upgrade be sure to run the sql commands in
+    // UpgradeUtil.addParentToChildLink which will recreate the PARENT->CHILD links in SYSTEM.CATALOG. This is needed
+    // as from 4.15 onwards the PARENT->CHILD links are stored in a separate SYSTEM.CHILD_LINK table.
     public static final String ALLOW_SPLITTABLE_SYSTEM_CATALOG_ROLLBACK =
             "phoenix.allow.system.catalog.rollback";
 



phoenix git commit: PHOENIX-4765 Add client and server side config property to enable rollback of splittable System Catalog if required (addendum)

2018-11-27 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.3 bb67a6534 -> 1813af615


PHOENIX-4765 Add client and server side config property to enable rollback of splittable System Catalog if required (addendum)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/1813af61
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/1813af61
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/1813af61

Branch: refs/heads/4.x-HBase-1.3
Commit: 1813af61598ba2dba3cbada7272ced16836ff77d
Parents: bb67a65
Author: Thomas D'Silva 
Authored: Tue Nov 27 13:46:19 2018 -0800
Committer: Thomas D'Silva 
Committed: Tue Nov 27 13:47:11 2018 -0800

--
 .../java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java | 2 ++
 .../src/main/java/org/apache/phoenix/query/QueryServices.java | 3 +++
 2 files changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/1813af61/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 14caca3..d138132 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -2678,6 +2678,8 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 
     private MetaDataResponse processRemoteRegionMutations(byte[] systemTableName,
             List<Mutation> remoteMutations, MetaDataProtos.MutationCode mutationCode) throws IOException {
+        if (remoteMutations.isEmpty())
+            return null;
         MetaDataResponse.Builder builder = MetaDataResponse.newBuilder();
         try (Table hTable =
                 env.getTable(

http://git-wip-us.apache.org/repos/asf/phoenix/blob/1813af61/phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
index 728f3f8..becd116 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
@@ -345,6 +345,9 @@ public interface QueryServices extends SQLCloseable {
     // feature
     //
     // By default this config is false meaning that rolling back the upgrade is not possible
+    // If this config is true and you want to rollback the upgrade be sure to run the sql commands in
+    // UpgradeUtil.addParentToChildLink which will recreate the PARENT->CHILD links in SYSTEM.CATALOG. This is needed
+    // as from 4.15 onwards the PARENT->CHILD links are stored in a separate SYSTEM.CHILD_LINK table.
     public static final String ALLOW_SPLITTABLE_SYSTEM_CATALOG_ROLLBACK =
             "phoenix.allow.system.catalog.rollback";
 



phoenix git commit: PHOENIX-4765 Add client and server side config property to enable rollback of splittable System Catalog if required (addendum)

2018-11-27 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.4 3ca3552d7 -> d0a115ce0


PHOENIX-4765 Add client and server side config property to enable rollback of splittable System Catalog if required (addendum)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/d0a115ce
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/d0a115ce
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/d0a115ce

Branch: refs/heads/4.x-HBase-1.4
Commit: d0a115ce05909180e99515d37ecf7689a8505611
Parents: 3ca3552
Author: Thomas D'Silva 
Authored: Tue Nov 27 13:46:19 2018 -0800
Committer: Thomas D'Silva 
Committed: Tue Nov 27 13:47:17 2018 -0800

--
 .../java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java | 2 ++
 .../src/main/java/org/apache/phoenix/query/QueryServices.java | 3 +++
 2 files changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/d0a115ce/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 14caca3..d138132 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -2678,6 +2678,8 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 
     private MetaDataResponse processRemoteRegionMutations(byte[] systemTableName,
             List<Mutation> remoteMutations, MetaDataProtos.MutationCode mutationCode) throws IOException {
+        if (remoteMutations.isEmpty())
+            return null;
         MetaDataResponse.Builder builder = MetaDataResponse.newBuilder();
         try (Table hTable =
                 env.getTable(

http://git-wip-us.apache.org/repos/asf/phoenix/blob/d0a115ce/phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
index 1c17da9..8e17749 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
@@ -348,6 +348,9 @@ public interface QueryServices extends SQLCloseable {
     // feature
     //
     // By default this config is false meaning that rolling back the upgrade is not possible
+    // If this config is true and you want to rollback the upgrade be sure to run the sql commands in
+    // UpgradeUtil.addParentToChildLink which will recreate the PARENT->CHILD links in SYSTEM.CATALOG. This is needed
+    // as from 4.15 onwards the PARENT->CHILD links are stored in a separate SYSTEM.CHILD_LINK table.
     public static final String ALLOW_SPLITTABLE_SYSTEM_CATALOG_ROLLBACK =
             "phoenix.allow.system.catalog.rollback";
 



phoenix git commit: PHOENIX-4765 Add client and server side config property to enable rollback of splittable System Catalog if required (addendum)

2018-11-27 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 c86b3e42e -> 93e284647


PHOENIX-4765 Add client and server side config property to enable rollback of splittable System Catalog if required (addendum)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/93e28464
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/93e28464
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/93e28464

Branch: refs/heads/4.x-HBase-1.2
Commit: 93e28464780a18ac26793c71188b5ebcbaee2011
Parents: c86b3e4
Author: Thomas D'Silva 
Authored: Tue Nov 27 13:46:19 2018 -0800
Committer: Thomas D'Silva 
Committed: Tue Nov 27 13:46:19 2018 -0800

--
 .../java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java | 2 ++
 .../src/main/java/org/apache/phoenix/query/QueryServices.java | 3 +++
 2 files changed, 5 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/93e28464/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 14caca3..d138132 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -2678,6 +2678,8 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 
     private MetaDataResponse processRemoteRegionMutations(byte[] systemTableName,
             List<Mutation> remoteMutations, MetaDataProtos.MutationCode mutationCode) throws IOException {
+        if (remoteMutations.isEmpty())
+            return null;
         MetaDataResponse.Builder builder = MetaDataResponse.newBuilder();
         try (Table hTable =
                 env.getTable(

http://git-wip-us.apache.org/repos/asf/phoenix/blob/93e28464/phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
index 728f3f8..becd116 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/QueryServices.java
@@ -345,6 +345,9 @@ public interface QueryServices extends SQLCloseable {
     // feature
     //
     // By default this config is false meaning that rolling back the upgrade is not possible
+    // If this config is true and you want to rollback the upgrade be sure to run the sql commands in
+    // UpgradeUtil.addParentToChildLink which will recreate the PARENT->CHILD links in SYSTEM.CATALOG. This is needed
+    // as from 4.15 onwards the PARENT->CHILD links are stored in a separate SYSTEM.CHILD_LINK table.
     public static final String ALLOW_SPLITTABLE_SYSTEM_CATALOG_ROLLBACK =
             "phoenix.allow.system.catalog.rollback";
 



[23/28] phoenix git commit: PHOENIX-5028 Delay acquisition of port and increase Tephra test discovery timeouts

2018-11-27 Thread pboado
PHOENIX-5028 Delay acquisition of port and increase Tephra test discovery timeouts


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/1a09ebf9
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/1a09ebf9
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/1a09ebf9

Branch: refs/heads/4.x-cdh5.15
Commit: 1a09ebf9d57c0dd50947cc33f1ec8415b54e6e9b
Parents: b20b21d
Author: James Taylor 
Authored: Sat Nov 17 23:13:59 2018 +
Committer: Pedro Boado 
Committed: Tue Nov 27 15:12:10 2018 +

--
 .../end2end/ConnectionQueryServicesTestImpl.java   |  4 +++-
 .../transaction/OmidTransactionProvider.java   |  2 +-
 .../transaction/PhoenixTransactionProvider.java|  2 +-
 .../transaction/TephraTransactionProvider.java | 17 ++---
 .../phoenix/query/QueryServicesTestImpl.java   |  3 ---
 5 files changed, 15 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/1a09ebf9/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConnectionQueryServicesTestImpl.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConnectionQueryServicesTestImpl.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConnectionQueryServicesTestImpl.java
index 6ebaa65..969e0f4 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConnectionQueryServicesTestImpl.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConnectionQueryServicesTestImpl.java
@@ -35,6 +35,7 @@ import org.apache.phoenix.transaction.PhoenixTransactionService;
 import org.apache.phoenix.transaction.TransactionFactory;
 import org.apache.phoenix.transaction.TransactionFactory.Provider;
 import org.apache.phoenix.util.SQLCloseables;
+import org.apache.phoenix.util.TestUtil;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
@@ -102,7 +103,8 @@ public class ConnectionQueryServicesTestImpl extends ConnectionQueryServicesImpl
     public synchronized PhoenixTransactionClient initTransactionClient(Provider provider) throws SQLException {
         PhoenixTransactionService txService = txServices[provider.ordinal()];
         if (txService == null) {
-            txService = txServices[provider.ordinal()] = provider.getTransactionProvider().getTransactionService(config, connectionInfo);
+            int port = TestUtil.getRandomPort();
+            txService = txServices[provider.ordinal()] = provider.getTransactionProvider().getTransactionService(config, connectionInfo, port);
         }
         return super.initTransactionClient(provider);
     }
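The patch above routes the transaction service's bind port through `TestUtil.getRandomPort()`, whose body is not shown in this diff. A common way to implement such a helper (a sketch, not necessarily Phoenix's actual implementation) is to bind a socket to port 0 so the OS assigns a free ephemeral port:

```java
import java.io.IOException;
import java.net.ServerSocket;

public class RandomPortExample {
    // Illustrative getRandomPort-style helper: binding to port 0 asks the OS
    // for any free ephemeral port, and getLocalPort reports which one it chose.
    static int getRandomPort() throws IOException {
        try (ServerSocket socket = new ServerSocket(0)) {
            return socket.getLocalPort();
        }
    }

    public static void main(String[] args) throws IOException {
        int port = getRandomPort();
        // Valid TCP port range for an OS-assigned port.
        System.out.println(port > 0 && port <= 65535);
    }
}
```

Note the small race inherent to this approach: the socket is closed before the caller rebinds the port, so another process could grab it in between; acquiring the port as late as possible (as this commit's title suggests) narrows that window.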

http://git-wip-us.apache.org/repos/asf/phoenix/blob/1a09ebf9/phoenix-core/src/main/java/org/apache/phoenix/transaction/OmidTransactionProvider.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/transaction/OmidTransactionProvider.java b/phoenix-core/src/main/java/org/apache/phoenix/transaction/OmidTransactionProvider.java
index c53215c..bace2bc 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/transaction/OmidTransactionProvider.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/transaction/OmidTransactionProvider.java
@@ -72,7 +72,7 @@ public class OmidTransactionProvider implements PhoenixTransactionProvider {
     }
 
     @Override
-    public PhoenixTransactionService getTransactionService(Configuration config, ConnectionInfo connectionInfo) throws SQLException {
+    public PhoenixTransactionService getTransactionService(Configuration config, ConnectionInfo connectionInfo, int port) throws SQLException {
         return new OmidTransactionService();
     }
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/1a09ebf9/phoenix-core/src/main/java/org/apache/phoenix/transaction/PhoenixTransactionProvider.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/transaction/PhoenixTransactionProvider.java b/phoenix-core/src/main/java/org/apache/phoenix/transaction/PhoenixTransactionProvider.java
index b7f660e..3af554b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/transaction/PhoenixTransactionProvider.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/transaction/PhoenixTransactionProvider.java
@@ -50,7 +50,7 @@ public interface PhoenixTransactionProvider {
     public PhoenixTransactionContext getTransactionContext(PhoenixConnection connection) throws SQLException;
 
     public PhoenixTransactionClient getTransactionClient(Configuration config, ConnectionInfo connectionInfo) throws SQLException;
-    public PhoenixTransactionService getTransactionService(Configuration config, ConnectionInfo connectionInfo) throws 

[18/28] phoenix git commit: PHOENIX-5000 Make SecureUserConnectionsTest as Integration test

2018-11-27 Thread pboado
PHOENIX-5000 Make SecureUserConnectionsTest as Integration test


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/60c19250
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/60c19250
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/60c19250

Branch: refs/heads/4.x-cdh5.15
Commit: 60c19250116d378a5f6f725d9dde9a8284d86ef5
Parents: 1c65619
Author: Karan Mehta 
Authored: Tue Oct 30 19:40:00 2018 +
Committer: Pedro Boado 
Committed: Tue Nov 27 15:11:56 2018 +

--
 .../phoenix/jdbc/SecureUserConnectionsIT.java   | 459 +++
 .../phoenix/jdbc/SecureUserConnectionsTest.java | 459 ---
 2 files changed, 459 insertions(+), 459 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/60c19250/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java b/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java
new file mode 100644
index 000..eaf981b
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java
@@ -0,0 +1,459 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.jdbc;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.File;
+import java.io.IOException;
+import java.lang.reflect.Field;
+import java.security.PrivilegedExceptionAction;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Properties;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.hbase.security.User;
+import org.apache.hadoop.minikdc.MiniKdc;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.security.authentication.util.KerberosName;
+import org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.ConnectionInfo;
+import org.apache.phoenix.query.ConfigurationFactory;
+import org.apache.phoenix.util.InstanceResolver;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Tests ConnectionQueryServices caching when Kerberos authentication is enabled. It's not
+ * trivial to directly test this, so we exploit the knowledge that the caching is driven by
+ * a ConcurrentHashMap. We can use a HashSet to determine when instances of ConnectionInfo
+ * collide and when they do not.
+ */
+public class SecureUserConnectionsIT {
+private static final Log LOG = LogFactory.getLog(SecureUserConnectionsIT.class);
+private static final int KDC_START_ATTEMPTS = 10;
+
+private static final File TEMP_DIR = new File(getClassTempDir());
+private static final File KEYTAB_DIR = new File(TEMP_DIR, "keytabs");
+private static final File KDC_DIR = new File(TEMP_DIR, "kdc");
+private static final List<File> USER_KEYTAB_FILES = new ArrayList<>();
+private static final List<File> SERVICE_KEYTAB_FILES = new ArrayList<>();
+private static final int NUM_USERS = 3;
+private static final Properties EMPTY_PROPERTIES = new Properties();
+private static final String BASE_URL = PhoenixRuntime.JDBC_PROTOCOL + ":localhost:2181";
+
+private static MiniKdc KDC;
+
+@BeforeClass
+public static void setupKdc() throws Exception {
+ensureIsEmptyDirectory(KDC_DIR);
+ensureIsEmptyDirectory(KEYTAB_DIR);
+// Create and start the KDC. MiniKDC appears to have a race condition in how it does
+// port allocation (with apache-ds). See PHOENIX-3287.
+boolean started = false;
+for 
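The class Javadoc above describes the test's core trick: when two `ConnectionInfo` instances are equal they map to the same `ConcurrentHashMap` cache entry, which a `HashSet` can detect. The idea can be sketched independently of Kerberos (the `ConnInfo` class below is an illustrative stand-in, not Phoenix's real `ConnectionInfo`):

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

public class CacheKeyCollisionSketch {
    // Stand-in for ConnectionInfo: two equal instances would share one cache
    // entry, so Set.add returns false for the second one (a "collision").
    static final class ConnInfo {
        final String principal;
        final String keytab;
        ConnInfo(String principal, String keytab) {
            this.principal = principal;
            this.keytab = keytab;
        }
        @Override public boolean equals(Object o) {
            if (!(o instanceof ConnInfo)) return false;
            ConnInfo other = (ConnInfo) o;
            return Objects.equals(principal, other.principal)
                    && Objects.equals(keytab, other.keytab);
        }
        @Override public int hashCode() {
            return Objects.hash(principal, keytab);
        }
    }

    public static void main(String[] args) {
        Set<ConnInfo> seen = new HashSet<>();
        boolean first = seen.add(new ConnInfo("user1@EXAMPLE.COM", "u1.keytab"));
        boolean second = seen.add(new ConnInfo("user1@EXAMPLE.COM", "u1.keytab"));
        boolean third = seen.add(new ConnInfo("user2@EXAMPLE.COM", "u2.keytab"));
        // first and third are distinct cache keys; second collides with first.
        System.out.println(first && !second && third); // true
    }
}
```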

[19/28] phoenix git commit: PHOENIX-5013 Increase timeout for Tephra discovery service

2018-11-27 Thread pboado
PHOENIX-5013 Increase timeout for Tephra discovery service


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/b28a241c
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/b28a241c
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/b28a241c

Branch: refs/heads/4.x-cdh5.15
Commit: b28a241c8b38414ee4cba6a3fc1a74a5cf8cdd39
Parents: 60c1925
Author: Thomas D'Silva 
Authored: Thu Nov 15 20:33:26 2018 +
Committer: Pedro Boado 
Committed: Tue Nov 27 15:11:59 2018 +

--
 .../test/java/org/apache/phoenix/query/QueryServicesTestImpl.java   | 1 +
 1 file changed, 1 insertion(+)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/b28a241c/phoenix-core/src/test/java/org/apache/phoenix/query/QueryServicesTestImpl.java
--
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/query/QueryServicesTestImpl.java b/phoenix-core/src/test/java/org/apache/phoenix/query/QueryServicesTestImpl.java
index 49fb8e8..eae951a 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/query/QueryServicesTestImpl.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/query/QueryServicesTestImpl.java
@@ -130,6 +130,7 @@ public final class QueryServicesTestImpl extends BaseQueryServicesImpl {
             .set(TxConstants.Service.CFG_DATA_TX_CLIENT_RETRY_STRATEGY, "n-times")
             .set(TxConstants.Service.CFG_DATA_TX_CLIENT_ATTEMPTS, 1)
             .set(TxConstants.Service.CFG_DATA_TX_BIND_PORT, TestUtil.getRandomPort())
+            .set(TxConstants.Service.CFG_DATA_TX_CLIENT_DISCOVERY_TIMEOUT_SEC, 60)
             .set(TxConstants.Manager.CFG_TX_SNAPSHOT_DIR, Files.createTempDir().getAbsolutePath())
             .set(TxConstants.Manager.CFG_TX_TIMEOUT, DEFAULT_TXN_TIMEOUT_SECONDS)
             .set(TxConstants.Manager.CFG_TX_SNAPSHOT_INTERVAL, 5L)



[14/28] phoenix git commit: PHOENIX-5017 Fix testRecreateViewWhoseParentWasDropped test flapper

2018-11-27 Thread pboado
PHOENIX-5017 Fix testRecreateViewWhoseParentWasDropped test flapper


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/7afa9549
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/7afa9549
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/7afa9549

Branch: refs/heads/4.x-cdh5.15
Commit: 7afa9549df2e5f14f963a5c61d0cce006fb4a014
Parents: 21c3a7c
Author: Thomas D'Silva 
Authored: Tue Nov 13 23:42:19 2018 +
Committer: Pedro Boado 
Committed: Tue Nov 27 15:11:45 2018 +

--
 .../phoenix/coprocessor/MetaDataEndpointImpl.java   | 12 
 1 file changed, 8 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/7afa9549/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index d899e32..5562340 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -2035,8 +2035,7 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
             }
         }
 
-        // check if the table was dropped, but had child views that were have not yet
-        // been cleaned up by compaction
+        // check if the table was dropped, but had child views that were have not yet been cleaned up
         if (!Bytes.toString(schemaName).equals(QueryConstants.SYSTEM_SCHEMA_NAME)) {
             dropChildViews(env, tenantIdBytes, schemaName, tableName);
         }
@@ -2434,8 +2433,13 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
                 MetaDataClient client = new MetaDataClient(connection);
                 org.apache.phoenix.parse.TableName viewTableName = org.apache.phoenix.parse.TableName
                         .create(Bytes.toString(viewSchemaName), Bytes.toString(viewName));
-                client.dropTable(
-                        new DropTableStatement(viewTableName, PTableType.VIEW, false, true, true));
+                try {
+                    client.dropTable(
+                            new DropTableStatement(viewTableName, PTableType.VIEW, false, true, true));
+                }
+                catch (TableNotFoundException e) {
+                    logger.info("Ignoring view "+viewTableName+" as it has already been dropped");
+                }
             }
         }
     }
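The fix above makes the view cleanup idempotent: if the view was already dropped (for example by a concurrent cleanup), the `TableNotFoundException` is logged and swallowed instead of failing the whole operation. The pattern can be sketched with plain collections (illustrative names only; this is not Phoenix's real catalog code):

```java
import java.util.HashSet;
import java.util.Set;

public class IdempotentDropSketch {
    // A toy "catalog" of existing views.
    static final Set<String> catalog = new HashSet<>();

    // Mirrors the patch: a drop of an already-absent view is the desired end
    // state, so it is logged and ignored rather than propagated as an error.
    static void dropView(String name) {
        if (!catalog.remove(name)) {
            System.out.println("Ignoring view " + name + " as it has already been dropped");
        }
    }

    public static void main(String[] args) {
        catalog.add("S.V1");
        dropView("S.V1"); // removes the view
        dropView("S.V1"); // already gone; ignored, no exception
        System.out.println(catalog.isEmpty()); // true
    }
}
```

The same reasoning is why the retried drop here catches only `TableNotFoundException`: any other failure still surfaces.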



[25/28] phoenix git commit: PHOENIX-5000 Make SecureUserConnectionsTest as Integration test (Addendum)

2018-11-27 Thread pboado
PHOENIX-5000 Make SecureUserConnectionsTest as Integration test (Addendum)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/bb17957c
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/bb17957c
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/bb17957c

Branch: refs/heads/4.x-cdh5.15
Commit: bb17957ca2938093dd94bed6052cde92e28d176a
Parents: d2e4a73
Author: Karan Mehta 
Authored: Mon Nov 19 22:48:32 2018 +
Committer: Pedro Boado 
Committed: Tue Nov 27 15:12:15 2018 +

--
 .../it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java  | 3 +++
 1 file changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/bb17957c/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java b/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java
index eaf981b..1ab54d2 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java
@@ -39,6 +39,7 @@ import org.apache.hadoop.hbase.security.User;
 import org.apache.hadoop.minikdc.MiniKdc;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.authentication.util.KerberosName;
+import org.apache.phoenix.end2end.NeedsOwnMiniClusterTest;
 import org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.ConnectionInfo;
 import org.apache.phoenix.query.ConfigurationFactory;
 import org.apache.phoenix.util.InstanceResolver;
@@ -47,6 +48,7 @@ import org.apache.phoenix.util.ReadOnlyProps;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 
 /**
 * Tests ConnectionQueryServices caching when Kerberos authentication is enabled. It's not
@@ -54,6 +56,7 @@ import org.junit.Test;
 * a ConcurrentHashMap. We can use a HashSet to determine when instances of ConnectionInfo
  * collide and when they do not.
  */
+@Category(NeedsOwnMiniClusterTest.class)
 public class SecureUserConnectionsIT {
 private static final Log LOG = LogFactory.getLog(SecureUserConnectionsIT.class);
 private static final int KDC_START_ATTEMPTS = 10;



[16/28] phoenix git commit: PHOENIX-5008 (Addendum): CQSI.init should not bubble up RetriableUpgradeException to client in case of an UpgradeRequiredException

2018-11-27 Thread pboado
PHOENIX-5008 (Addendum): CQSI.init should not bubble up RetriableUpgradeException to client in case of an UpgradeRequiredException


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/bcf2cc7f
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/bcf2cc7f
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/bcf2cc7f

Branch: refs/heads/4.x-cdh5.15
Commit: bcf2cc7f69a4a107229a01e514c9f6ec7fe4d534
Parents: f33f7d7
Author: Chinmay Kulkarni 
Authored: Wed Nov 14 01:11:53 2018 +
Committer: Pedro Boado 
Committed: Tue Nov 27 15:11:50 2018 +

--
 .../phoenix/end2end/SystemCatalogCreationOnConnectionIT.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/bcf2cc7f/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
index eadd391..7a5f80c 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
@@ -504,7 +504,7 @@ public class SystemCatalogCreationOnConnectionIT {
  */
     private Set<String> getHBaseTables() throws IOException {
         Set<String> tables = new HashSet<>();
-for (TableName tn : testUtil.getAdmin().listTableNames()) {
+for (TableName tn : testUtil.getHBaseAdmin().listTableNames()) {
 tables.add(tn.getNameAsString());
 }
 return tables;



[17/28] phoenix git commit: PHOENIX-4841 staging patch commit.

2018-11-27 Thread pboado
PHOENIX-4841 staging patch commit.


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/1c656192
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/1c656192
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/1c656192

Branch: refs/heads/4.x-cdh5.15
Commit: 1c656192f6d0ea061630c7d1ef8ab3f0970e7071
Parents: bcf2cc7
Author: Daniel Wong 
Authored: Wed Oct 10 00:38:11 2018 +0100
Committer: Pedro Boado 
Committed: Tue Nov 27 15:11:54 2018 +

--
 .../org/apache/phoenix/end2end/QueryMoreIT.java | 171 +--
 .../apache/phoenix/compile/WhereOptimizer.java  |  58 ++-
 .../expression/ComparisonExpression.java|  18 +-
 .../RowValueConstructorExpressionRewriter.java  |  54 ++
 .../org/apache/phoenix/schema/RowKeySchema.java |   4 +
 ...wValueConstructorExpressionRewriterTest.java |  78 +
 6 files changed, 362 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/1c656192/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryMoreIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryMoreIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryMoreIT.java
index 04272fa..2b1d31e 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryMoreIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryMoreIT.java
@@ -17,11 +17,13 @@
  */
 package org.apache.phoenix.end2end;
 
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertNotNull;
-import static org.junit.Assert.assertNull;
-import static org.junit.Assert.assertTrue;
+import com.google.common.collect.Lists;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.TestUtil;
+import org.junit.Test;
 
 import java.sql.Connection;
 import java.sql.Date;
@@ -37,18 +39,19 @@ import java.util.Map;
 import java.util.Properties;
 
 import org.apache.hadoop.hbase.util.Base64;
-import org.apache.hadoop.hbase.util.Pair;
-import org.apache.phoenix.jdbc.PhoenixConnection;
-import org.apache.phoenix.query.QueryServices;
-import org.apache.phoenix.util.PhoenixRuntime;
-import org.apache.phoenix.util.TestUtil;
-import org.junit.Test;
 
-import com.google.common.collect.Lists;
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
+import static org.junit.Assert.assertTrue;
 
 
 public class QueryMoreIT extends ParallelStatsDisabledIT {
 
+    private final String TENANT_SPECIFIC_URL1 = getUrl() + ';' + TENANT_ID_ATTRIB + "=tenant1";
+
 private String dataTableName;
 //queryAgainstTenantSpecificView = true, dataTableSalted = true 
 @Test
@@ -510,4 +513,148 @@ public class QueryMoreIT extends ParallelStatsDisabledIT {
 stmt.execute();
 }
 }
+
+@Test public void testRVCWithDescAndAscendingPK() throws Exception {
+final Connection conn = DriverManager.getConnection(getUrl());
+String fullTableName = generateUniqueName();
+try (Statement stmt = conn.createStatement()) {
+        try (Statement stmt = conn.createStatement()) {
+            stmt.execute("CREATE TABLE " + fullTableName + "(\n"
+                    + "ORGANIZATION_ID CHAR(15) NOT NULL,\n" + "SCORE VARCHAR NOT NULL,\n"
+                    + "ENTITY_ID VARCHAR NOT NULL\n"
+                    + "CONSTRAINT PAGE_SNAPSHOT_PK PRIMARY KEY (\n"
+                    + "ORGANIZATION_ID,\n" + "SCORE DESC,\n" + "ENTITY_ID\n"
+                    + ")\n" + ") MULTI_TENANT=TRUE");
+        }
+
+        conn.createStatement().execute("UPSERT INTO " + fullTableName + " VALUES ('org1','c','1')");
+        conn.createStatement().execute("UPSERT INTO " + fullTableName + " VALUES ('org1','b','3')");
+        conn.createStatement().execute("UPSERT INTO " + fullTableName + " VALUES ('org1','b','4')");
+        conn.createStatement().execute("UPSERT INTO " + fullTableName + " VALUES ('org1','a','2')");
+        conn.commit();
+
+        try (Statement stmt = conn.createStatement()) {
+            final ResultSet rs =
+                    stmt.executeQuery("SELECT score, entity_id \n" + "FROM " + fullTableName + "\n"
+                            + "WHERE organization_id = 'org1'\n"
+                            + "AND (score, entity_id) < ('b', '4')\n"
+                            + "ORDER BY score DESC, 

[09/28] phoenix git commit: PHOENIX-4996: Refactor PTableImpl to use Builder Pattern

2018-11-27 Thread pboado
PHOENIX-4996: Refactor PTableImpl to use Builder Pattern


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/1767244a
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/1767244a
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/1767244a

Branch: refs/heads/4.x-cdh5.15
Commit: 1767244a04e90b9d0c39b1f149342ee02e5c9a9d
Parents: 7eb336d
Author: Chinmay Kulkarni 
Authored: Fri Nov 2 21:00:09 2018 +
Committer: Pedro Boado 
Committed: Tue Nov 27 15:11:22 2018 +

--
 .../apache/phoenix/compile/DeleteCompiler.java  |5 +-
 .../apache/phoenix/compile/FromCompiler.java|   66 +-
 .../apache/phoenix/compile/JoinCompiler.java|   53 +-
 .../compile/TupleProjectionCompiler.java|   60 +-
 .../apache/phoenix/compile/UnionCompiler.java   |   41 +-
 .../apache/phoenix/compile/UpsertCompiler.java  |   12 +-
 .../coprocessor/MetaDataEndpointImpl.java   |   96 +-
 .../UngroupedAggregateRegionObserver.java   |6 +-
 .../coprocessor/WhereConstantParser.java|3 +-
 .../query/ConnectionlessQueryServicesImpl.java  |9 +-
 .../apache/phoenix/schema/MetaDataClient.java   |  215 ++-
 .../apache/phoenix/schema/PMetaDataImpl.java|   28 +-
 .../org/apache/phoenix/schema/PTableImpl.java   | 1259 +++---
 .../org/apache/phoenix/schema/TableRef.java |   17 +-
 .../phoenix/execute/CorrelatePlanTest.java  |   32 +-
 .../execute/LiteralResultIteratorPlanTest.java  |   33 +-
 16 files changed, 1303 insertions(+), 632 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/1767244a/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
index 583085e..8c9a930 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
@@ -89,7 +89,6 @@ import org.apache.phoenix.schema.types.PLong;
 import org.apache.phoenix.transaction.PhoenixTransactionProvider.Feature;
 import org.apache.phoenix.util.ByteUtil;
 import org.apache.phoenix.util.IndexUtil;
-import org.apache.phoenix.util.MetaDataUtil;
 import org.apache.phoenix.util.ScanUtil;
 
 import com.google.common.base.Preconditions;
@@ -615,7 +614,9 @@ public class DeleteCompiler {
 }
 });
 }
-        PTable projectedTable = PTableImpl.makePTable(table, PTableType.PROJECTED, adjustedProjectedColumns);
+        PTable projectedTable = PTableImpl.builderWithColumns(table, adjustedProjectedColumns)
+                .setType(PTableType.PROJECTED)
+                .build();
         final TableRef projectedTableRef = new TableRef(projectedTable, targetTableRef.getLowerBoundTimeStamp(), targetTableRef.getTimeStamp());
 
 QueryPlan bestPlanToBe = dataPlan;

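The hunk above replaces a multi-argument factory call with a chained builder. A minimal sketch of that builder style follows; ImmutableTable and its field names are hypothetical, not Phoenix's real PTableImpl.

```java
// Sketch of the builder refactor: a static factory seeds a builder from an
// existing instance, setters are chained, and build() returns a new immutable object.
public class ImmutableTable {
    private final String name;
    private final String type;

    private ImmutableTable(Builder b) {
        this.name = b.name;
        this.type = b.type;
    }

    // Analogous to builderWithColumns(table, ...): copy state, then override parts.
    public static Builder builderFrom(ImmutableTable t) {
        return new Builder().setName(t.name).setType(t.type);
    }

    public static class Builder {
        private String name;
        private String type;

        public Builder setName(String name) { this.name = name; return this; }
        public Builder setType(String type) { this.type = type; return this; }
        public ImmutableTable build() { return new ImmutableTable(this); }
    }

    @Override
    public String toString() { return name + ":" + type; }

    public static void main(String[] args) {
        ImmutableTable base = new Builder().setName("T1").setType("TABLE").build();
        // As in the patch: derive a modified copy via the builder.
        ImmutableTable projected = builderFrom(base).setType("PROJECTED").build();
        System.out.println(projected); // → T1:PROJECTED
    }
}
```

The design win over a positional factory like makePTable(...) is that each overridden attribute is named at the call site, so adding fields does not break existing callers.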
http://git-wip-us.apache.org/repos/asf/phoenix/blob/1767244a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
index efc66a9..2701af0 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
@@ -32,8 +32,6 @@ import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.coprocessor.MetaDataProtocol;
 import org.apache.phoenix.coprocessor.MetaDataProtocol.MetaDataMutationResult;
 import org.apache.phoenix.coprocessor.MetaDataProtocol.MutationCode;
-import org.apache.phoenix.exception.SQLExceptionCode;
-import org.apache.phoenix.exception.SQLExceptionInfo;
 import org.apache.phoenix.expression.Expression;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.parse.AliasedNode;
@@ -82,6 +80,7 @@ import org.apache.phoenix.schema.PTableImpl;
 import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.ProjectedColumn;
+import org.apache.phoenix.schema.RowKeySchema;
 import org.apache.phoenix.schema.SchemaNotFoundException;
 import org.apache.phoenix.schema.SortOrder;
 import org.apache.phoenix.schema.TableNotFoundException;
@@ -284,7 +283,8 @@ public class FromCompiler {
 column.getTimestamp());
 projectedColumns.add(projectedColumn);
 }
-PTable t = PTableImpl.makePTable(table, projectedColumns);
+PTable t = 

[20/28] phoenix git commit: PHOENIX-5024 - Cleanup anonymous inner classes in PostDDLCompiler

2018-11-27 Thread pboado
PHOENIX-5024 - Cleanup anonymous inner classes in PostDDLCompiler


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/590f88bd
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/590f88bd
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/590f88bd

Branch: refs/heads/4.x-cdh5.15
Commit: 590f88bdc0d93771e0659f0e20f67da0d99e001d
Parents: b28a241
Author: Geoffrey Jacoby 
Authored: Fri Nov 16 17:55:49 2018 +
Committer: Pedro Boado 
Committed: Tue Nov 27 15:12:02 2018 +

--
 .../apache/phoenix/compile/PostDDLCompiler.java | 478 ++-
 1 file changed, 258 insertions(+), 220 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/590f88bd/phoenix-core/src/main/java/org/apache/phoenix/compile/PostDDLCompiler.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/PostDDLCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/PostDDLCompiler.java
index 709534e..a74c5f1 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/PostDDLCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/PostDDLCompiler.java
@@ -87,248 +87,286 @@ public class PostDDLCompiler {
 final long timestamp) throws SQLException {
 PhoenixStatement statement = new PhoenixStatement(connection);
 final StatementContext context = new StatementContext(
-statement, 
-new ColumnResolver() {
+statement,
+new MultipleTableRefColumnResolver(tableRefs),
+scan,
+new SequenceManager(statement));
+        return new PostDDLMutationPlan(context, tableRefs, timestamp, emptyCF, deleteList, projectCFs);
+}
 
-            @Override
-            public List<TableRef> getTables() {
-                return tableRefs;
-            }
+    private static class MultipleTableRefColumnResolver implements ColumnResolver {
 
-            @Override
-            public TableRef resolveTable(String schemaName, String tableName) throws SQLException {
-                throw new UnsupportedOperationException();
-            }
+        private final List<TableRef> tableRefs;
 
-            @Override
-            public ColumnRef resolveColumn(String schemaName, String tableName, String colName)
-                    throws SQLException {
-                throw new UnsupportedOperationException();
-            }
+        public MultipleTableRefColumnResolver(List<TableRef> tableRefs) {
+            this.tableRefs = tableRefs;
+        }
 
-            @Override
-            public List<PFunction> getFunctions() {
-                return Collections.emptyList();
-            }
-
-            @Override
-            public PFunction resolveFunction(String functionName) throws SQLException {
-                throw new FunctionNotFoundException(functionName);
-            }
-
-            @Override
-            public boolean hasUDFs() {
-                return false;
-            }
-
-            @Override
-            public PSchema resolveSchema(String schemaName) throws SQLException {
-                throw new SchemaNotFoundException(schemaName);
-            }
-
-            @Override
-            public List<PSchema> getSchemas() {
-                throw new UnsupportedOperationException();
-            }
-
-            },
-scan,
-new SequenceManager(statement));
-return new BaseMutationPlan(context, Operation.UPSERT /* FIXME */) {
-
-@Override
-public MutationState execute() throws SQLException {
-if (tableRefs.isEmpty()) {
-return new MutationState(0, 1000, connection);
-}
-boolean wasAutoCommit = connection.getAutoCommit();
-try {
-connection.setAutoCommit(true);
-SQLException sqlE = null;
-/*
- * Handles:
- * 1) deletion of 

[12/28] phoenix git commit: PHOENIX-5013 Increase timeout for Tephra discovery service
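The PHOENIX-5024 diff above replaces an anonymous ColumnResolver with a named static nested class. A minimal sketch of that refactor pattern follows; Resolver, TableRefResolver, and the method names are simplified stand-ins, not Phoenix's real interfaces.

```java
import java.util.Collections;
import java.util.List;

// Sketch: extracting an anonymous inner class into a named static nested class.
// Captured state (tableRefs) becomes an explicit constructor parameter.
public class ExtractAnonymousClassSketch {

    interface Resolver {
        List<String> getTables();
    }

    // Before: the anonymous class captures tableRefs implicitly from the method.
    static Resolver anonymous(final List<String> tableRefs) {
        return new Resolver() {
            @Override
            public List<String> getTables() {
                return tableRefs;
            }
        };
    }

    // After: the dependency is explicit, the class has a name that can appear in
    // stack traces, and it can be unit-tested in isolation.
    static class TableRefResolver implements Resolver {
        private final List<String> tableRefs;

        TableRefResolver(List<String> tableRefs) {
            this.tableRefs = tableRefs;
        }

        @Override
        public List<String> getTables() {
            return tableRefs;
        }
    }

    public static void main(String[] args) {
        List<String> refs = Collections.singletonList("T1");
        // Both forms behave identically; only the structure changes.
        System.out.println(anonymous(refs).getTables().equals(new TableRefResolver(refs).getTables())); // → true
    }
}
```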

2018-11-27 Thread pboado
PHOENIX-5013 Increase timeout for Tephra discovery service


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/a0e98599
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/a0e98599
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/a0e98599

Branch: refs/heads/4.x-cdh5.15
Commit: a0e98599b8ffeca26c1d316d59585ccc7df6daa9
Parents: b296ddc
Author: James Taylor 
Authored: Sat Nov 10 19:07:02 2018 +
Committer: Pedro Boado 
Committed: Tue Nov 27 15:11:39 2018 +

--
 .../apache/phoenix/query/QueryServicesTestImpl.java   |  6 +++---
 .../test/java/org/apache/phoenix/util/TestUtil.java   | 14 ++
 2 files changed, 17 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/a0e98599/phoenix-core/src/test/java/org/apache/phoenix/query/QueryServicesTestImpl.java
--
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/query/QueryServicesTestImpl.java b/phoenix-core/src/test/java/org/apache/phoenix/query/QueryServicesTestImpl.java
index 841abb6..49fb8e8 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/query/QueryServicesTestImpl.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/query/QueryServicesTestImpl.java
@@ -25,8 +25,8 @@ import org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec;
 import org.apache.phoenix.transaction.OmidTransactionProvider;
 import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.TestUtil;
 import org.apache.tephra.TxConstants;
-import org.apache.twill.internal.utils.Networks;
 
 
 /**
@@ -129,12 +129,12 @@ public final class QueryServicesTestImpl extends BaseQueryServicesImpl {
 .set(TxConstants.Manager.CFG_DO_PERSIST, false)
         .set(TxConstants.Service.CFG_DATA_TX_CLIENT_RETRY_STRATEGY, "n-times")
         .set(TxConstants.Service.CFG_DATA_TX_CLIENT_ATTEMPTS, 1)
-        .set(TxConstants.Service.CFG_DATA_TX_BIND_PORT, Networks.getRandomPort())
+        .set(TxConstants.Service.CFG_DATA_TX_BIND_PORT, TestUtil.getRandomPort())
         .set(TxConstants.Manager.CFG_TX_SNAPSHOT_DIR, Files.createTempDir().getAbsolutePath())
         .set(TxConstants.Manager.CFG_TX_TIMEOUT, DEFAULT_TXN_TIMEOUT_SECONDS)
         .set(TxConstants.Manager.CFG_TX_SNAPSHOT_INTERVAL, 5L)
         // setup default test configs for Omid
-        .set(OmidTransactionProvider.OMID_TSO_PORT, Networks.getRandomPort())
+        .set(OmidTransactionProvider.OMID_TSO_PORT, TestUtil.getRandomPort())
 ;
 }
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/a0e98599/phoenix-core/src/test/java/org/apache/phoenix/util/TestUtil.java
--
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/util/TestUtil.java b/phoenix-core/src/test/java/org/apache/phoenix/util/TestUtil.java
index f0a26b9..f3faa0c 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/util/TestUtil.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/util/TestUtil.java
@@ -36,6 +36,7 @@ import static org.junit.Assert.fail;
 import java.io.File;
 import java.io.IOException;
 import java.math.BigDecimal;
+import java.net.ServerSocket;
 import java.sql.Connection;
 import java.sql.Date;
 import java.sql.DriverManager;
@@ -1105,4 +1106,17 @@ public class TestUtil {
 }
 return filteredData;
 }
+
+/**
+ * Find a random free port in localhost for binding.
+ * @return A port number or -1 for failure.
+ */
+public static int getRandomPort() {
+try (ServerSocket socket = new ServerSocket(0)) {
+socket.setReuseAddress(true);
+return socket.getLocalPort();
+} catch (IOException e) {
+return -1;
+}
+}
 }

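The getRandomPort helper added above relies on binding a ServerSocket to port 0, which asks the OS for any free ephemeral port. A self-contained sketch of the same technique (class name here is a stand-in, not Phoenix's TestUtil):

```java
import java.io.IOException;
import java.net.ServerSocket;

// Sketch of ephemeral-port allocation for tests: bind to port 0, let the OS
// pick a free port, record it, then close the socket so the service under
// test can bind it. setReuseAddress reduces the chance a TIME_WAIT state
// blocks the immediate re-bind.
public class RandomPortSketch {
    public static int getRandomPort() {
        try (ServerSocket socket = new ServerSocket(0)) {
            socket.setReuseAddress(true);
            return socket.getLocalPort();
        } catch (IOException e) {
            return -1; // mirror the patch: -1 signals failure
        }
    }

    public static void main(String[] args) {
        System.out.println("allocated ephemeral port: " + getRandomPort());
    }
}
```

Note there is a small race: another process could grab the port between close and re-bind, which is why the patch pairs this with retry logic elsewhere in the test setup.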


[27/28] phoenix git commit: PHOENIX-5026 Addendum; test-fix.

2018-11-27 Thread pboado
PHOENIX-5026 Addendum; test-fix.


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/027d21e2
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/027d21e2
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/027d21e2

Branch: refs/heads/4.x-cdh5.15
Commit: 027d21e2a87aadaae030d9a06fc25ec8a59e4267
Parents: f6b7594
Author: Lars Hofhansl 
Authored: Thu Nov 22 21:11:19 2018 +
Committer: Pedro Boado 
Committed: Tue Nov 27 15:12:22 2018 +

--
 .../java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java  | 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/027d21e2/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
index c56296c..6fad376 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
@@ -192,6 +192,8 @@ public class UpsertSelectAutoCommitIT extends ParallelStatsDisabledIT {
 int upsertCount = stmt.executeUpdate();
 assertEquals((int)Math.pow(2, i), upsertCount);
 }
+// cleanup after ourselves
+conn.createStatement().execute("DROP SEQUENCE keys");
 admin.close();
 conn.close();
 }



[15/28] phoenix git commit: PHOENIX-5008: CQSI.init should not bubble up RetriableUpgradeException to client in case of an UpgradeRequiredException

2018-11-27 Thread pboado
PHOENIX-5008: CQSI.init should not bubble up RetriableUpgradeException to client in case of an UpgradeRequiredException


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/f33f7d7c
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/f33f7d7c
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/f33f7d7c

Branch: refs/heads/4.x-cdh5.15
Commit: f33f7d7c92ab75520b15fa158c7feccfb7041cae
Parents: 7afa954
Author: Chinmay Kulkarni 
Authored: Sat Nov 10 03:22:57 2018 +
Committer: Pedro Boado 
Committed: Tue Nov 27 15:11:48 2018 +

--
 .../SystemCatalogCreationOnConnectionIT.java| 97 +---
 .../query/ConnectionQueryServicesImpl.java  |  4 +-
 2 files changed, 84 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/f33f7d7c/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
index a1685c44..eadd391 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
@@ -21,9 +21,11 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
+import static org.apache.phoenix.query.BaseTest.generateUniqueName;
 
 import java.io.IOException;
 import java.sql.Connection;
+import java.sql.DriverManager;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.HashMap;
@@ -42,6 +44,7 @@ import org.apache.phoenix.coprocessor.MetaDataProtocol;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.exception.UpgradeRequiredException;
 import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDriver;
 import org.apache.phoenix.jdbc.PhoenixEmbeddedDriver;
 import org.apache.phoenix.jdbc.PhoenixTestDriver;
 import org.apache.phoenix.query.ConnectionQueryServices;
@@ -69,6 +72,12 @@ public class SystemCatalogCreationOnConnectionIT {
 private static final String PHOENIX_SYSTEM_CATALOG = "SYSTEM.CATALOG";
 private static final String EXECUTE_UPGRADE_COMMAND = "EXECUTE UPGRADE";
 private static final String MODIFIED_MAX_VERSIONS ="5";
+    private static final String CREATE_TABLE_STMT = "CREATE TABLE %s"
+        + " (k1 VARCHAR NOT NULL, k2 VARCHAR, CONSTRAINT PK PRIMARY KEY(K1,K2))";
+    private static final String SELECT_STMT = "SELECT * FROM %s";
+    private static final String DELETE_STMT = "DELETE FROM %s";
+    private static final String CREATE_INDEX_STMT = "CREATE INDEX DUMMY_IDX ON %s (K1) INCLUDE (K2)";
+    private static final String UPSERT_STMT = "UPSERT INTO %s VALUES ('A', 'B')";
 
     private static final Set<String> PHOENIX_SYSTEM_TABLES = new HashSet<>(Arrays.asList(
       "SYSTEM.CATALOG", "SYSTEM.SEQUENCE", "SYSTEM.STATS", "SYSTEM.FUNCTION",
@@ -167,12 +176,8 @@ public class SystemCatalogCreationOnConnectionIT {
         UpgradeUtil.doNotUpgradeOnFirstConnection(propsDoNotUpgradePropSet);
         SystemCatalogCreationOnConnectionIT.PhoenixSysCatCreationTestingDriver driver =
           new SystemCatalogCreationOnConnectionIT.PhoenixSysCatCreationTestingDriver(ReadOnlyProps.EMPTY_PROPS);
-        try {
-            driver.getConnectionQueryServices(getJdbcUrl(), propsDoNotUpgradePropSet);
-            fail("Client should not be able to create SYSTEM.CATALOG since we set the doNotUpgrade property");
-        } catch (Exception e) {
-            assertTrue(e instanceof UpgradeRequiredException);
-        }
+
+        driver.getConnectionQueryServices(getJdbcUrl(), propsDoNotUpgradePropSet);
         hbaseTables = getHBaseTables();
         assertFalse(hbaseTables.contains(PHOENIX_SYSTEM_CATALOG) || hbaseTables.contains(PHOENIX_NAMESPACE_MAPPED_SYSTEM_CATALOG));
         assertTrue(hbaseTables.size() == 0);
@@ -428,6 +433,70 @@ public class SystemCatalogCreationOnConnectionIT {
         assertEquals(Integer.parseInt(MODIFIED_MAX_VERSIONS), verifyModificationTableMetadata(driver, PHOENIX_SYSTEM_CATALOG));
     }
 
+    // Test the case when an end-user uses the vanilla PhoenixDriver to create a connection and a
+    // requirement for upgrade is detected. In this case, the user should get a connection on which
+    // they are only able to run "EXECUTE UPGRADE"
+    @Test
+    public void testExecuteUpgradeSameConnWithPhoenixDriver() throws Exception {
+// Register 

[26/28] phoenix git commit: PHOENIX-5026 Add client setting to disable server side mutations.

2018-11-27 Thread pboado
PHOENIX-5026 Add client setting to disable server side mutations.


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/f6b75942
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/f6b75942
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/f6b75942

Branch: refs/heads/4.x-cdh5.15
Commit: f6b75942701dbf90d7dc3d69be6265130e69ff94
Parents: bb17957
Author: Lars Hofhansl 
Authored: Thu Nov 22 03:53:14 2018 +
Committer: Pedro Boado 
Committed: Tue Nov 27 15:12:18 2018 +

--
 .../org/apache/phoenix/end2end/DeleteIT.java|  62 ---
 .../end2end/UpsertSelectAutoCommitIT.java   |  26 +++--
 .../apache/phoenix/end2end/UpsertSelectIT.java  | 103 +--
 .../apache/phoenix/compile/DeleteCompiler.java  |   6 +-
 .../apache/phoenix/compile/UpsertCompiler.java  |   6 +-
 .../org/apache/phoenix/query/QueryServices.java |   3 +
 .../phoenix/query/QueryServicesOptions.java |   3 +
 7 files changed, 159 insertions(+), 50 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/f6b75942/phoenix-core/src/it/java/org/apache/phoenix/end2end/DeleteIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DeleteIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DeleteIT.java
index 5e65927..39210fa 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DeleteIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DeleteIT.java
@@ -40,12 +40,26 @@ import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.QueryUtil;
 import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameters;
 
-
+@RunWith(Parameterized.class)
 public class DeleteIT extends ParallelStatsDisabledIT {
 private static final int NUMBER_OF_ROWS = 20;
 private static final int NTH_ROW_NULL = 5;
-
+
+private final String allowServerSideMutations;
+
+public DeleteIT(String allowServerSideMutations) {
+this.allowServerSideMutations = allowServerSideMutations;
+}
+
+    @Parameters(name="DeleteIT_allowServerSideMutations={0}") // name is used by failsafe as file name in reports
+public static Object[] data() {
+return new Object[] {"true", "false"};
+}
+
     private static String initTableValues(Connection conn) throws SQLException {
 String tableName = generateUniqueName();
 ensureTableCreated(getUrl(), tableName, "IntIntKeyTest");
@@ -75,7 +89,9 @@ public class DeleteIT extends ParallelStatsDisabledIT {
 }
 
 private void testDeleteFilter(boolean autoCommit) throws Exception {
-Connection conn = DriverManager.getConnection(getUrl());
+Properties props = new Properties();
+        props.setProperty(QueryServices.ENABLE_SERVER_SIDE_MUTATIONS, allowServerSideMutations);
+Connection conn = DriverManager.getConnection(getUrl(), props);
 String tableName = initTableValues(conn);
 
 assertTableCount(conn, tableName, NUMBER_OF_ROWS);
@@ -102,7 +118,9 @@ public class DeleteIT extends ParallelStatsDisabledIT {
 }
 
     private void testDeleteByFilterAndRow(boolean autoCommit) throws SQLException {
-Connection conn = DriverManager.getConnection(getUrl());
+Properties props = new Properties();
+        props.setProperty(QueryServices.ENABLE_SERVER_SIDE_MUTATIONS, allowServerSideMutations);
+Connection conn = DriverManager.getConnection(getUrl(), props);
 String tableName = initTableValues(conn);
 
 assertTableCount(conn, tableName, NUMBER_OF_ROWS);
@@ -167,7 +185,9 @@ public class DeleteIT extends ParallelStatsDisabledIT {
 }
 
     private void testDeleteRange(boolean autoCommit, boolean createIndex, boolean local) throws Exception {
-Connection conn = DriverManager.getConnection(getUrl());
+Properties props = new Properties();
+        props.setProperty(QueryServices.ENABLE_SERVER_SIDE_MUTATIONS, allowServerSideMutations);
+Connection conn = DriverManager.getConnection(getUrl(), props);
 String tableName = initTableValues(conn);
 String indexName = generateUniqueName();
 String localIndexName = generateUniqueName();
@@ -298,7 +318,9 @@ public class DeleteIT extends ParallelStatsDisabledIT {
     private void testDeleteAllFromTableWithIndex(boolean autoCommit, boolean isSalted, boolean localIndex) throws Exception {
 Connection con = null;
 try {
-con = DriverManager.getConnection(getUrl());
+Properties props = new Properties();
+

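The @RunWith(Parameterized.class) change above runs every DeleteIT test once per value returned by the @Parameters factory. A plain-Java sketch of what the JUnit runner does with those pieces (names are stand-ins; no JUnit dependency here):

```java
// Sketch of JUnit4's Parameterized expansion: for each element of data(),
// the runner constructs the test class with that parameter and runs each
// test method against it, so one test body covers the whole value matrix.
public class ParameterizedSketch {
    private final String allowServerSideMutations;

    ParameterizedSketch(String allowServerSideMutations) {
        this.allowServerSideMutations = allowServerSideMutations;
    }

    // Mirrors the @Parameters factory in the patch.
    static Object[] data() {
        return new Object[] {"true", "false"};
    }

    // Stand-in for a real test body that reads the parameter.
    void testDeleteFilter() {
        System.out.println("running with allowServerSideMutations=" + allowServerSideMutations);
    }

    public static void main(String[] args) {
        // This loop is what @RunWith(Parameterized.class) automates.
        for (Object p : data()) {
            new ParameterizedSketch((String) p).testDeleteFilter();
        }
    }
}
```

The name pattern in @Parameters matters operationally: failsafe uses it in report file names, which is why the patch sets an explicit name instead of the default "[0]", "[1]".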
[24/28] phoenix git commit: PHOENIX-5029 Increase parallelism of tests to decrease test time

2018-11-27 Thread pboado
PHOENIX-5029 Increase parallelism of tests to decrease test time


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/d2e4a737
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/d2e4a737
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/d2e4a737

Branch: refs/heads/4.x-cdh5.15
Commit: d2e4a737e87faa2b7148404e73ae047236bd2dbc
Parents: 1a09ebf
Author: James Taylor 
Authored: Sat Nov 17 23:18:39 2018 +
Committer: Pedro Boado 
Committed: Tue Nov 27 15:12:13 2018 +

--
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/d2e4a737/pom.xml
--
diff --git a/pom.xml b/pom.xml
index b6577ec..bcd8130 100644
--- a/pom.xml
+++ b/pom.xml
@@ -165,7 +165,7 @@
 
 
 8
-4
+7
 false
 false
 



[1/3] phoenix git commit: PHOENIX-4971 Drop index will execute successfully using Incorrect name of parent tables

2018-11-27 Thread pboado
Repository: phoenix
Updated Branches:
  refs/heads/4.x-cdh5.15 505551251 -> f8836f7a2


PHOENIX-4971 Drop index will execute successfully using Incorrect name of parent tables


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/ce3c451f
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/ce3c451f
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/ce3c451f

Branch: refs/heads/4.x-cdh5.15
Commit: ce3c451fc6e3dfd598b2de302901f5d1195bc3e3
Parents: 5055512
Author: Jaanai 
Authored: Sat Nov 24 17:22:49 2018 +
Committer: Pedro Boado 
Committed: Tue Nov 27 15:20:58 2018 +

--
 .../java/org/apache/phoenix/end2end/ViewIT.java | 76 ++--
 .../phoenix/end2end/index/DropMetadataIT.java   | 23 +-
 .../phoenix/end2end/index/IndexMetadataIT.java  |  5 +-
 .../coprocessor/MetaDataEndpointImpl.java   |  2 +-
 .../phoenix/exception/SQLExceptionCode.java |  2 +
 .../apache/phoenix/schema/MetaDataClient.java   | 16 +
 6 files changed, 83 insertions(+), 41 deletions(-)
--

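This commit makes `DROP INDEX idx ON wrong_table` fail by adding a new `SQLExceptionCode` entry, `PARENT_TABLE_NOT_FOUND(536, "42913", ...)` (the values appear in the related diffs in this digest). A minimal sketch mirroring the shape of that `(errorCode, SQLState, message)` enum triple — a stand-in, not Phoenix's actual class:

```java
public class ParentTableNotFoundSketch {
    // Mirrors the shape of the SQLExceptionCode entry added by PHOENIX-4971.
    // The numeric code, SQLState, and message are taken verbatim from the diff.
    enum Code {
        PARENT_TABLE_NOT_FOUND(536, "42913",
            "Can't drop the index because the parent table in the DROP statement is incorrect.");

        final int errorCode;
        final String sqlState;
        final String message;

        Code(int errorCode, String sqlState, String message) {
            this.errorCode = errorCode;
            this.sqlState = sqlState;
            this.message = message;
        }
    }

    public static void main(String[] args) {
        Code c = Code.PARENT_TABLE_NOT_FOUND;
        System.out.println(c.errorCode + " " + c.sqlState);
    }
}
```

In the real test (visible in the reverted `DropMetadataIT` diff later in this digest), the client compares `e.getErrorCode()` from the thrown `SQLException` against this constant.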

http://git-wip-us.apache.org/repos/asf/phoenix/blob/ce3c451f/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
index 090ccaa..6318dca 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
@@ -59,6 +59,7 @@ import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.HColumnDescriptor;
 import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Admin;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.coprocessor.ObserverContext;
@@ -908,60 +909,61 @@ public class ViewIT extends SplitSystemCatalogIT {
 props.setProperty(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, 
Boolean.TRUE.toString());
 
 try (Connection conn = DriverManager.getConnection(getUrl(), props);
-HBaseAdmin admin =
-conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
+Admin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
 
 conn.createStatement().execute("CREATE SCHEMA " + NS);
 
 // test for a view that is in non-default schema
-HTableDescriptor desc = new HTableDescriptor(TableName.valueOf(NS, TBL));
-desc.addFamily(new HColumnDescriptor(CF));
-admin.createTable(desc);
+{
+HTableDescriptor desc = new HTableDescriptor(TableName.valueOf(NS, TBL));
+desc.addFamily(new HColumnDescriptor(CF));
+admin.createTable(desc);
 
-String view1 = NS + "." + TBL;
-conn.createStatement().execute(
-"CREATE VIEW " + view1 + " (PK VARCHAR PRIMARY KEY, " + CF + ".COL VARCHAR)");
+String view1 = NS + "." + TBL;
+conn.createStatement().execute(
+"CREATE VIEW " + view1 + " (PK VARCHAR PRIMARY KEY, " + CF + ".COL VARCHAR)");
 
-assertTrue(QueryUtil
-.getExplainPlan(
+assertTrue(QueryUtil.getExplainPlan(
 conn.createStatement().executeQuery("explain select * from " + view1))
-.contains(NS + ":" + TBL));
+.contains(NS + ":" + TBL));
 
-
+conn.createStatement().execute("DROP VIEW " + view1);
+}
+
+// test for a view whose name contains a dot (e.g. "AAA.BBB") in default schema (for backward compatibility)
+{
+HTableDescriptor desc = new HTableDescriptor(TableName.valueOf(NS + "." + TBL));
+desc.addFamily(new HColumnDescriptor(CF));
+admin.createTable(desc);
 
-// test for a view whose name contains a dot (e.g. "AAA.BBB") in default schema (for
-// backward compatibility)
-desc = new HTableDescriptor(TableName.valueOf(NS + "." + TBL));
-desc.addFamily(new HColumnDescriptor(CF));
-admin.createTable(desc);
+String view2 = "\"" + NS + "." + TBL + "\"";
+conn.createStatement().execute(
+"CREATE VIEW " + view2 + " (PK VARCHAR PRIMARY KEY, " + CF + ".COL VARCHAR)");
 
-String view2 = "\"" + NS + "." + TBL + "\"";
-conn.createStatement().execute(
-"CREATE VIEW " + 

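The `ViewIT` diff above shows why the dotted-name case needs quoting: an unquoted `NS.TBL` is parsed as schema `NS`, table `TBL`, while a double-quoted `"NS.TBL"` is a single identifier in the default schema. A minimal sketch of the quoting the test builds (`NS` and `TBL` stand in for the test's constants):

```java
public class DottedViewNameSketch {
    // Wrap a dotted name in double quotes so it is treated as one identifier
    // rather than as schema.table, matching the view2 construction in ViewIT.
    static String quoted(String ns, String tbl) {
        return "\"" + ns + "." + tbl + "\"";
    }

    public static void main(String[] args) {
        System.out.println(quoted("NS", "TBL"));
    }
}
```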
[3/3] phoenix git commit: PHOENIX-4765 Add client and server side config property to enable rollback of splittable System Catalog if required

2018-11-27 Thread pboado
PHOENIX-4765 Add client and server side config property to enable rollback of 
splittable System Catalog if required


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/f8836f7a
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/f8836f7a
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/f8836f7a

Branch: refs/heads/4.x-cdh5.15
Commit: f8836f7a2d12273a1bfdad96a79844d1d7db08e6
Parents: b7e6f2d
Author: Thomas D'Silva 
Authored: Tue Nov 20 20:10:05 2018 +
Committer: Pedro Boado 
Committed: Tue Nov 27 15:21:03 2018 +

--
 .../apache/phoenix/end2end/SystemCatalogIT.java | 40 -
 .../coprocessor/MetaDataEndpointImpl.java   | 90 ++--
 .../phoenix/coprocessor/MetaDataProtocol.java   |  5 +-
 .../org/apache/phoenix/query/QueryServices.java | 17 
 .../phoenix/query/QueryServicesOptions.java |  2 +
 .../apache/phoenix/schema/MetaDataClient.java   | 26 +-
 6 files changed, 146 insertions(+), 34 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/f8836f7a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogIT.java
index ae09bac..1203f3c 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogIT.java
@@ -18,6 +18,7 @@
 package org.apache.phoenix.end2end;
 
 import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.fail;
 
 import java.sql.Connection;
 import java.sql.DriverManager;
@@ -31,10 +32,12 @@ import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.HBaseTestingUtility;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.RegionLocator;
+import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.query.BaseTest;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
 import org.junit.BeforeClass;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
@@ -44,11 +47,12 @@ import com.google.common.collect.Maps;
 @Category(NeedsOwnMiniClusterTest.class)
 public class SystemCatalogIT extends BaseTest {
 private HBaseTestingUtility testUtil = null;
-
+
 @BeforeClass
 public static void doSetup() throws Exception {
 Map<String, String> serverProps = Maps.newHashMapWithExpectedSize(1);
 serverProps.put(QueryServices.SYSTEM_CATALOG_SPLITTABLE, "false");
+serverProps.put(QueryServices.ALLOW_SPLITTABLE_SYSTEM_CATALOG_ROLLBACK, "true");
 Map<String, String> clientProps = Collections.emptyMap();
 setUpTestDriver(new ReadOnlyProps(serverProps.entrySet().iterator()),
 new ReadOnlyProps(clientProps.entrySet().iterator()));
@@ -87,7 +91,8 @@ public class SystemCatalogIT extends BaseTest {
 Statement stmt = conn.createStatement();) {
 stmt.execute("DROP TABLE IF EXISTS " + tableName);
 stmt.execute("CREATE TABLE " + tableName
-+ " (TENANT_ID VARCHAR NOT NULL, PK1 VARCHAR NOT NULL, V1 VARCHAR CONSTRAINT PK PRIMARY KEY(TENANT_ID, PK1)) MULTI_TENANT=true");
++ " (TENANT_ID VARCHAR NOT NULL, PK1 VARCHAR NOT NULL, V1 VARCHAR CONSTRAINT PK " +
+"PRIMARY KEY(TENANT_ID, PK1)) MULTI_TENANT=true");
 try (Connection tenant1Conn = getTenantConnection("tenant1")) {
 String view1DDL = "CREATE VIEW " + tableName + "_view AS SELECT * FROM " + tableName;
 tenant1Conn.createStatement().execute(view1DDL);
@@ -97,7 +102,7 @@ public class SystemCatalogIT extends BaseTest {
 }
 
 private String getJdbcUrl() {
-return "jdbc:phoenix:localhost:" + testUtil.getZkCluster().getClientPort() + ":/hbase";
+return "jdbc:phoenix:localhost:" + getUtility().getZkCluster().getClientPort() + ":/hbase";
 }
 
 private Connection getTenantConnection(String tenantId) throws SQLException {
@@ -105,4 +110,31 @@ public class SystemCatalogIT extends BaseTest {
 tenantProps.setProperty(PhoenixRuntime.TENANT_ID_ATTRIB, tenantId);
 return DriverManager.getConnection(getJdbcUrl(), tenantProps);
 }
-}
+
+/**
+ * Ensure that we cannot add a column to a base table if QueryServices.BLOCK_METADATA_CHANGES_REQUIRE_PROPAGATION
+ * is true
+ */
+@Test
+public void testAddingColumnFails() throws Exception {
+ 

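The `SystemCatalogIT` diff above obtains tenant-specific connections through `getTenantConnection()`, which just sets the tenant id in the JDBC `Properties` under `PhoenixRuntime.TENANT_ID_ATTRIB`. A minimal sketch of that pattern, runnable without a cluster; the literal `"TenantId"` key below is a stand-in for the PhoenixRuntime constant and should not be hard-coded in real code:

```java
import java.util.Properties;

public class TenantConnectionSketch {
    // Build the connection Properties a tenant-specific Phoenix connection needs.
    // "TenantId" is an assumed literal; use PhoenixRuntime.TENANT_ID_ATTRIB instead.
    static Properties tenantProps(String tenantId) {
        Properties props = new Properties();
        props.setProperty("TenantId", tenantId);
        return props;
    }

    public static void main(String[] args) {
        // The test passes these props to DriverManager.getConnection(jdbcUrl, props).
        System.out.println(tenantProps("tenant1").getProperty("TenantId"));
    }
}
```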
[2/3] phoenix git commit: PHOENIX-5031 Fix TenantSpecificViewIndexIT test failures in HBase 1.2 branch

2018-11-27 Thread pboado
PHOENIX-5031 Fix TenantSpecificViewIndexIT test failures in HBase 1.2 branch


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/b7e6f2dc
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/b7e6f2dc
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/b7e6f2dc

Branch: refs/heads/4.x-cdh5.15
Commit: b7e6f2dcd034c34cabe7281bc9b60527b9c4df33
Parents: ce3c451
Author: Thomas D'Silva 
Authored: Mon Nov 26 22:48:10 2018 +
Committer: Pedro Boado 
Committed: Tue Nov 27 15:21:01 2018 +

--
 .../org/apache/phoenix/end2end/TenantSpecificViewIndexIT.java| 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/b7e6f2dc/phoenix-core/src/it/java/org/apache/phoenix/end2end/TenantSpecificViewIndexIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TenantSpecificViewIndexIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TenantSpecificViewIndexIT.java
index ea8f004..a317693 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TenantSpecificViewIndexIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TenantSpecificViewIndexIT.java
@@ -130,8 +130,8 @@ public class TenantSpecificViewIndexIT extends BaseTenantSpecificViewIndexIT {
 String sequenceNameA = getViewIndexSequenceName(PNameFactory.newName(tableName), PNameFactory.newName(tenantId2), isNamespaceEnabled);
 String sequenceNameB = getViewIndexSequenceName(PNameFactory.newName(tableName), PNameFactory.newName(tenantId1), isNamespaceEnabled);
 String sequenceSchemaName = getViewIndexSequenceSchemaName(PNameFactory.newName(tableName), isNamespaceEnabled);
-verifySequenceValue(isNamespaceEnabled? tenantId2 : null, sequenceNameA, sequenceSchemaName, -32767);
-verifySequenceValue(isNamespaceEnabled? tenantId1 : null, sequenceNameB, sequenceSchemaName, -32767);
+verifySequenceValue(isNamespaceEnabled? tenantId2 : null, sequenceNameA, sequenceSchemaName, -9223372036854775807L);
+verifySequenceValue(isNamespaceEnabled? tenantId1 : null, sequenceNameB, sequenceSchemaName, -9223372036854775807L);
 
 Properties props = new Properties();
 props.setProperty(PhoenixRuntime.TENANT_ID_ATTRIB, tenantId2);

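The constants in the PHOENIX-5031 fix above are not arbitrary: the old expected sequence value `-32767` is `Short.MIN_VALUE + 1`, and the new `-9223372036854775807L` is `Long.MIN_VALUE + 1`, consistent with the view-index sequence starting one above the minimum of a wider id type (the widening rationale is inferred, not stated in the commit). The arithmetic itself is easy to check:

```java
public class ViewIndexSequenceStartSketch {
    public static void main(String[] args) {
        // Old expected start value in the test: one above the short minimum.
        System.out.println(Short.MIN_VALUE + 1);   // -32767
        // New expected start value: one above the long minimum.
        System.out.println(Long.MIN_VALUE + 1L);   // -9223372036854775807
    }
}
```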


[10/28] phoenix git commit: Revert "PHOENIX-4971 Drop index will execute successfully using Incorrect name of parent tables"

2018-11-27 Thread pboado
Revert "PHOENIX-4971 Drop index will execute successfully using Incorrect name 
of parent tables"

This reverts commit 7b5482367eb010b5b2db285ff8bc4b345863c477.


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/1da0ad70
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/1da0ad70
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/1da0ad70

Branch: refs/heads/4.x-cdh5.15
Commit: 1da0ad70ee2d0c904d3d210c0f7584f03c102303
Parents: 1767244
Author: Thomas D'Silva 
Authored: Wed Nov 7 19:09:31 2018 +
Committer: Pedro Boado 
Committed: Tue Nov 27 15:11:26 2018 +

--
 .../phoenix/end2end/index/DropMetadataIT.java   | 24 +---
 .../phoenix/exception/SQLExceptionCode.java |  2 --
 .../apache/phoenix/schema/MetaDataClient.java   | 15 
 3 files changed, 1 insertion(+), 40 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/1da0ad70/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/DropMetadataIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/DropMetadataIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/DropMetadataIT.java
index a285526..b92ed8d 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/DropMetadataIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/DropMetadataIT.java
@@ -18,13 +18,9 @@
 package org.apache.phoenix.end2end.index;
 
 import static org.apache.phoenix.util.TestUtil.HBASE_NATIVE_SCHEMA_NAME;
-import static org.junit.Assert.assertTrue;
-import static org.junit.Assert.fail;
 
 import java.sql.Connection;
 import java.sql.DriverManager;
-import java.sql.SQLException;
-
 import java.util.Properties;
 
 import org.apache.hadoop.hbase.HColumnDescriptor;
@@ -33,7 +29,6 @@ import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.end2end.ParallelStatsDisabledIT;
-import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.util.PropertiesUtil;
@@ -61,24 +56,7 @@ public class DropMetadataIT extends ParallelStatsDisabledIT {
 String url = QueryUtil.getConnectionUrl(props, config, PRINCIPAL);
 return DriverManager.getConnection(url, props);
 }
-
-@Test
-public void testDropIndexTableHasSameNameWithDataTable() {
-String tableName = generateUniqueName();
-String indexName = "IDX_" + tableName;
-try (Connection conn = DriverManager.getConnection(getUrl())) {
-String createTable = "CREATE TABLE " + tableName + "  (id varchar not null primary key, col integer)";
-conn.createStatement().execute(createTable);
-String createIndex = "CREATE INDEX " + indexName + " on " + tableName + "(col)";
-conn.createStatement().execute(createIndex);
-String dropIndex = "DROP INDEX " + indexName + " on " + indexName;
-conn.createStatement().execute(dropIndex);
-fail("should not execute successfully");
-} catch (SQLException e) {
-assertTrue(SQLExceptionCode.PARENT_TABLE_NOT_FOUND.getErrorCode() == e.getErrorCode());
-}
-}
-
+
 @Test
 public void testDropViewKeepsHTable() throws Exception {
 Connection conn = getConnection();

http://git-wip-us.apache.org/repos/asf/phoenix/blob/1da0ad70/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java 
b/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java
index 5bffed5..d557714 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java
@@ -185,8 +185,6 @@ public enum SQLExceptionCode {
 INVALID_REPLAY_AT(533, "42910", "Value of REPLAY_AT cannot be less than zero."),
 UNEQUAL_SCN_AND_BUILD_INDEX_AT(534, "42911", "If both specified, values of CURRENT_SCN and BUILD_INDEX_AT must be equal."),
 ONLY_INDEX_UPDATABLE_AT_SCN(535, "42912", "Only an index may be updated when the BUILD_INDEX_AT property is specified"),
- PARENT_TABLE_NOT_FOUND(536, "42913", "Can't drop the index because the parent table in the DROP statement is incorrect."),
-
  /**
  * HBase and Phoenix specific implementation defined sub-classes.
  * Column family related exceptions.


[03/28] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-27 Thread pboado
http://git-wip-us.apache.org/repos/asf/phoenix/blob/678563f5/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
index 578a3af..792d08f 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
@@ -17,17 +17,7 @@
  */
 package org.apache.phoenix.end2end;
 
-import static org.apache.phoenix.util.TestUtil.ROW1;
-import static org.apache.phoenix.util.TestUtil.ROW2;
-import static org.apache.phoenix.util.TestUtil.ROW3;
-import static org.apache.phoenix.util.TestUtil.ROW4;
-import static org.apache.phoenix.util.TestUtil.ROW5;
-import static org.apache.phoenix.util.TestUtil.ROW6;
-import static org.apache.phoenix.util.TestUtil.ROW7;
-import static org.apache.phoenix.util.TestUtil.ROW8;
-import static org.apache.phoenix.util.TestUtil.ROW9;
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
-import static org.apache.phoenix.util.TestUtil.assertResultSet;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
@@ -40,83 +30,10 @@ import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.Properties;
 
-import org.apache.phoenix.jdbc.PhoenixStatement;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.junit.Test;
 
-
-public class OrderByIT extends ParallelStatsDisabledIT {
-
-@Test
-public void testMultiOrderByExpr() throws Exception {
-String tenantId = getOrganizationId();
-String tableName = initATableValues(tenantId, getDefaultSplits(tenantId), getUrl());
-String query = "SELECT entity_id FROM " + tableName + " ORDER BY b_string, entity_id";
-Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-Connection conn = DriverManager.getConnection(getUrl(), props);
-try {
-PreparedStatement statement = conn.prepareStatement(query);
-ResultSet rs = statement.executeQuery();
-assertTrue (rs.next());
-assertEquals(ROW1,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW4,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW7,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW2,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW5,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW8,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW3,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW6,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW9,rs.getString(1));
-
-assertFalse(rs.next());
-} finally {
-conn.close();
-}
-}
-
-
-@Test
-public void testDescMultiOrderByExpr() throws Exception {
-String tenantId = getOrganizationId();
-String tableName = initATableValues(tenantId, getDefaultSplits(tenantId), getUrl());
-String query = "SELECT entity_id FROM " + tableName + " ORDER BY b_string || entity_id desc";
-Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-Connection conn = DriverManager.getConnection(getUrl(), props);
-try {
-PreparedStatement statement = conn.prepareStatement(query);
-ResultSet rs = statement.executeQuery();
-assertTrue (rs.next());
-assertEquals(ROW9,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW6,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW3,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW8,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW5,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW2,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW7,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW4,rs.getString(1));
-assertTrue (rs.next());
-assertEquals(ROW1,rs.getString(1));
-
-assertFalse(rs.next());
-} finally {
-conn.close();
-}
-}
+public class OrderByIT extends BaseOrderByIT {
 
 @Test
 public void testOrderByWithPosition() throws Exception {
@@ -151,8 +68,8 @@ public class OrderByIT extends ParallelStatsDisabledIT {
 assertTrue(rs.next());
 assertEquals(1,rs.getInt(1));
 assertTrue(rs.next());
-assertEquals(1,rs.getInt(1));  
-assertFalse(rs.next());  
+  

[01/28] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-27 Thread pboado
Repository: phoenix
Updated Branches:
  refs/heads/4.x-cdh5.15 7f13f87c5 -> 505551251


http://git-wip-us.apache.org/repos/asf/phoenix/blob/678563f5/phoenix-spark/src/main/java/org/apache/phoenix/spark/SparkResultSet.java
--
diff --git 
a/phoenix-spark/src/main/java/org/apache/phoenix/spark/SparkResultSet.java 
b/phoenix-spark/src/main/java/org/apache/phoenix/spark/SparkResultSet.java
new file mode 100644
index 000..0cb8009
--- /dev/null
+++ b/phoenix-spark/src/main/java/org/apache/phoenix/spark/SparkResultSet.java
@@ -0,0 +1,1056 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.spark;
+
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.exception.SQLExceptionInfo;
+import org.apache.phoenix.util.SQLCloseable;
+import org.apache.spark.sql.Row;
+
+import java.io.InputStream;
+import java.io.Reader;
+import java.math.BigDecimal;
+import java.net.MalformedURLException;
+import java.net.URL;
+import java.sql.Array;
+import java.sql.Blob;
+import java.sql.Clob;
+import java.sql.Date;
+import java.sql.NClob;
+import java.sql.Ref;
+import java.sql.ResultSet;
+import java.sql.ResultSetMetaData;
+import java.sql.RowId;
+import java.sql.SQLException;
+import java.sql.SQLFeatureNotSupportedException;
+import java.sql.SQLWarning;
+import java.sql.SQLXML;
+import java.sql.Statement;
+import java.sql.Time;
+import java.sql.Timestamp;
+import java.util.Arrays;
+import java.util.Calendar;
+import java.util.List;
+import java.util.Map;
+
+/**
+ * Helper class to convert a List of Rows returned from a dataset to a sql ResultSet
+ */
+public class SparkResultSet implements ResultSet, SQLCloseable {
+
+private int index = -1;
+private List<Row> dataSetRows;
+private List<String> columnNames;
+private boolean wasNull = false;
+
+public SparkResultSet(List<Row> rows, String[] columnNames) {
+this.dataSetRows = rows;
+this.columnNames = Arrays.asList(columnNames);
+}
+
+private Row getCurrentRow() {
+return dataSetRows.get(index);
+}
+
+@Override
+public boolean absolute(int row) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public void afterLast() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public void beforeFirst() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public void cancelRowUpdates() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public void clearWarnings() throws SQLException {
+}
+
+@Override
+public void close() throws SQLException {
+}
+
+@Override
+public void deleteRow() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public int findColumn(String columnLabel) throws SQLException {
+return columnNames.indexOf(columnLabel.toUpperCase())+1;
+}
+
+@Override
+public boolean first() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public Array getArray(int columnIndex) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public Array getArray(String columnLabel) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public InputStream getAsciiStream(int columnIndex) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public InputStream getAsciiStream(String columnLabel) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+private void checkOpen() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+private void checkCursorState() throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+@Override
+public BigDecimal getBigDecimal(int columnIndex) throws SQLException {
+throw new SQLFeatureNotSupportedException();
+}
+
+

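One detail worth noting in the `SparkResultSet` wrapper above: `findColumn()` returns `columnNames.indexOf(columnLabel.toUpperCase()) + 1`, shifting the 0-based `List.indexOf` result to JDBC's 1-based column indexing. As written, an unknown label yields 0 (`indexOf` returns -1), where the JDBC contract would normally call for an `SQLException`. A self-contained sketch of that lookup:

```java
import java.util.Arrays;
import java.util.List;

public class FindColumnSketch {
    // Mirrors SparkResultSet.findColumn(): upper-case the label, look it up in
    // the column-name list, and shift to a 1-based JDBC column index.
    static int findColumn(List<String> columnNames, String columnLabel) {
        return columnNames.indexOf(columnLabel.toUpperCase()) + 1;
    }

    public static void main(String[] args) {
        List<String> cols = Arrays.asList("ID", "URI", "APPCPU");
        System.out.println(findColumn(cols, "uri"));     // 2
        System.out.println(findColumn(cols, "missing")); // 0, not an exception
    }
}
```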
[05/28] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-27 Thread pboado
http://git-wip-us.apache.org/repos/asf/phoenix/blob/678563f5/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseAggregateIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseAggregateIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseAggregateIT.java
new file mode 100644
index 000..5b466df
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseAggregateIT.java
@@ -0,0 +1,1022 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+import static org.apache.phoenix.util.TestUtil.assertResultSet;
+
+import java.io.IOException;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.List;
+import java.util.Properties;
+
+import com.google.common.collect.Lists;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.compile.QueryPlan;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
+import org.apache.phoenix.jdbc.PhoenixStatement;
+import org.apache.phoenix.query.KeyRange;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.schema.AmbiguousColumnException;
+import org.apache.phoenix.schema.types.PChar;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.util.ByteUtil;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryBuilder;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.TestUtil;
+import org.junit.Test;
+
+
+public abstract class BaseAggregateIT extends ParallelStatsDisabledIT {
+
+private static void initData(Connection conn, String tableName) throws SQLException {
+conn.createStatement().execute("create table " + tableName +
+"   (id varchar not null primary key,\n" +
+"uri varchar, appcpu integer)");
+insertRow(conn, tableName, "Report1", 10, 1);
+insertRow(conn, tableName, "Report2", 10, 2);
+insertRow(conn, tableName, "Report3", 30, 3);
+insertRow(conn, tableName, "Report4", 30, 4);
+insertRow(conn, tableName, "SOQL1", 10, 5);
+insertRow(conn, tableName, "SOQL2", 10, 6);
+insertRow(conn, tableName, "SOQL3", 30, 7);
+insertRow(conn, tableName, "SOQL4", 30, 8);
+conn.commit();
+}
+
+private static void insertRow(Connection conn, String tableName, String uri, int appcpu, int id) throws SQLException {
+PreparedStatement statement = conn.prepareStatement("UPSERT INTO " + tableName + "(id, uri, appcpu) values (?,?,?)");
+statement.setString(1, "id" + id);
+statement.setString(2, uri);
+statement.setInt(3, appcpu);
+statement.executeUpdate();
+}
+
+@Test
+public void testDuplicateTrailingAggExpr() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+props.put(QueryServices.FORCE_ROW_KEY_ORDER_ATTRIB, Boolean.FALSE.toString());
+Connection conn = DriverManager.getConnection(getUrl(), props);
+String tableName = generateUniqueName();
+
+conn.createStatement().execute("create table " + tableName +
+"   (nam VARCHAR(20), address VARCHAR(20), id BIGINT "
++ "constraint my_pk primary key (id))");
+PreparedStatement statement = conn.prepareStatement("UPSERT INTO " + tableName + "(nam, address, id) values (?,?,?)");
+statement.setString(1, "pulkit");
+statement.setString(2, "badaun");
+statement.setInt(3, 1);
+statement.executeUpdate();
+conn.commit();
+
+QueryBuilder queryBuilder = new QueryBuilder()
+.setDistinct(true)
+.setSelectExpression("'harshit' as 

[06/28] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-27 Thread pboado
PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using 
phoenix-spark


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/678563f5
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/678563f5
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/678563f5

Branch: refs/heads/4.x-cdh5.15
Commit: 678563f5dc1fbaa37ef890ab135fb301dcf20ad6
Parents: 7f13f87
Author: Thomas D'Silva 
Authored: Fri Oct 19 06:00:01 2018 +0100
Committer: pboado 
Committed: Mon Nov 26 10:52:48 2018 +

--
 .../org/apache/phoenix/end2end/AggregateIT.java |  987 +---
 .../apache/phoenix/end2end/BaseAggregateIT.java | 1022 +
 .../apache/phoenix/end2end/BaseOrderByIT.java   |  940 
 .../org/apache/phoenix/end2end/OrderByIT.java   |  943 ++--
 .../end2end/ParallelStatsDisabledIT.java|   40 +
 .../end2end/salted/BaseSaltedTableIT.java   |  474 
 .../phoenix/end2end/salted/SaltedTableIT.java   |  450 +---
 .../org/apache/phoenix/util/QueryBuilder.java   |  211 
 .../java/org/apache/phoenix/util/QueryUtil.java |   38 +-
 .../index/IndexScrutinyTableOutputTest.java |6 +-
 .../util/PhoenixConfigurationUtilTest.java  |6 +-
 .../org/apache/phoenix/util/QueryUtilTest.java  |   10 +-
 phoenix-spark/pom.xml   |8 +
 .../org/apache/phoenix/spark/AggregateIT.java   |   91 ++
 .../org/apache/phoenix/spark/OrderByIT.java |  460 
 .../org/apache/phoenix/spark/SaltedTableIT.java |   53 +
 .../org/apache/phoenix/spark/SparkUtil.java |   87 ++
 .../apache/phoenix/spark/PhoenixSparkIT.scala   |9 +-
 .../apache/phoenix/spark/SparkResultSet.java| 1056 ++
 .../org/apache/phoenix/spark/PhoenixRDD.scala   |   27 +-
 20 files changed, 4649 insertions(+), 2269 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/678563f5/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java
index 2059311..8916d4d 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AggregateIT.java
@@ -18,506 +18,28 @@
 package org.apache.phoenix.end2end;
 
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.apache.phoenix.util.TestUtil.assertResultSet;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
-import static org.apache.phoenix.util.TestUtil.assertResultSet;
 
-import java.io.IOException;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
-import java.sql.Statement;
-import java.util.List;
 import java.util.Properties;
 
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.phoenix.compile.QueryPlan;
-import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
-import org.apache.phoenix.jdbc.PhoenixStatement;
-import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.schema.AmbiguousColumnException;
-import org.apache.phoenix.schema.types.PChar;
-import org.apache.phoenix.schema.types.PInteger;
-import org.apache.phoenix.util.ByteUtil;
 import org.apache.phoenix.util.PropertiesUtil;
-import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.QueryBuilder;
 import org.apache.phoenix.util.TestUtil;
 import org.junit.Test;
 
+public class AggregateIT extends BaseAggregateIT {
 
-public class AggregateIT extends ParallelStatsDisabledIT {
-private static void initData(Connection conn, String tableName) throws SQLException {
-conn.createStatement().execute("create table " + tableName +
-"   (id varchar not null primary key,\n" +
-"uri varchar, appcpu integer)");
-insertRow(conn, tableName, "Report1", 10, 1);
-insertRow(conn, tableName, "Report2", 10, 2);
-insertRow(conn, tableName, "Report3", 30, 3);
-insertRow(conn, tableName, "Report4", 30, 4);
-insertRow(conn, tableName, "SOQL1", 10, 5);
-insertRow(conn, tableName, "SOQL2", 10, 6);
-insertRow(conn, tableName, "SOQL3", 30, 7);
-insertRow(conn, tableName, "SOQL4", 30, 8);
-conn.commit();
-}
-
-private static void insertRow(Connection conn, String tableName, String uri, int appcpu, int id) throws SQLException {
-PreparedStatement statement = 

[21/28] phoenix git commit: PHOENIX-4955 - PhoenixIndexImportDirectMapper undercounts failed records

2018-11-27 Thread pboado
PHOENIX-4955 - PhoenixIndexImportDirectMapper undercounts failed records


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/dd81989f
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/dd81989f
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/dd81989f

Branch: refs/heads/4.x-cdh5.15
Commit: dd81989fab80cb283678218ada0c0359930731c8
Parents: 590f88b
Author: Geoffrey Jacoby 
Authored: Fri Nov 16 21:57:45 2018 +
Committer: Pedro Boado 
Committed: Tue Nov 27 15:12:05 2018 +

--
 .../mapreduce/index/PhoenixIndexImportDirectMapper.java  | 11 +++
 1 file changed, 7 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/dd81989f/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportDirectMapper.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportDirectMapper.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportDirectMapper.java
index eb4bc0e..e2ac491 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportDirectMapper.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportDirectMapper.java
@@ -68,6 +68,8 @@ public class PhoenixIndexImportDirectMapper extends
 private long batchSizeBytes;
 
 private MutationState mutationState;
+private int currentBatchCount = 0;
+
 
 @Override
 protected void setup(final Context context) throws IOException, 
InterruptedException {
@@ -113,6 +115,7 @@ public class PhoenixIndexImportDirectMapper extends
 throws IOException, InterruptedException {
 
 try {
+currentBatchCount++;
 final List values = record.getValues();
 indxWritable.setValues(values);
 indxWritable.write(this.pStatement);
@@ -125,9 +128,8 @@ public class PhoenixIndexImportDirectMapper extends
 }
 // Keep accumulating Mutations till batch size
 mutationState.join(currentMutationState);
-
 // Write Mutation Batch
-if (context.getCounter(PhoenixJobCounters.INPUT_RECORDS).getValue() % batchSize == 0) {
+if (currentBatchCount % batchSize == 0) {
 writeBatch(mutationState, context);
 mutationState = null;
 }
@@ -136,7 +138,7 @@ public class PhoenixIndexImportDirectMapper extends
 context.progress();
 } catch (SQLException e) {
LOG.error(" Error {}  while read/write of a record ", e.getMessage());
-context.getCounter(PhoenixJobCounters.FAILED_RECORDS).increment(1);
+context.getCounter(PhoenixJobCounters.FAILED_RECORDS).increment(currentBatchCount);
 throw new RuntimeException(e);
 }
 context.getCounter(PhoenixJobCounters.INPUT_RECORDS).increment(1);
@@ -157,6 +159,7 @@ public class PhoenixIndexImportDirectMapper extends
 mutationPair.getSecond().size());
 }
 connection.rollback();
+currentBatchCount = 0;
 }
 
 @Override
@@ -173,7 +176,7 @@ public class PhoenixIndexImportDirectMapper extends
 super.cleanup(context);
 } catch (SQLException e) {
LOG.error(" Error {}  while read/write of a record ", e.getMessage());
-context.getCounter(PhoenixJobCounters.FAILED_RECORDS).increment(1);
+context.getCounter(PhoenixJobCounters.FAILED_RECORDS).increment(currentBatchCount);
 throw new RuntimeException(e);
 } finally {
 if (connection != null) {
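The change above makes FAILED_RECORDS grow by the number of records buffered in the failed batch, not by 1, and resets the counter after each successful flush. A minimal, self-contained sketch of why that matters (illustrative names only, not Phoenix code):

```java
// Sketch: records accumulate into a batch; when a batch write fails, every
// record buffered since the last flush is lost and must be counted as failed.
import java.util.ArrayList;
import java.util.List;

public class BatchFailureAccounting {
    static long failedRecords = 0;
    static int currentBatchCount = 0;
    static final int BATCH_SIZE = 3;
    static final List<Integer> pending = new ArrayList<>();

    static void map(int record, boolean failOnWrite) {
        currentBatchCount++;
        pending.add(record);
        if (currentBatchCount % BATCH_SIZE == 0) {
            if (failOnWrite) {
                // Increment by the whole in-flight batch, as in the patch.
                failedRecords += currentBatchCount;
                throw new RuntimeException("write failed");
            }
            pending.clear();
            currentBatchCount = 0; // reset only after a successful flush
        }
    }

    public static void main(String[] args) {
        map(1, false);
        map(2, false);
        try {
            map(3, true); // the flush of records 1..3 fails
        } catch (RuntimeException ignored) { }
        System.out.println(failedRecords); // 3, not 1
    }
}
```

Incrementing by 1 (the old behavior) would report a single failure even though three records never reached the index table.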



[13/28] phoenix git commit: PHOENIX-5010 Don't build client guidepost cache when phoenix.stats.collection.enabled is disabled

2018-11-27 Thread pboado
PHOENIX-5010 Don't build client guidepost cache when phoenix.stats.collection.enabled is disabled


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/21c3a7c2
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/21c3a7c2
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/21c3a7c2

Branch: refs/heads/4.x-cdh5.15
Commit: 21c3a7c2e9cd4d4f59623dd987c6602304ac9335
Parents: a0e9859
Author: Ankit Singhal 
Authored: Tue Nov 13 19:36:26 2018 +
Committer: Pedro Boado 
Committed: Tue Nov 27 15:11:41 2018 +

--
 .../org/apache/phoenix/query/GuidePostsCache.java | 18 +-
 1 file changed, 17 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/21c3a7c2/phoenix-core/src/main/java/org/apache/phoenix/query/GuidePostsCache.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/GuidePostsCache.java 
b/phoenix-core/src/main/java/org/apache/phoenix/query/GuidePostsCache.java
index d27be1b..1d9fa36 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/GuidePostsCache.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/GuidePostsCache.java
@@ -16,6 +16,10 @@
  */
 package org.apache.phoenix.query;
 
+import static org.apache.phoenix.query.QueryServices.STATS_COLLECTION_ENABLED;
+import static org.apache.phoenix.query.QueryServices.STATS_ENABLED_ATTRIB;
+import static org.apache.phoenix.query.QueryServicesOptions.DEFAULT_STATS_COLLECTION_ENABLED;
+
 import java.io.IOException;
 import java.util.List;
 import java.util.Objects;
@@ -66,6 +70,8 @@ public class GuidePostsCache {
 final long maxTableStatsCacheSize = config.getLong(
 QueryServices.STATS_MAX_CACHE_SIZE,
 QueryServicesOptions.DEFAULT_STATS_MAX_CACHE_SIZE);
+   final boolean isStatsEnabled = config.getBoolean(STATS_COLLECTION_ENABLED, DEFAULT_STATS_COLLECTION_ENABLED)
+   && config.getBoolean(STATS_ENABLED_ATTRIB, true);
 cache = CacheBuilder.newBuilder()
// Expire entries a given amount of time after they were written
 .expireAfterWrite(statsUpdateFrequency, TimeUnit.MILLISECONDS)
@@ -80,7 +86,7 @@ public class GuidePostsCache {
 // Log removals at TRACE for debugging
 .removalListener(new PhoenixStatsCacheRemovalListener())
 // Automatically load the cache when entries are missing
-.build(new StatsLoader());
+.build(isStatsEnabled ? new StatsLoader() : new EmptyStatsLoader());
 }
 
 /**
@@ -129,6 +135,16 @@ public class GuidePostsCache {
 }
 
 /**
+ * Empty stats loader if stats are disabled
+ */
+   protected class EmptyStatsLoader extends CacheLoader {
+   @Override
+   public GuidePostsInfo load(GuidePostsKey statsKey) throws Exception {
+   return GuidePostsInfo.NO_GUIDEPOST;
+   }
+   }
+
+/**
 * Returns the underlying cache. Try to use the provided methods instead of accessing the cache
  * directly.
  */



[11/28] phoenix git commit: PHOENIX-5012 Don't derive IndexToolIT from ParallelStatsEnabled

2018-11-27 Thread pboado
PHOENIX-5012 Don't derive IndexToolIT from ParallelStatsEnabled


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/b296ddc1
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/b296ddc1
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/b296ddc1

Branch: refs/heads/4.x-cdh5.15
Commit: b296ddc19a1533e105e01597a3b761a37922d261
Parents: 1da0ad7
Author: James Taylor 
Authored: Sat Nov 10 19:04:48 2018 +
Committer: Pedro Boado 
Committed: Tue Nov 27 15:11:36 2018 +

--
 .../src/it/java/org/apache/phoenix/end2end/IndexToolIT.java  | 8 +---
 1 file changed, 5 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/b296ddc1/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
index c99f145..e096bb5 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
@@ -58,7 +58,6 @@ import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.TestUtil;
 import org.junit.BeforeClass;
 import org.junit.Test;
-import org.junit.experimental.categories.Category;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
 import org.junit.runners.Parameterized.Parameters;
@@ -67,8 +66,7 @@ import com.google.common.collect.Lists;
 import com.google.common.collect.Maps;
 
 @RunWith(Parameterized.class)
-@Category(NeedsOwnMiniClusterTest.class)
-public class IndexToolIT extends ParallelStatsEnabledIT {
+public class IndexToolIT extends BaseUniqueNamesOwnClusterIT {
 
 private final boolean localIndex;
 private final boolean transactional;
@@ -99,9 +97,13 @@ public class IndexToolIT extends ParallelStatsEnabledIT {
 @BeforeClass
 public static void setup() throws Exception {
 Map serverProps = Maps.newHashMapWithExpectedSize(2);
+serverProps.put(QueryServices.STATS_GUIDEPOST_WIDTH_BYTES_ATTRIB, Long.toString(20));
+serverProps.put(QueryServices.MAX_SERVER_METADATA_CACHE_TIME_TO_LIVE_MS_ATTRIB, Long.toString(5));
 serverProps.put(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB,
 QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
 Map clientProps = Maps.newHashMapWithExpectedSize(2);
+clientProps.put(QueryServices.USE_STATS_FOR_PARALLELIZATION, Boolean.toString(true));
+clientProps.put(QueryServices.STATS_UPDATE_FREQ_MS_ATTRIB, Long.toString(5));
clientProps.put(QueryServices.TRANSACTIONS_ENABLED, Boolean.TRUE.toString());
clientProps.put(QueryServices.FORCE_ROW_KEY_ORDER_ATTRIB, Boolean.TRUE.toString());
 setUpTestDriver(new ReadOnlyProps(serverProps.entrySet().iterator()),



[04/28] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-27 Thread pboado
http://git-wip-us.apache.org/repos/asf/phoenix/blob/678563f5/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseOrderByIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseOrderByIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseOrderByIT.java
new file mode 100644
index 000..31bf050
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseOrderByIT.java
@@ -0,0 +1,940 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.TestUtil.ROW1;
+import static org.apache.phoenix.util.TestUtil.ROW2;
+import static org.apache.phoenix.util.TestUtil.ROW3;
+import static org.apache.phoenix.util.TestUtil.ROW4;
+import static org.apache.phoenix.util.TestUtil.ROW5;
+import static org.apache.phoenix.util.TestUtil.ROW6;
+import static org.apache.phoenix.util.TestUtil.ROW7;
+import static org.apache.phoenix.util.TestUtil.ROW8;
+import static org.apache.phoenix.util.TestUtil.ROW9;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.apache.phoenix.util.TestUtil.assertResultSet;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.util.Properties;
+
+import com.google.common.collect.Lists;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryBuilder;
+import org.junit.Test;
+
+
+public abstract class BaseOrderByIT extends ParallelStatsDisabledIT {
+
+@Test
+public void testMultiOrderByExpr() throws Exception {
+String tenantId = getOrganizationId();
+String tableName = initATableValues(tenantId, getDefaultSplits(tenantId), getUrl());
+QueryBuilder queryBuilder = new QueryBuilder()
+.setSelectColumns(
+Lists.newArrayList("ENTITY_ID", "B_STRING"))
+.setFullTableName(tableName)
+.setOrderByClause("B_STRING, ENTITY_ID");
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+ResultSet rs = executeQuery(conn, queryBuilder);
+assertTrue (rs.next());
+assertEquals(ROW1,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW4,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW7,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW2,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW5,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW8,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW3,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW6,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW9,rs.getString(1));
+
+assertFalse(rs.next());
+}
+}
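The tests above build their SQL through a `QueryBuilder` instead of string literals, so the same test body can run against both Phoenix JDBC and the Spark integration. As a rough sketch of what such a builder might emit (the real `org.apache.phoenix.util.QueryBuilder` API may differ; this is a minimal stand-in):

```java
// Minimal fluent builder producing a SELECT with an ORDER BY clause,
// mirroring the setSelectColumns/setFullTableName/setOrderByClause calls above.
import java.util.List;

public class MiniQueryBuilder {
    private List<String> selectColumns;
    private String fullTableName;
    private String orderByClause;

    MiniQueryBuilder setSelectColumns(List<String> cols) { this.selectColumns = cols; return this; }
    MiniQueryBuilder setFullTableName(String t) { this.fullTableName = t; return this; }
    MiniQueryBuilder setOrderByClause(String o) { this.orderByClause = o; return this; }

    String build() {
        StringBuilder sb = new StringBuilder("SELECT ")
                .append(String.join(", ", selectColumns))
                .append(" FROM ").append(fullTableName);
        if (orderByClause != null) {
            sb.append(" ORDER BY ").append(orderByClause);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String sql = new MiniQueryBuilder()
                .setSelectColumns(List.of("ENTITY_ID", "B_STRING"))
                .setFullTableName("T")
                .setOrderByClause("B_STRING, ENTITY_ID")
                .build();
        System.out.println(sql);
    }
}
```

Keeping the query as structured parts rather than one string is what lets a Spark-side `executeQuery` rewrite or re-plan the same logical query.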
+
+
+@Test
+public void testDescMultiOrderByExpr() throws Exception {
+String tenantId = getOrganizationId();
+String tableName = initATableValues(tenantId, getDefaultSplits(tenantId), getUrl());
+QueryBuilder queryBuilder = new QueryBuilder()
+.setSelectColumns(
+Lists.newArrayList("ENTITY_ID", "B_STRING"))
+.setFullTableName(tableName)
+.setOrderByClause("B_STRING || ENTITY_ID DESC");
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+ResultSet rs = executeQuery(conn, queryBuilder);
+assertTrue (rs.next());
+assertEquals(ROW9,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW6,rs.getString(1));
+assertTrue (rs.next());
+assertEquals(ROW3,rs.getString(1));
+assertTrue 

[02/28] phoenix git commit: PHOENIX-4981 Add tests for ORDER BY, GROUP BY and salted tables using phoenix-spark

2018-11-27 Thread pboado
http://git-wip-us.apache.org/repos/asf/phoenix/blob/678563f5/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableIT.java
index c9168f1..69c9869 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableIT.java
@@ -37,104 +37,18 @@ import org.apache.phoenix.util.QueryUtil;
 import org.apache.phoenix.util.SchemaUtil;
 import org.junit.Test;
 
-
 /**
  * Tests for table with transparent salting.
  */
 
-public class SaltedTableIT extends ParallelStatsDisabledIT {
-
-   private static String getUniqueTableName() {
-   return SchemaUtil.getTableName(generateUniqueName(), generateUniqueName());
-   }
-   
-private static String initTableValues(byte[][] splits) throws Exception {
-   String tableName = getUniqueTableName();
-Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-Connection conn = DriverManager.getConnection(getUrl(), props);
-
-// Rows we inserted:
-// 1ab123abc111
-// 1abc456abc111
-// 1de123abc111
-// 2abc123def222 
-// 3abc123ghi333
-// 4abc123jkl444
-try {
-// Upsert with no column specifies.
-ensureTableCreated(getUrl(), tableName, TABLE_WITH_SALTING, splits, null, null);
-String query = "UPSERT INTO " + tableName + " VALUES(?,?,?,?,?)";
-PreparedStatement stmt = conn.prepareStatement(query);
-stmt.setInt(1, 1);
-stmt.setString(2, "ab");
-stmt.setString(3, "123");
-stmt.setString(4, "abc");
-stmt.setInt(5, 111);
-stmt.execute();
-conn.commit();
-
-stmt.setInt(1, 1);
-stmt.setString(2, "abc");
-stmt.setString(3, "456");
-stmt.setString(4, "abc");
-stmt.setInt(5, 111);
-stmt.execute();
-conn.commit();
-
-// Test upsert when statement explicitly specifies the columns to upsert into.
-query = "UPSERT INTO " + tableName +
-" (a_integer, a_string, a_id, b_string, b_integer) " + 
-" VALUES(?,?,?,?,?)";
-stmt = conn.prepareStatement(query);
-
-stmt.setInt(1, 1);
-stmt.setString(2, "de");
-stmt.setString(3, "123");
-stmt.setString(4, "abc");
-stmt.setInt(5, 111);
-stmt.execute();
-conn.commit();
-
-stmt.setInt(1, 2);
-stmt.setString(2, "abc");
-stmt.setString(3, "123");
-stmt.setString(4, "def");
-stmt.setInt(5, 222);
-stmt.execute();
-conn.commit();
-
-// Test upsert when order of column is shuffled.
-query = "UPSERT INTO " + tableName +
-" (a_string, a_integer, a_id, b_string, b_integer) " + 
-" VALUES(?,?,?,?,?)";
-stmt = conn.prepareStatement(query);
-stmt.setString(1, "abc");
-stmt.setInt(2, 3);
-stmt.setString(3, "123");
-stmt.setString(4, "ghi");
-stmt.setInt(5, 333);
-stmt.execute();
-conn.commit();
-
-stmt.setString(1, "abc");
-stmt.setInt(2, 4);
-stmt.setString(3, "123");
-stmt.setString(4, "jkl");
-stmt.setInt(5, 444);
-stmt.execute();
-conn.commit();
-} finally {
-conn.close();
-}
-return tableName;
-}
+public class SaltedTableIT extends BaseSaltedTableIT {
 
 @Test
 public void testTableWithInvalidBucketNumber() throws Exception {
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
 Connection conn = DriverManager.getConnection(getUrl(), props);
 try {
-String query = "create table " + getUniqueTableName() + " (a_integer integer not null CONSTRAINT pk PRIMARY KEY (a_integer)) SALT_BUCKETS = 257";
+String query = "create table " + generateUniqueName() + " (a_integer integer not null CONSTRAINT pk PRIMARY KEY (a_integer)) SALT_BUCKETS = 257";
 PreparedStatement stmt = conn.prepareStatement(query);
 stmt.execute();
 fail("Should have caught exception");
@@ -148,370 +62,12 @@ public class SaltedTableIT extends ParallelStatsDisabledIT {
 @Test
 public void testTableWithSplit() throws Exception {
 try {
-createTestTable(getUrl(), "create table " + 

[08/28] phoenix git commit: PHOENIX-4996: Refactor PTableImpl to use Builder Pattern

2018-11-27 Thread pboado
http://git-wip-us.apache.org/repos/asf/phoenix/blob/1767244a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
index 9f06e04..7939b97 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
@@ -36,6 +36,7 @@ import java.util.Map.Entry;
 
 import javax.annotation.Nonnull;
 
+import com.google.common.annotations.VisibleForTesting;
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.client.Delete;
@@ -69,7 +70,6 @@ import org.apache.phoenix.schema.types.PChar;
 import org.apache.phoenix.schema.types.PDataType;
 import org.apache.phoenix.schema.types.PDouble;
 import org.apache.phoenix.schema.types.PFloat;
-import org.apache.phoenix.schema.types.PLong;
 import org.apache.phoenix.schema.types.PVarchar;
 import org.apache.phoenix.transaction.TransactionFactory;
 import org.apache.phoenix.util.ByteUtil;
@@ -102,164 +102,661 @@ import com.google.common.collect.Maps;
 public class PTableImpl implements PTable {
 private static final Integer NO_SALTING = -1;
 
-private PTableKey key;
-private PName name;
-private PName schemaName = PName.EMPTY_NAME;
-private PName tableName = PName.EMPTY_NAME;
-private PName tenantId;
-private PTableType type;
-private PIndexState state;
-private long sequenceNumber;
-private long timeStamp;
-private long indexDisableTimestamp;
+private IndexMaintainer indexMaintainer;
+private ImmutableBytesWritable indexMaintainersPtr;
+
+private final PTableKey key;
+private final PName name;
+private final PName schemaName;
+private final PName tableName;
+private final PName tenantId;
+private final PTableType type;
+private final PIndexState state;
+private final long sequenceNumber;
+private final long timeStamp;
+private final long indexDisableTimestamp;
 // Have MultiMap for String->PColumn (may need family qualifier)
-private List pkColumns;
-private List allColumns;
+private final List pkColumns;
+private final List allColumns;
// columns that were inherited from a parent table but that were dropped in the view
-private List excludedColumns;
-private List families;
-private Map familyByBytes;
-private Map familyByString;
-private ListMultimap columnsByName;
-private Map kvColumnsByQualifiers;
-private PName pkName;
-private Integer bucketNum;
-private RowKeySchema rowKeySchema;
+private final List excludedColumns;
+private final List families;
+private final Map familyByBytes;
+private final Map familyByString;
+private final ListMultimap columnsByName;
+private final Map kvColumnsByQualifiers;
+private final PName pkName;
+private final Integer bucketNum;
+private final RowKeySchema rowKeySchema;
 // Indexes associated with this table.
-private List indexes;
+private final List indexes;
 // Data table name that the index is created on.
-private PName parentName;
-private PName parentSchemaName;
-private PName parentTableName;
-private List physicalNames;
-private boolean isImmutableRows;
-private IndexMaintainer indexMaintainer;
-private ImmutableBytesWritable indexMaintainersPtr;
-private PName defaultFamilyName;
-private String viewStatement;
-private boolean disableWAL;
-private boolean multiTenant;
-private boolean storeNulls;
-private TransactionFactory.Provider transactionProvider;
-private ViewType viewType;
-private PDataType viewIndexType;
-private Long viewIndexId;
-private int estimatedSize;
-private IndexType indexType;
-private int baseColumnCount;
-private boolean rowKeyOrderOptimizable; // TODO: remove when required that tables have been upgrade for PHOENIX-2067
-private boolean hasColumnsRequiringUpgrade; // TODO: remove when required that tables have been upgrade for PHOENIX-2067
-private int rowTimestampColPos;
-private long updateCacheFrequency;
-private boolean isNamespaceMapped;
-private String autoPartitionSeqName;
-private boolean isAppendOnlySchema;
-private ImmutableStorageScheme immutableStorageScheme;
-private QualifierEncodingScheme qualifierEncodingScheme;
-private EncodedCQCounter encodedCQCounter;
-private Boolean useStatsForParallelization;
-
-public PTableImpl() {
-this.indexes = Collections.emptyList();
-this.physicalNames = Collections.emptyList();
-this.rowKeySchema = RowKeySchema.EMPTY_SCHEMA;
-}
-
-// Constructor used at table creation time
-public PTableImpl(PName tenantId, String 

[22/28] phoenix git commit: PHOENIX-5005 Server-side delete / upsert-select potentially blocked after a split

2018-11-27 Thread pboado
PHOENIX-5005 Server-side delete / upsert-select potentially blocked after a split


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/b20b21d1
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/b20b21d1
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/b20b21d1

Branch: refs/heads/4.x-cdh5.15
Commit: b20b21d101bf95e42c21350b778ebd5352be37f8
Parents: dd81989
Author: Vincent Poon 
Authored: Thu Nov 8 23:38:20 2018 +
Committer: Pedro Boado 
Committed: Tue Nov 27 15:12:08 2018 +

--
 .../UngroupedAggregateRegionObserver.java   | 43 
 1 file changed, 26 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/b20b21d1/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index 73386a2..26e338f 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -262,7 +262,7 @@ public class UngroupedAggregateRegionObserver extends BaseScannerRegionObserver
  // flush happen which decrease the memstore size and then writes allowed on the region.
  for (int i = 0; blockingMemstoreSize > 0 && region.getMemstoreSize() > blockingMemstoreSize && i < 30; i++) {
   try {
-  checkForRegionClosing();
+  checkForRegionClosingOrSplitting();
   Thread.sleep(100);
   } catch (InterruptedException e) {
   Thread.currentThread().interrupt();
@@ -311,7 +311,7 @@ public class UngroupedAggregateRegionObserver extends BaseScannerRegionObserver
 * a high chance that flush might not proceed and memstore won't be freed up.
  * @throws IOException
  */
-private void checkForRegionClosing() throws IOException {
+private void checkForRegionClosingOrSplitting() throws IOException {
 synchronized (lock) {
 if(isRegionClosingOrSplitting) {
 lock.notifyAll();
@@ -1333,13 +1333,31 @@ public class UngroupedAggregateRegionObserver extends BaseScannerRegionObserver
 @Override
public void preSplit(ObserverContext c, byte[] splitRow)
 throws IOException {
-// Don't allow splitting if operations need read and write to same region are going on in the
-// the coprocessors to avoid dead lock scenario. See PHOENIX-3111.
+waitForScansToFinish(c);
+}
+
+// Don't allow splitting/closing if operations need read and write to same region are going on in the
+// the coprocessors to avoid dead lock scenario. See PHOENIX-3111.
+private void waitForScansToFinish(ObserverContext c) throws IOException {
+int maxWaitTime = c.getEnvironment().getConfiguration().getInt(HConstants.HBASE_CLIENT_OPERATION_TIMEOUT,
+HConstants.DEFAULT_HBASE_CLIENT_OPERATION_TIMEOUT);
+long start = EnvironmentEdgeManager.currentTimeMillis();
 synchronized (lock) {
 isRegionClosingOrSplitting = true;
-if (scansReferenceCount > 0) {
-throw new IOException("Operations like local index building/delete/upsert select"
-+ " might be going on so not allowing to split.");
+while (scansReferenceCount > 0) {
+try {
+lock.wait(1000);
+if (EnvironmentEdgeManager.currentTimeMillis() - start >= maxWaitTime) {
+isRegionClosingOrSplitting = false; // must reset in case split is not retried
+throw new IOException(String.format(
+"Operations like local index building/delete/upsert select"
++ " might be going on so not allowing to split/close. scansReferenceCount=%s region=%s",
+scansReferenceCount,
+c.getEnvironment().getRegionInfo().getRegionNameAsString()));
+}
+} catch (InterruptedException e) {
+Thread.currentThread().interrupt();
+}
 }
 }
 }
@@ -1360,16 +1378,7 @@ public class UngroupedAggregateRegionObserver extends BaseScannerRegionObserver
 @Override
public void preClose(ObserverContext c, boolean abortRequested)
 throws IOException {
-synchronized (lock) {
-
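The fix replaces the old behavior of immediately throwing an IOException in `preSplit`/`preClose` with a bounded wait for in-flight scans to drain. A simplified, self-contained sketch of that wait loop (names are illustrative, not Phoenix or HBase APIs):

```java
// Sketch: block a region split while scans are in flight, polling under the
// lock, but give up after a deadline instead of failing on the first check.
public class BoundedSplitWait {
    final Object lock = new Object();
    int scansReferenceCount = 0;
    boolean closingOrSplitting = false;

    void waitForScansToFinish(long maxWaitMillis) throws java.io.IOException {
        long start = System.currentTimeMillis();
        synchronized (lock) {
            closingOrSplitting = true;
            while (scansReferenceCount > 0) {
                try {
                    lock.wait(100); // re-check periodically; scan teardown notifies
                    if (System.currentTimeMillis() - start >= maxWaitMillis) {
                        closingOrSplitting = false; // reset so a retried split can proceed
                        throw new java.io.IOException("scans still running, refusing to split/close");
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        BoundedSplitWait w = new BoundedSplitWait();
        w.waitForScansToFinish(200); // no scans in flight: returns immediately
        System.out.println("split allowed");
    }
}
```

Resetting the flag before throwing matters: without it, a failed split attempt would leave the region permanently refusing new scans, which is the deadlock scenario PHOENIX-3111 and this patch guard against.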

[28/28] phoenix git commit: PHOENIX-5026; another test addendum.

2018-11-27 Thread pboado
PHOENIX-5026; another test addendum.


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/50555125
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/50555125
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/50555125

Branch: refs/heads/4.x-cdh5.15
Commit: 5055512515c7c5cf3dc359a50c0b5bb07398f4aa
Parents: 027d21e
Author: Lars Hofhansl 
Authored: Sun Nov 25 00:23:38 2018 +
Committer: Pedro Boado 
Committed: Tue Nov 27 15:12:24 2018 +

--
 .../phoenix/end2end/UpsertSelectAutoCommitIT.java | 14 ++
 1 file changed, 6 insertions(+), 8 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/50555125/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
index 6fad376..4078578 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectAutoCommitIT.java
@@ -175,16 +175,16 @@ public class UpsertSelectAutoCommitIT extends ParallelStatsDisabledIT {
 props.setProperty(QueryServices.ENABLE_SERVER_SIDE_MUTATIONS, allowServerSideMutations);
 Connection conn = DriverManager.getConnection(getUrl(), props);
 conn.setAutoCommit(true);
-conn.createStatement().execute("CREATE SEQUENCE keys CACHE 1000");
 String tableName = generateUniqueName();
+conn.createStatement().execute("CREATE SEQUENCE " + tableName + "_seq CACHE 1000");
 conn.createStatement().execute("CREATE TABLE " + tableName
+ " (pk INTEGER PRIMARY KEY, val INTEGER) UPDATE_CACHE_FREQUENCY=360");
 
 conn.createStatement().execute(
-"UPSERT INTO " + tableName + " VALUES (NEXT VALUE FOR keys,1)");
+"UPSERT INTO " + tableName + " VALUES (NEXT VALUE FOR "+ tableName + "_seq,1)");
 PreparedStatement stmt =
 conn.prepareStatement("UPSERT INTO " + tableName
+ " SELECT NEXT VALUE FOR keys, val FROM " + tableName);
++ " SELECT NEXT VALUE FOR "+ tableName + "_seq, val FROM " + tableName);
 HBaseAdmin admin =
driver.getConnectionQueryServices(getUrl(), TestUtil.TEST_PROPERTIES).getAdmin();
 for (int i=0; i<12; i++) {
@@ -192,8 +192,6 @@ public class UpsertSelectAutoCommitIT extends ParallelStatsDisabledIT {
 int upsertCount = stmt.executeUpdate();
 assertEquals((int)Math.pow(2, i), upsertCount);
 }
-// cleanup after ourselves
-conn.createStatement().execute("DROP SEQUENCE keys");
 admin.close();
 conn.close();
 }
@@ -234,17 +232,17 @@ public class UpsertSelectAutoCommitIT extends ParallelStatsDisabledIT {
 conn.setAutoCommit(false);
 String tableName = generateUniqueName();
 
-conn.createStatement().execute("CREATE SEQUENCE "+ tableName);
+conn.createStatement().execute("CREATE SEQUENCE "+ tableName + "_seq");
 conn.createStatement().execute(
"CREATE TABLE " + tableName + " (pk INTEGER PRIMARY KEY, val INTEGER)");
 
 conn.createStatement().execute(
-"UPSERT INTO " + tableName + " VALUES (NEXT VALUE FOR keys,1)");
+"UPSERT INTO " + tableName + " VALUES (NEXT VALUE FOR "+ tableName + "_seq, 1)");
 conn.commit();
 for (int i=0; i<6; i++) {
 Statement stmt = conn.createStatement();
 int upsertCount = stmt.executeUpdate(
-"UPSERT INTO " + tableName + " SELECT NEXT VALUE FOR keys, 
val FROM "
+"UPSERT INTO " + tableName + " SELECT NEXT VALUE FOR "+ 
tableName + "_seq, val FROM "
 + tableName);
 conn.commit();
 assertEquals((int)Math.pow(2, i), upsertCount);
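The diff above replaces the fixed, shared sequence name "keys" with a name derived from the test's unique table name, so integration tests running in parallel no longer collide on (or need to clean up) shared sequence state. A minimal sketch of that naming pattern follows; the generateUniqueName() body here is a stand-in for illustration only, not Phoenix's actual BaseTest helper:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class UniqueSeqNameSketch {
    private static final AtomicInteger COUNTER = new AtomicInteger();

    // Stand-in for Phoenix's BaseTest.generateUniqueName();
    // the real helper's implementation differs.
    static String generateUniqueName() {
        return "T" + System.nanoTime() + "_" + COUNTER.incrementAndGet();
    }

    public static void main(String[] args) {
        String tableName = generateUniqueName();
        // Each test derives its own sequence name instead of sharing "keys",
        // so no cross-test interference and no explicit DROP SEQUENCE cleanup.
        String seqName = tableName + "_seq";
        System.out.println("CREATE SEQUENCE " + seqName + " CACHE 1000");
        System.out.println("UPSERT INTO " + tableName
                + " VALUES (NEXT VALUE FOR " + seqName + ",1)");
    }
}
```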



Build failed in Jenkins: Phoenix Compile Compatibility with HBase #830

2018-11-27 Thread Apache Jenkins Server
See 


--
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on H25 (ubuntu xenial) in workspace 

[Phoenix_Compile_Compat_wHBase] $ /bin/bash /tmp/jenkins6239224626389776551.sh
core file size  (blocks, -c) 0
data seg size   (kbytes, -d) unlimited
scheduling priority (-e) 0
file size   (blocks, -f) unlimited
pending signals (-i) 386407
max locked memory   (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files  (-n) 6
pipe size(512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority  (-r) 0
stack size  (kbytes, -s) 8192
cpu time   (seconds, -t) unlimited
max user processes  (-u) 10240
virtual memory  (kbytes, -v) unlimited
file locks  (-x) unlimited
core id : 0
core id : 1
core id : 2
core id : 3
core id : 4
core id : 5
physical id : 0
physical id : 1
MemTotal:   98957636 kB
MemFree:4294 kB
Filesystem  Size  Used Avail Use% Mounted on
udev 48G 0   48G   0% /dev
tmpfs   9.5G  338M  9.2G   4% /run
/dev/sda3   3.6T  219G  3.2T   7% /
tmpfs48G  644K   48G   1% /dev/shm
tmpfs   5.0M 0  5.0M   0% /run/lock
tmpfs48G 0   48G   0% /sys/fs/cgroup
/dev/sda2   473M  100M  349M  23% /boot
/dev/loop0   88M   88M 0 100% /snap/core/5662
/dev/loop1   28M   28M 0 100% /snap/snapcraft/1871
/dev/loop3   88M   88M 0 100% /snap/core/5742
tmpfs   9.5G  4.0K  9.5G   1% /run/user/910
/dev/loop6   52M   52M 0 100% /snap/lxd/9564
/dev/loop4   52M   52M 0 100% /snap/lxd/9600
/dev/loop5   89M   89M 0 100% /snap/core/5897
/dev/loop7   52M   52M 0 100% /snap/lxd/9664
apache-maven-2.2.1
apache-maven-3.0.4
apache-maven-3.0.5
apache-maven-3.1.1
apache-maven-3.2.1
apache-maven-3.2.5
apache-maven-3.3.3
apache-maven-3.3.9
apache-maven-3.5.0
apache-maven-3.5.2
apache-maven-3.5.4
latest
latest2
latest3


===
Verifying compile level compatibility with HBase 0.98 with Phoenix 
4.x-HBase-0.98
===

Cloning into 'hbase'...
Switched to a new branch '0.98'
Branch 0.98 set up to track remote branch 0.98 from origin.
[ERROR] Plugin org.codehaus.mojo:findbugs-maven-plugin:2.5.2 or one of its 
dependencies could not be resolved: Failed to read artifact descriptor for 
org.codehaus.mojo:findbugs-maven-plugin:jar:2.5.2: Could not transfer artifact 
org.codehaus.mojo:findbugs-maven-plugin:pom:2.5.2 from/to central 
(https://repo.maven.apache.org/maven2): Received fatal alert: protocol_version 
-> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/PluginResolutionException
Build step 'Execute shell' marked build as failure
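The "Received fatal alert: protocol_version" failure above is the usual symptom of an older JDK (7 or earlier) offering only TLS 1.0 to repo.maven.apache.org, which requires TLS 1.2 or newer. Assuming the build is pinned to such a JDK, a common workaround is to force the protocol via MAVEN_OPTS; this is a sketch of the workaround, not a change present in the build configuration above:

```shell
# Force Maven's HTTPS connections to use TLS 1.2 on old JDKs
# (works around "Received fatal alert: protocol_version" from Maven Central)
export MAVEN_OPTS="-Dhttps.protocols=TLSv1.2"
mvn clean install
```

Upgrading the build to JDK 8+, where TLS 1.2 is enabled by default, removes the need for this flag.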