Build failed in Jenkins: Phoenix-4.x-HBase-1.3 #167

2018-07-19 Thread Apache Jenkins Server
See 

--
[...truncated 107.12 KB...]
[INFO] Running org.apache.phoenix.end2end.join.SubqueryIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 362.649 s - in org.apache.phoenix.end2end.join.SortMergeJoinNoIndexIT
[INFO] Running org.apache.phoenix.end2end.join.SubqueryUsingSortMergeJoinIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 630.671 s - in org.apache.phoenix.end2end.join.SortMergeJoinGlobalIndexIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 27.239 s - in org.apache.phoenix.end2end.salted.SaltedTableIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.932 s - in org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.393 s - in org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
[INFO] Running org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.466 s - in org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Running org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 382.397 s - in org.apache.phoenix.end2end.join.SubqueryIT
[INFO] Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 271.285 s - in org.apache.phoenix.end2end.join.SubqueryUsingSortMergeJoinIT
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 75.368 s - in org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Running org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.124 s - in org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Running org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Running org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Running org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.474 s - in org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Running org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 40.961 s - in org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Running org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.317 s - in org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Running org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 104.366 s - in org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Running org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 818.226 s - in org.apache.phoenix.end2end.join.SortMergeJoinLocalIndexIT
[INFO] Running org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 29.5 s - in org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 109.908 s - in org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 412.4 s - in org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 445.748 s - in org.apache.phoenix.tx.TxCheckpointIT
[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Errors: 
[ERROR]   HashJoinMoreIT.testBug2961:898 » IllegalArgument 6 > 5
[INFO] 
[ERROR] Tests run: 3347, Failures: 0, Errors: 1, Skipped: 1
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test (HBaseManagedTimeTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] 
[INFO] Results:
[INFO] 
[INFO] Tests run: 0, Failures: 0, Errors: 0, Skipped: 0
[INFO] 
[INFO] 
[INFO] --- maven-failsafe-plugin:2.20:integration-test (NeedTheirOwnClusterTests) @ phoenix-core ---
[INFO] 
[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running org.apache.phoenix.end2end.ChangePermissionsIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.98 s - in org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT
[INFO] Running 

Build failed in Jenkins: Phoenix | Master #2062

2018-07-19 Thread Apache Jenkins Server
See 

--
[...truncated 86.79 KB...]
[INFO] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.566 s - in org.apache.phoenix.end2end.LastValueFunctionIT
[INFO] Running org.apache.phoenix.end2end.LikeExpressionIT
[INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 78.276 s - in org.apache.phoenix.end2end.LastValuesFunctionIT
[INFO] Running org.apache.phoenix.end2end.LnLogFunctionEnd2EndIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.953 s - in org.apache.phoenix.end2end.LnLogFunctionEnd2EndIT
[INFO] Running org.apache.phoenix.end2end.MD5FunctionIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.186 s - in org.apache.phoenix.end2end.MD5FunctionIT
[INFO] Running org.apache.phoenix.end2end.MapReduceIT
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 71.24 s - in org.apache.phoenix.end2end.LikeExpressionIT
[INFO] Running org.apache.phoenix.end2end.MappingTableDataTypeIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.451 s - in org.apache.phoenix.end2end.MappingTableDataTypeIT
[INFO] Running org.apache.phoenix.end2end.MetaDataEndPointIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.188 s - in org.apache.phoenix.end2end.MapReduceIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.841 s - in org.apache.phoenix.end2end.MetaDataEndPointIT
[INFO] Running org.apache.phoenix.end2end.MinMaxAggregateFunctionIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.423 s - in org.apache.phoenix.end2end.MinMaxAggregateFunctionIT
[INFO] Running org.apache.phoenix.end2end.MutationStateIT
[INFO] Running org.apache.phoenix.end2end.ModulusExpressionIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.491 s - in org.apache.phoenix.end2end.MutationStateIT
[INFO] Running org.apache.phoenix.end2end.NamespaceSchemaMappingIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.226 s - in org.apache.phoenix.end2end.NamespaceSchemaMappingIT
[INFO] Running org.apache.phoenix.end2end.NativeHBaseTypesIT
[INFO] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.892 s - in org.apache.phoenix.end2end.ModulusExpressionIT
[INFO] Running org.apache.phoenix.end2end.NotQueryWithGlobalImmutableIndexesIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 77.788 s - in org.apache.phoenix.end2end.NativeHBaseTypesIT
[INFO] Running org.apache.phoenix.end2end.NotQueryWithLocalImmutableIndexesIT
[INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1,253.554 s - in org.apache.phoenix.end2end.InListIT
[INFO] Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 329.528 s - in org.apache.phoenix.end2end.NotQueryWithGlobalImmutableIndexesIT
[INFO] Running org.apache.phoenix.end2end.NthValueFunctionIT
[INFO] Running org.apache.phoenix.end2end.NullIT
[INFO] Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 59.975 s - in org.apache.phoenix.end2end.NthValueFunctionIT
[INFO] Running org.apache.phoenix.end2end.NumericArithmeticIT
[INFO] Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 390.009 s - in org.apache.phoenix.end2end.NotQueryWithLocalImmutableIndexesIT
[INFO] Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 91.801 s - in org.apache.phoenix.end2end.NumericArithmeticIT
[INFO] Running org.apache.phoenix.end2end.OctetLengthFunctionEnd2EndIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.416 s - in org.apache.phoenix.end2end.OctetLengthFunctionEnd2EndIT
[INFO] Running org.apache.phoenix.end2end.OrderByIT
[INFO] Running org.apache.phoenix.end2end.OnDuplicateKeyIT
[INFO] Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 203.357 s - in org.apache.phoenix.end2end.OrderByIT
[INFO] Running org.apache.phoenix.end2end.PartialScannerResultsDisabledIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 37.089 s - in org.apache.phoenix.end2end.PartialScannerResultsDisabledIT
[INFO] Running org.apache.phoenix.end2end.PercentileIT
[INFO] Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 75.885 s - in org.apache.phoenix.end2end.PercentileIT
[INFO] Running org.apache.phoenix.end2end.PhoenixRuntimeIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 41.491 s - in org.apache.phoenix.end2end.PhoenixRuntimeIT
[INFO] Running org.apache.phoenix.end2end.PointInTimeQueryIT
[INFO] Tests run: 48, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 429.983 s - in org.apache.phoenix.end2end.OnDuplicateKeyIT
[INFO] Running org.apache.phoenix.end2end.PowerFunctionEnd2EndIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.924 s - in org.apache.phoenix.end2end.PowerFunctionEnd2EndIT

Build failed in Jenkins: Phoenix-5.x-HBase-2.0 #11

2018-07-19 Thread Apache Jenkins Server
See 


Changes:

[tdsilva] PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

--
[...truncated 2.12 MB...]
[INFO] Excluding org.apache.hbase:hbase-http:jar:2.0.0 from the shaded jar.
[INFO] Excluding org.apache.hbase:hbase-procedure:jar:2.0.0 from the shaded jar.
[INFO] Excluding org.glassfish.web:javax.servlet.jsp:jar:2.3.2 from the shaded jar.
[INFO] Excluding org.glassfish:javax.el:jar:3.0.1-b11-SNAPSHOT from the shaded jar.
[INFO] Excluding javax.servlet.jsp:javax.servlet.jsp-api:jar:2.3.1 from the shaded jar.
[INFO] Excluding org.codehaus.jettison:jettison:jar:1.3.8 from the shaded jar.
[INFO] Excluding org.jamon:jamon-runtime:jar:2.4.1 from the shaded jar.
[INFO] Excluding org.apache.hadoop:hadoop-distcp:jar:2.7.4 from the shaded jar.
[INFO] Including org.eclipse.jetty:jetty-servlet:jar:9.3.19.v20170502 in the shaded jar.
[INFO] Including org.eclipse.jetty:jetty-webapp:jar:9.3.19.v20170502 in the shaded jar.
[INFO] Including org.eclipse.jetty:jetty-xml:jar:9.3.19.v20170502 in the shaded jar.
[INFO] Excluding org.apache.hbase:hbase-hadoop-compat:jar:2.0.0 from the shaded jar.
[INFO] Excluding org.apache.hbase:hbase-hadoop2-compat:jar:2.0.0 from the shaded jar.
[INFO] Excluding org.apache.hadoop:hadoop-annotations:jar:3.0.0 from the shaded jar.
[INFO] Excluding org.apache.hadoop:hadoop-mapreduce-client-core:jar:3.0.0 from the shaded jar.
[INFO] Excluding org.apache.hadoop:hadoop-yarn-client:jar:3.0.0 from the shaded jar.
[INFO] Excluding org.apache.hadoop:hadoop-yarn-common:jar:3.0.0 from the shaded jar.
[INFO] Excluding javax.xml.bind:jaxb-api:jar:2.2.11 from the shaded jar.
[INFO] Excluding com.sun.jersey:jersey-client:jar:1.19 from the shaded jar.
[INFO] Excluding com.sun.jersey.contribs:jersey-guice:jar:1.19 from the shaded jar.
[INFO] Excluding com.fasterxml.jackson.module:jackson-module-jaxb-annotations:jar:2.7.8 from the shaded jar.
[INFO] Excluding com.fasterxml.jackson.jaxrs:jackson-jaxrs-json-provider:jar:2.7.8 from the shaded jar.
[INFO] Excluding com.fasterxml.jackson.jaxrs:jackson-jaxrs-base:jar:2.7.8 from the shaded jar.
[INFO] Excluding org.apache.hadoop:hadoop-hdfs-client:jar:3.0.0 from the shaded jar.
[INFO] Excluding com.squareup.okhttp:okhttp:jar:2.4.0 from the shaded jar.
[INFO] Excluding com.squareup.okio:okio:jar:1.4.0 from the shaded jar.
[INFO] Excluding com.google.inject.extensions:guice-servlet:jar:4.0 from the shaded jar.
[INFO] Excluding io.netty:netty:jar:3.10.5.Final from the shaded jar.
[INFO] Excluding org.jruby.joni:joni:jar:2.1.2 from the shaded jar.
[INFO] Excluding com.clearspring.analytics:stream:jar:2.9.5 from the shaded jar.
[INFO] Excluding com.salesforce.i18n:i18n-util:jar:1.0.4 from the shaded jar.
[INFO] Excluding com.ibm.icu:icu4j:jar:60.2 from the shaded jar.
[INFO] Excluding com.ibm.icu:icu4j-localespi:jar:60.2 from the shaded jar.
[INFO] Excluding com.ibm.icu:icu4j-charset:jar:60.2 from the shaded jar.
[INFO] Excluding com.lmax:disruptor:jar:3.3.6 from the shaded jar.
[INFO] Excluding commons-logging:commons-logging:jar:1.2 from the shaded jar.
[INFO] Including javax.servlet:javax.servlet-api:jar:3.1.0 in the shaded jar.
[INFO] Excluding org.apache.hbase:hbase-protocol-shaded:jar:2.0.0 from the shaded jar.
[INFO] Excluding org.apache.hbase.thirdparty:hbase-shaded-miscellaneous:jar:2.1.0 from the shaded jar.
[INFO] Excluding io.dropwizard.metrics:metrics-core:jar:3.2.1 from the shaded jar.
[INFO] Excluding org.apache.commons:commons-math3:jar:3.6.1 from the shaded jar.
[INFO] Excluding org.apache.commons:commons-lang3:jar:3.6 from the shaded jar.
[INFO] Excluding org.apache.htrace:htrace-core4:jar:4.2.0-incubating from the shaded jar.
[INFO] Excluding org.glassfish.jersey.core:jersey-client:jar:2.25.1 from the shaded jar.
[INFO] Excluding org.glassfish.jersey.core:jersey-common:jar:2.25.1 from the shaded jar.
[INFO] Excluding org.glassfish.jersey.bundles.repackaged:jersey-guava:jar:2.25.1 from the shaded jar.
[INFO] Excluding org.glassfish.hk2:osgi-resource-locator:jar:1.0.1 from the shaded jar.
[INFO] Excluding org.glassfish.hk2:hk2-api:jar:2.5.0-b32 from the shaded jar.
[INFO] Excluding org.glassfish.hk2:hk2-utils:jar:2.5.0-b32 from the shaded jar.
[INFO] Excluding org.glassfish.hk2.external:aopalliance-repackaged:jar:2.5.0-b32 from the shaded jar.
[INFO] Excluding org.glassfish.hk2.external:javax.inject:jar:2.5.0-b32 from the shaded jar.
[INFO] Excluding org.glassfish.hk2:hk2-locator:jar:2.5.0-b32 from the shaded jar.
[INFO] Excluding org.javassist:javassist:jar:3.20.0-GA from the shaded jar.
[INFO] Excluding org.apache.yetus:audience-annotations:jar:0.5.0 from the shaded jar.
[INFO] Excluding org.apache.hbase:hbase-zookeeper:jar:2.0.0 from the shaded jar.
[INFO] Excluding org.slf4j:slf4j-log4j12:jar:1.7.25 from the shaded jar.
[INFO] Excluding 

[phoenix] Git Push Summary

2018-07-19 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/system-catalog [deleted] f7d87ce2f


[07/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/c53d9ada/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
index 34292ba..fdfd75b 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
@@ -28,172 +28,119 @@ import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
+import java.io.IOException;
+import java.math.BigDecimal;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
 import java.util.List;
+import java.util.Map;
 import java.util.Properties;
 
+import org.apache.curator.shaded.com.google.common.collect.Lists;
+import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.HColumnDescriptor;
 import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.Mutation;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.coprocessor.ObserverContext;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
+import org.apache.hadoop.hbase.coprocessor.SimpleRegionObserver;
+import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
 import org.apache.phoenix.compile.QueryPlan;
+import org.apache.phoenix.exception.PhoenixIOException;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
+import org.apache.phoenix.jdbc.PhoenixStatement;
 import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.ColumnAlreadyExistsException;
+import org.apache.phoenix.schema.PColumn;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.ReadOnlyTableException;
 import org.apache.phoenix.schema.TableNotFoundException;
+import org.apache.phoenix.util.MetaDataUtil;
 import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
 import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.TestUtil;
+import org.junit.BeforeClass;
 import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameters;
 
+import com.google.common.base.Predicate;
+import com.google.common.collect.Collections2;
+import com.google.common.collect.Maps;
 
+@RunWith(Parameterized.class)
+public class ViewIT extends SplitSystemCatalogIT {
 
-public class ViewIT extends BaseViewIT {
-   
-public ViewIT(boolean transactional) {
-   super(transactional);
-   }
-
-@Test
-public void testReadOnlyOnReadOnlyView() throws Exception {
-Connection earlierCon = DriverManager.getConnection(getUrl());
-Connection conn = DriverManager.getConnection(getUrl());
-String ddl = "CREATE TABLE " + fullTableName + " (k INTEGER NOT NULL 
PRIMARY KEY, v1 DATE) "+ tableDDLOptions;
-conn.createStatement().execute(ddl);
-String fullParentViewName = "V_" + generateUniqueName();
-ddl = "CREATE VIEW " + fullParentViewName + " (v2 VARCHAR) AS SELECT * 
FROM " + fullTableName + " WHERE k > 5";
-conn.createStatement().execute(ddl);
-try {
-conn.createStatement().execute("UPSERT INTO " + fullParentViewName 
+ " VALUES(1)");
-fail();
-} catch (ReadOnlyTableException e) {
-
-}
-for (int i = 0; i < 10; i++) {
-conn.createStatement().execute("UPSERT INTO " + fullTableName + " 
VALUES(" + i + ")");
-}
-conn.commit();
-
-analyzeTable(conn, fullParentViewName, transactional);
-
-List<KeyRange> splits = getAllSplits(conn, fullParentViewName);
-assertEquals(4, splits.size());
-
-int count = 0;
-ResultSet rs = conn.createStatement().executeQuery("SELECT k FROM " + fullTableName);
-while (rs.next()) {
-assertEquals(count++, rs.getInt(1));
-}
-assertEquals(10, count);
-
-count = 0;
-rs = conn.createStatement().executeQuery("SELECT k FROM " + fullParentViewName);
-while (rs.next()) {
-

[09/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/c53d9ada/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
index ab3a4ab..e39d492 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
@@ -21,6 +21,8 @@ import static org.apache.phoenix.exception.SQLExceptionCode.CANNOT_MUTATE_TABLE;
 import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
@@ -33,37 +35,46 @@ import java.sql.ResultSetMetaData;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.Collection;
-import java.util.Properties;
+import java.util.List;
 
 import org.apache.commons.lang.ArrayUtils;
-import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.coprocessor.TephraTransactionalProcessor;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
 import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.ColumnNotFoundException;
+import org.apache.phoenix.schema.PColumn;
 import org.apache.phoenix.schema.PName;
 import org.apache.phoenix.schema.PNameFactory;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.schema.PTableType;
-import org.apache.phoenix.util.PropertiesUtil;
-import org.apache.phoenix.util.StringUtil;
-import org.apache.phoenix.util.TestUtil;
+import org.apache.phoenix.schema.TableNotFoundException;
+import org.apache.phoenix.util.IndexUtil;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
 import org.junit.Test;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
 import org.junit.runners.Parameterized.Parameters;
 
+import com.google.common.base.Function;
+import com.google.common.collect.Lists;
+
 @RunWith(Parameterized.class)
-public class AlterTableWithViewsIT extends ParallelStatsDisabledIT {
-
+public class AlterTableWithViewsIT extends SplitSystemCatalogIT {
+
 private final boolean isMultiTenant;
 private final boolean columnEncoded;
-
-private final String TENANT_SPECIFIC_URL1 = getUrl() + ';' + TENANT_ID_ATTRIB + "=tenant1";
-private final String TENANT_SPECIFIC_URL2 = getUrl() + ';' + TENANT_ID_ATTRIB + "=tenant2";
+private final String TENANT_SPECIFIC_URL1 = getUrl() + ';' + TENANT_ID_ATTRIB + "=" + TENANT1;
+private final String TENANT_SPECIFIC_URL2 = getUrl() + ';' + TENANT_ID_ATTRIB + "=" + TENANT2;
 
 public AlterTableWithViewsIT(boolean isMultiTenant, boolean columnEncoded) {
 this.isMultiTenant = isMultiTenant;
@@ -77,6 +88,14 @@ public class AlterTableWithViewsIT extends ParallelStatsDisabledIT {
 { true, false }, { true, true } });
 }
 
+// transform PColumn to String
+private Function<PColumn, String> function = new Function<PColumn, String>(){
+@Override
+public String apply(PColumn input) {
+return input.getName().getString();
+}
+};
+
 private String generateDDL(String format) {
 return generateDDL("", format);
 }
@@ -101,8 +120,9 @@ public class AlterTableWithViewsIT extends ParallelStatsDisabledIT {
 public void testAddNewColumnsToBaseTableWithViews() throws Exception {
 try (Connection conn = DriverManager.getConnection(getUrl());
 Connection viewConn = isMultiTenant ? DriverManager.getConnection(TENANT_SPECIFIC_URL1) : conn ) {
-String tableName = generateUniqueName();
-String viewOfTable = tableName + "_VIEW";
+String tableName = SchemaUtil.getTableName(SCHEMA1, generateUniqueName());
+String viewOfTable = SchemaUtil.getTableName(SCHEMA2, generateUniqueName());
+
 String ddlFormat = "CREATE TABLE IF NOT EXISTS " + tableName + " ("
 + " %s ID char(1) NOT NULL,"
 + " COL1 integer NOT NULL,"
@@ -113,12 +133,13 @@ public class AlterTableWithViewsIT extends ParallelStatsDisabledIT {

[10/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/c53d9ada
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/c53d9ada
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/c53d9ada

Branch: refs/heads/master
Commit: c53d9adad37750b983f0a215f42ffe53803e0aaa
Parents: bc4ca79
Author: Thomas D'Silva 
Authored: Sat Jul 14 11:34:47 2018 -0700
Committer: Thomas D'Silva 
Committed: Wed Jul 18 18:38:33 2018 -0700

--
 .../StatisticsCollectionRunTrackerIT.java   |2 +-
 .../AlterMultiTenantTableWithViewsIT.java   |  284 +-
 .../apache/phoenix/end2end/AlterTableIT.java|   45 +-
 .../phoenix/end2end/AlterTableWithViewsIT.java  |  545 ++--
 .../phoenix/end2end/AppendOnlySchemaIT.java |4 +-
 .../end2end/BaseTenantSpecificViewIndexIT.java  |   38 +-
 .../end2end/ExplainPlanWithStatsEnabledIT.java  |   69 +-
 .../MigrateSystemTablesToSystemNamespaceIT.java |   38 +-
 .../apache/phoenix/end2end/PhoenixDriverIT.java |   37 +-
 .../end2end/QueryDatabaseMetaDataIT.java|9 +-
 .../apache/phoenix/end2end/SaltedViewIT.java|   45 -
 .../phoenix/end2end/SplitSystemCatalogIT.java   |   80 +
 .../end2end/SplitSystemCatalogTests.java|   11 +
 .../StatsEnabledSplitSystemCatalogIT.java   |  244 ++
 .../SystemCatalogCreationOnConnectionIT.java|   34 +-
 .../apache/phoenix/end2end/SystemCatalogIT.java |   31 +-
 .../end2end/TenantSpecificTablesDDLIT.java  |   13 +-
 .../end2end/TenantSpecificViewIndexIT.java  |   68 +-
 .../org/apache/phoenix/end2end/UpgradeIT.java   |  319 +--
 .../java/org/apache/phoenix/end2end/ViewIT.java |  868 --
 .../phoenix/end2end/index/BaseIndexIT.java  |   43 +-
 .../index/ChildViewsUseParentViewIndexIT.java   |7 +-
 .../phoenix/end2end/index/DropColumnIT.java |  117 -
 .../phoenix/end2end/index/IndexMetadataIT.java  |4 +-
 .../phoenix/end2end/index/MutableIndexIT.java   |  803 +++---
 .../phoenix/end2end/index/ViewIndexIT.java  |   68 +-
 .../apache/phoenix/execute/PartialCommitIT.java |4 +-
 .../SystemCatalogWALEntryFilterIT.java  |   85 +-
 .../org/apache/phoenix/rpc/UpdateCacheIT.java   |9 +-
 .../ColumnNameTrackingExpressionCompiler.java   |   46 +
 .../phoenix/compile/CreateTableCompiler.java|2 +-
 .../apache/phoenix/compile/FromCompiler.java|   15 +-
 .../phoenix/compile/ListJarsQueryPlan.java  |2 +-
 .../apache/phoenix/compile/TraceQueryPlan.java  |2 +-
 .../apache/phoenix/compile/UnionCompiler.java   |2 +-
 .../apache/phoenix/compile/UpsertCompiler.java  |2 +-
 .../coprocessor/MetaDataEndpointImpl.java   | 2577 +-
 .../phoenix/coprocessor/MetaDataProtocol.java   |3 +-
 .../apache/phoenix/coprocessor/TableInfo.java   |   79 +
 .../coprocessor/TableViewFinderResult.java  |   48 +
 .../apache/phoenix/coprocessor/ViewFinder.java  |  144 +
 .../coprocessor/WhereConstantParser.java|  106 +
 .../coprocessor/generated/MetaDataProtos.java   |  626 -
 .../coprocessor/generated/PTableProtos.java |  323 ++-
 .../phoenix/expression/LikeExpression.java  |2 +-
 .../apache/phoenix/jdbc/PhoenixConnection.java  |8 +-
 .../phoenix/jdbc/PhoenixDatabaseMetaData.java   |  534 ++--
 .../apache/phoenix/jdbc/PhoenixStatement.java   |8 +-
 .../phoenix/parse/DropTableStatement.java   |8 +-
 .../apache/phoenix/parse/ParseNodeFactory.java  |2 +-
 .../phoenix/query/ConnectionQueryServices.java  |   17 +-
 .../query/ConnectionQueryServicesImpl.java  |   43 +-
 .../query/ConnectionlessQueryServicesImpl.java  |   13 +-
 .../query/DelegateConnectionQueryServices.java  |8 +-
 .../apache/phoenix/query/QueryConstants.java|   14 +-
 .../org/apache/phoenix/query/QueryServices.java |2 +
 .../phoenix/query/QueryServicesOptions.java |2 +
 .../SystemCatalogWALEntryFilter.java|   45 +-
 .../apache/phoenix/schema/DelegateColumn.java   |   15 +
 .../apache/phoenix/schema/MetaDataClient.java   |   57 +-
 .../phoenix/schema/MetaDataSplitPolicy.java |   26 +-
 .../java/org/apache/phoenix/schema/PColumn.java |   12 +
 .../org/apache/phoenix/schema/PColumnImpl.java  |  113 +-
 .../apache/phoenix/schema/PMetaDataImpl.java|3 +-
 .../java/org/apache/phoenix/schema/PTable.java  |   17 +-
 .../org/apache/phoenix/schema/PTableImpl.java   |  279 +-
 .../org/apache/phoenix/schema/PTableKey.java|4 +-
 .../schema/ParentTableNotFoundException.java|   30 +
 .../org/apache/phoenix/schema/SaltingUtil.java  |4 +-
 .../apache/phoenix/schema/TableProperty.java|   22 +-
 .../java/org/apache/phoenix/util/IndexUtil.java |   16 +-
 .../org/apache/phoenix/util/MetaDataUtil.java   |  171 +-
 .../org/apache/phoenix/util/PhoenixRuntime.java |1 -
 

[02/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/c53d9ada/phoenix-core/src/main/java/org/apache/phoenix/schema/PColumnImpl.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/PColumnImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/PColumnImpl.java
index 45aca98..a267629 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/PColumnImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/PColumnImpl.java
@@ -17,6 +17,7 @@
  */
 package org.apache.phoenix.schema;
 
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.util.ByteStringer;
 import org.apache.phoenix.coprocessor.generated.PTableProtos;
 import org.apache.phoenix.query.QueryConstants;
@@ -42,36 +43,63 @@ public class PColumnImpl implements PColumn {
 private boolean isRowTimestamp;
 private boolean isDynamic;
 private byte[] columnQualifierBytes;
-
+private boolean derived;
+private long timestamp;
+
 public PColumnImpl() {
 }
 
-public PColumnImpl(PName name,
-   PName familyName,
-   PDataType dataType,
-   Integer maxLength,
-   Integer scale,
-   boolean nullable,
-   int position,
-   SortOrder sortOrder, Integer arrSize, byte[] viewConstant, boolean isViewReferenced, String expressionStr, boolean isRowTimestamp, boolean isDynamic, byte[] columnQualifierBytes) {
-init(name, familyName, dataType, maxLength, scale, nullable, position, sortOrder, arrSize, viewConstant, isViewReferenced, expressionStr, isRowTimestamp, isDynamic, columnQualifierBytes);
+public PColumnImpl(PColumn column, int position) {
+this(column, column.isDerived(), position);
 }
 
-public PColumnImpl(PColumn column, int position) {
+public PColumnImpl(PColumn column, byte[] viewConstant, boolean isViewReferenced) {
+this(column.getName(), column.getFamilyName(), column.getDataType(), column.getMaxLength(),
+column.getScale(), column.isNullable(), column.getPosition(), column.getSortOrder(), column.getArraySize(), viewConstant, isViewReferenced, column.getExpressionStr(), column.isRowTimestamp(), column.isDynamic(), column.getColumnQualifierBytes(),
+column.getTimestamp(), column.isDerived());
+}
+
+public PColumnImpl(PColumn column, boolean derivedColumn, int position) {
+this(column, derivedColumn, position, column.getViewConstant());
+}
+
+public PColumnImpl(PColumn column, boolean derivedColumn, int position, byte[] viewConstant) {
 this(column.getName(), column.getFamilyName(), column.getDataType(), column.getMaxLength(),
-column.getScale(), column.isNullable(), position, column.getSortOrder(), column.getArraySize(), column.getViewConstant(), column.isViewReferenced(), column.getExpressionStr(), column.isRowTimestamp(), column.isDynamic(), column.getColumnQualifierBytes());
+column.getScale(), column.isNullable(), position, column.getSortOrder(), column.getArraySize(), viewConstant, column.isViewReferenced(), column.getExpressionStr(), column.isRowTimestamp(), column.isDynamic(), column.getColumnQualifierBytes(),
+column.getTimestamp(), derivedColumn);
+}
+
+public PColumnImpl(PName name, PName familyName, PDataType dataType, Integer maxLength, Integer scale, boolean nullable,
+int position, SortOrder sortOrder, Integer arrSize, byte[] viewConstant, boolean isViewReferenced, String expressionStr, boolean isRowTimestamp, boolean isDynamic,
+byte[] columnQualifierBytes, long timestamp) {
+this(name, familyName, dataType, maxLength, scale, nullable, position, sortOrder, arrSize, viewConstant, isViewReferenced, expressionStr, isRowTimestamp, isDynamic, columnQualifierBytes, timestamp, false);
+}
+
+public PColumnImpl(PName name, PName familyName, PDataType dataType, Integer maxLength, Integer scale, boolean nullable,
+int position, SortOrder sortOrder, Integer arrSize, byte[] viewConstant, boolean isViewReferenced, String expressionStr, boolean isRowTimestamp, boolean isDynamic,
+byte[] columnQualifierBytes, long timestamp, boolean derived) {
+init(name, familyName, dataType, maxLength, scale, nullable, position, sortOrder, arrSize, viewConstant, isViewReferenced, expressionStr, isRowTimestamp, isDynamic, columnQualifierBytes, timestamp, derived);
+}
+
+private PColumnImpl(PName familyName, PName columnName, Long timestamp) {
+this.familyName = familyName;
+this.name = columnName;
+this.derived = true;
+if (timestamp!=null) {
+this.timestamp = timestamp;
+}
 }
 
-private void init(PName name,
-PName familyName,
-PDataType 

[01/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/master bc4ca79ee -> c53d9adad


http://git-wip-us.apache.org/repos/asf/phoenix/blob/c53d9ada/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java b/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
index b127408..9d5583b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
@@ -82,12 +82,12 @@ import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.Get;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
-import org.apache.hadoop.hbase.client.HTableInterface;
 import org.apache.hadoop.hbase.client.Mutation;
 import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.regionserver.LocalIndexSplitter;
 import org.apache.hadoop.hbase.snapshot.SnapshotCreationException;
@@ -96,6 +96,9 @@ import org.apache.phoenix.coprocessor.MetaDataEndpointImpl;
 import org.apache.phoenix.coprocessor.MetaDataProtocol;
 import org.apache.phoenix.coprocessor.MetaDataProtocol.MetaDataMutationResult;
 import org.apache.phoenix.coprocessor.MetaDataProtocol.MutationCode;
+import org.apache.phoenix.coprocessor.TableInfo;
+import org.apache.phoenix.coprocessor.TableViewFinderResult;
+import org.apache.phoenix.coprocessor.ViewFinder;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
 import org.apache.phoenix.query.QueryConstants;
@@ -175,11 +178,6 @@ public class UpgradeUtil {
private static final String DELETE_LINK = "DELETE FROM " + SYSTEM_CATALOG_SCHEMA + "." + SYSTEM_CATALOG_TABLE
+ " WHERE (" + TABLE_SCHEM + "=? OR (" + TABLE_SCHEM + " IS NULL AND ? IS NULL)) AND " + TABLE_NAME + "=? AND " + COLUMN_FAMILY + "=? AND " + LINK_TYPE + " = " + LinkType.PHYSICAL_TABLE.getSerializedValue();
 
-private static final String GET_VIEWS_QUERY = "SELECT " + TENANT_ID + "," + TABLE_SCHEM + "," + TABLE_NAME
-+ " FROM " + SYSTEM_CATALOG_SCHEMA + "." + SYSTEM_CATALOG_TABLE + " WHERE " + COLUMN_FAMILY + " = ? AND "
-+ LINK_TYPE + " = " + LinkType.PHYSICAL_TABLE.getSerializedValue() + " AND ( " + TABLE_TYPE + "=" + "'"
-+ PTableType.VIEW.getSerializedValue() + "' OR " + TABLE_TYPE + " IS NULL) ORDER BY "+TENANT_ID;
-
 private UpgradeUtil() {
 }
 
@@ -225,8 +223,8 @@ public class UpgradeUtil {
 scan.setRaw(true);
 scan.setMaxVersions();
 ResultScanner scanner = null;
-HTableInterface source = null;
-HTableInterface target = null;
+Table source = null;
+Table target = null;
 try {
 source = conn.getQueryServices().getTable(sourceName);
 target = conn.getQueryServices().getTable(targetName);
@@ -646,7 +644,7 @@ public class UpgradeUtil {
 logger.info("Upgrading SYSTEM.SEQUENCE table");
 
byte[] seqTableKey = SchemaUtil.getTableKey(null, PhoenixDatabaseMetaData.SYSTEM_SEQUENCE_SCHEMA, PhoenixDatabaseMetaData.SYSTEM_SEQUENCE_TABLE);
-HTableInterface sysTable = conn.getQueryServices().getTable(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES);
+Table sysTable = conn.getQueryServices().getTable(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES);
 try {
 logger.info("Setting SALT_BUCKETS property of SYSTEM.SEQUENCE to " 
+ SaltingUtil.MAX_BUCKET_NUM);
 KeyValue saltKV = KeyValueUtil.newKeyValue(seqTableKey, 
@@ -699,7 +697,7 @@ public class UpgradeUtil {
 Scan scan = new Scan();
 scan.setRaw(true);
 scan.setMaxVersions();
-HTableInterface seqTable = conn.getQueryServices().getTable(PhoenixDatabaseMetaData.SYSTEM_SEQUENCE_NAME_BYTES);
+Table seqTable = conn.getQueryServices().getTable(PhoenixDatabaseMetaData.SYSTEM_SEQUENCE_NAME_BYTES);
 try {
 boolean committed = false;
 logger.info("Adding salt byte to all SYSTEM.SEQUENCE 
rows");
@@ -1149,6 +1147,78 @@ public class UpgradeUtil {
 }
 }
 
+/**
+ * Move child links from SYSTEM.CATALOG to SYSTEM.CHILD_LINK
+ * @param oldMetaConnection caller should take care of closing the passed connection appropriately
+ * @throws SQLException
+ */
+public static void moveChildLinks(PhoenixConnection oldMetaConnection) throws SQLException {
+PhoenixConnection metaConnection = null;
+ 

[03/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/c53d9ada/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
index 8dd4a88..dab1048 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
@@ -29,9 +29,10 @@ import java.util.Collections;
 import java.util.List;
 
 import org.apache.hadoop.hbase.Cell;
-import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.CellComparator;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.compile.ColumnProjector;
 import org.apache.phoenix.compile.ExpressionProjector;
@@ -40,7 +41,12 @@ import org.apache.phoenix.compile.StatementContext;
 import org.apache.phoenix.coprocessor.MetaDataProtocol;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.exception.SQLExceptionInfo;
+import org.apache.phoenix.expression.Expression;
+import org.apache.phoenix.expression.KeyValueColumnExpression;
+import org.apache.phoenix.expression.LikeExpression;
+import org.apache.phoenix.expression.LiteralExpression;
 import org.apache.phoenix.expression.RowKeyColumnExpression;
+import org.apache.phoenix.expression.StringBasedLikeExpression;
 import org.apache.phoenix.expression.function.ExternalSqlTypeIdFunction;
 import org.apache.phoenix.expression.function.IndexStateNameFunction;
 import org.apache.phoenix.expression.function.SQLIndexTypeFunction;
@@ -48,25 +54,33 @@ import org.apache.phoenix.expression.function.SQLTableTypeFunction;
 import org.apache.phoenix.expression.function.SQLViewTypeFunction;
 import org.apache.phoenix.expression.function.SqlTypeNameFunction;
 import org.apache.phoenix.expression.function.TransactionProviderNameFunction;
-import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
-import org.apache.phoenix.iterate.DelegateResultIterator;
 import org.apache.phoenix.iterate.MaterializedResultIterator;
 import org.apache.phoenix.iterate.ResultIterator;
+import org.apache.phoenix.parse.LikeParseNode.LikeType;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.schema.MetaDataClient;
+import org.apache.phoenix.schema.PColumn;
+import org.apache.phoenix.schema.PColumnImpl;
 import org.apache.phoenix.schema.PDatum;
 import org.apache.phoenix.schema.PName;
+import org.apache.phoenix.schema.PNameFactory;
+import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTable.LinkType;
 import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.RowKeyValueAccessor;
 import org.apache.phoenix.schema.SortOrder;
-import org.apache.phoenix.schema.tuple.ResultTuple;
+import org.apache.phoenix.schema.tuple.MultiKeyValueTuple;
 import org.apache.phoenix.schema.tuple.SingleKeyValueTuple;
 import org.apache.phoenix.schema.tuple.Tuple;
+import org.apache.phoenix.schema.types.PBoolean;
 import org.apache.phoenix.schema.types.PDataType;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.schema.types.PSmallint;
+import org.apache.phoenix.schema.types.PVarbinary;
 import org.apache.phoenix.schema.types.PVarchar;
 import org.apache.phoenix.util.ByteUtil;
 import org.apache.phoenix.util.KeyValueUtil;
+import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.StringUtil;
 
@@ -336,6 +350,11 @@ public class PhoenixDatabaseMetaData implements DatabaseMetaData {
 public static final byte[] COLUMN_QUALIFIER_COUNTER_BYTES = Bytes.toBytes(COLUMN_QUALIFIER_COUNTER);
 public static final String USE_STATS_FOR_PARALLELIZATION = "USE_STATS_FOR_PARALLELIZATION";
 public static final byte[] USE_STATS_FOR_PARALLELIZATION_BYTES = Bytes.toBytes(USE_STATS_FOR_PARALLELIZATION);
+
+public static final String SYSTEM_CHILD_LINK_TABLE = "CHILD_LINK";
+public static final String SYSTEM_CHILD_LINK_NAME = SchemaUtil.getTableName(SYSTEM_CATALOG_SCHEMA, SYSTEM_CHILD_LINK_TABLE);
+public static final byte[] SYSTEM_CHILD_LINK_NAME_BYTES = Bytes.toBytes(SYSTEM_CHILD_LINK_NAME);
+public static final TableName SYSTEM_LINK_HBASE_TABLE_NAME = TableName.valueOf(SYSTEM_CHILD_LINK_NAME);
 
 
 //SYSTEM:LOG
@@ -467,179 +486,352 @@ public class PhoenixDatabaseMetaData implements DatabaseMetaData {
 private static void appendConjunction(StringBuilder buf) {
 buf.append(buf.length() == 0 ? "" : " and ");
 }
-
+
+private static final PColumnImpl TENANT_ID_COLUMN = 

[06/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/c53d9ada/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
index cfaed72..4433e12 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
@@ -35,8 +35,6 @@ import java.util.Collection;
 import java.util.List;
 import java.util.Properties;
 
-import jline.internal.Log;
-
 import org.apache.hadoop.hbase.HRegionInfo;
 import org.apache.hadoop.hbase.MetaTableAccessor;
 import org.apache.hadoop.hbase.TableName;
@@ -75,25 +73,27 @@ import org.junit.runners.Parameterized.Parameters;
 
 import com.google.common.primitives.Doubles;
 
+import jline.internal.Log;
+
 @RunWith(Parameterized.class)
 public class MutableIndexIT extends ParallelStatsDisabledIT {
 
 protected final boolean localIndex;
 private final String tableDDLOptions;
-   
+
 public MutableIndexIT(Boolean localIndex, String txProvider, Boolean columnEncoded) {
-   this.localIndex = localIndex;
-   StringBuilder optionBuilder = new StringBuilder();
-   if (txProvider != null) {
-   optionBuilder.append("TRANSACTIONAL=true," + 
PhoenixDatabaseMetaData.TRANSACTION_PROVIDER + "='" + txProvider + "'");
-   }
-   if (!columnEncoded) {
+this.localIndex = localIndex;
+StringBuilder optionBuilder = new StringBuilder();
+if (txProvider != null) {
+optionBuilder.append("TRANSACTIONAL=true," + 
PhoenixDatabaseMetaData.TRANSACTION_PROVIDER + "='" + txProvider + "'");
+}
+if (!columnEncoded) {
 if (optionBuilder.length()!=0)
 optionBuilder.append(",");
 optionBuilder.append("COLUMN_ENCODED_BYTES=0");
 }
-   this.tableDDLOptions = optionBuilder.toString();
-   }
+this.tableDDLOptions = optionBuilder.toString();
+}
 
 private static Connection getConnection(Properties props) throws SQLException {
 props.setProperty(QueryServices.INDEX_MUTATE_BATCH_SIZE_THRESHOLD_ATTRIB, Integer.toString(1));
@@ -106,7 +106,7 @@ public class MutableIndexIT extends ParallelStatsDisabledIT {
 return getConnection(props);
 }
 
-   
-   @Parameters(name="MutableIndexIT_localIndex={0},transactional={1},columnEncoded={2}")  // name is used by failsafe as file name in reports
+@Parameters(name="MutableIndexIT_localIndex={0},transactional={1},columnEncoded={2}")  // name is used by failsafe as file name in reports
 public static Collection<Object[]> data() {
 return Arrays.asList(new Object[][] { 
 { false, null, false }, { false, null, true },
@@ -121,16 +121,16 @@ public class MutableIndexIT extends ParallelStatsDisabledIT {
 @Test
 public void testCoveredColumnUpdates() throws Exception {
 try (Connection conn = getConnection()) {
-   conn.setAutoCommit(false);
-   String tableName = "TBL_" + generateUniqueName();
-   String indexName = "IDX_" + generateUniqueName();
-   String fullTableName = SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, tableName);
-   String fullIndexName = SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, indexName);
+conn.setAutoCommit(false);
+String tableName = "TBL_" + generateUniqueName();
+String indexName = "IDX_" + generateUniqueName();
+String fullTableName = SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, tableName);
+String fullIndexName = SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, indexName);
 
-   TestUtil.createMultiCFTestTable(conn, fullTableName, tableDDLOptions);
+TestUtil.createMultiCFTestTable(conn, fullTableName, tableDDLOptions);
 populateMultiCFTestTable(fullTableName);
 conn.createStatement().execute("CREATE " + (localIndex ? " LOCAL " 
: "") + " INDEX " + indexName + " ON " + fullTableName 
-   + " (char_col1 ASC, int_col1 ASC) INCLUDE (long_col1, 
long_col2)");
++ " (char_col1 ASC, int_col1 ASC) INCLUDE (long_col1, 
long_col2)");
 
 String query = "SELECT char_col1, int_col1, long_col2 from " + 
fullTableName;
 ResultSet rs = conn.createStatement().executeQuery("EXPLAIN " + 
query);
@@ -203,7 +203,7 @@ public class MutableIndexIT extends ParallelStatsDisabledIT {
 query = "SELECT b.* from " + fullTableName + " where int_col1 
= 4";
 rs = 

[05/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/c53d9ada/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index ae2fa66..5e8a5dc 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -17,8 +17,6 @@
  */
 package org.apache.phoenix.coprocessor;
 
-import static com.google.common.base.Preconditions.checkArgument;
-import static com.google.common.base.Preconditions.checkState;
 import static org.apache.hadoop.hbase.KeyValueUtil.createFirstOnRow;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.APPEND_ONLY_SCHEMA_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.ARRAY_SIZE_BYTES;
@@ -55,7 +53,6 @@ import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.MULTI_TENANT_BYTES
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.NULLABLE_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.NUM_ARGS_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.ORDINAL_POSITION_BYTES;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.PARENT_TENANT_ID_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.PK_NAME_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.RETURN_TYPE_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SALT_BUCKETS_BYTES;
@@ -78,9 +75,8 @@ import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.VIEW_INDEX_ID_BYTE
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.VIEW_STATEMENT_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.VIEW_TYPE_BYTES;
 import static org.apache.phoenix.query.QueryConstants.DIVERGED_VIEW_BASE_COLUMN_COUNT;
-import static org.apache.phoenix.query.QueryConstants.SEPARATOR_BYTE_ARRAY;
 import static org.apache.phoenix.schema.PTableType.INDEX;
-import static org.apache.phoenix.util.ByteUtil.EMPTY_BYTE_ARRAY;
+import static org.apache.phoenix.schema.PTableType.TABLE;
 import static org.apache.phoenix.util.SchemaUtil.getVarCharLength;
 import static org.apache.phoenix.util.SchemaUtil.getVarChars;
 
@@ -91,14 +87,16 @@ import java.sql.SQLException;
 import java.sql.Statement;
 import java.util.ArrayList;
 import java.util.Arrays;
-import java.util.Collection;
 import java.util.Collections;
 import java.util.Comparator;
 import java.util.HashSet;
 import java.util.Iterator;
 import java.util.List;
+import java.util.ListIterator;
 import java.util.Map;
+import java.util.Map.Entry;
 import java.util.NavigableMap;
+import java.util.Properties;
 import java.util.Set;
 
 import org.apache.hadoop.conf.Configuration;
@@ -108,26 +106,21 @@ import org.apache.hadoop.hbase.Coprocessor;
 import org.apache.hadoop.hbase.CoprocessorEnvironment;
 import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
 import org.apache.hadoop.hbase.KeyValue;
 import org.apache.hadoop.hbase.KeyValue.Type;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.Get;
-import org.apache.hadoop.hbase.client.HTableInterface;
 import org.apache.hadoop.hbase.client.Mutation;
 import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
 import org.apache.hadoop.hbase.coprocessor.CoprocessorException;
 import org.apache.hadoop.hbase.coprocessor.CoprocessorService;
 import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
-import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
-import org.apache.hadoop.hbase.filter.Filter;
-import org.apache.hadoop.hbase.filter.FilterList;
 import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
-import org.apache.hadoop.hbase.filter.PageFilter;
-import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.ipc.RpcServer.Call;
 import org.apache.hadoop.hbase.ipc.RpcUtil;
@@ -140,6 +133,7 @@ import org.apache.hadoop.hbase.util.Pair;
 import org.apache.hadoop.hbase.util.VersionInfo;
 import org.apache.phoenix.cache.GlobalCache;
 import org.apache.phoenix.cache.GlobalCache.FunctionBytesPtr;
+import org.apache.phoenix.compile.ColumnNameTrackingExpressionCompiler;
 import org.apache.phoenix.compile.ColumnResolver;
 import org.apache.phoenix.compile.FromCompiler;
 import org.apache.phoenix.compile.QueryPlan;
@@ -183,6 +177,7 @@ import 

[08/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/c53d9ada/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogIT.java
new file mode 100644
index 000..51d3b86
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogIT.java
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.curator.shaded.com.google.common.collect.Lists;
+import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.BeforeClass;
+import org.junit.experimental.categories.Category;
+
+import com.google.common.collect.Maps;
+
+/**
+ * Base class for tests that run with split SYSTEM.CATALOG.
+ * 
+ */
+@Category(SplitSystemCatalogTests.class)
+public class SplitSystemCatalogIT extends BaseTest {
+
+protected static String SCHEMA1 = "SCHEMA1";
+protected static String SCHEMA2 = "SCHEMA2";
+protected static String SCHEMA3 = "SCHEMA3";
+protected static String SCHEMA4 = "SCHEMA4";
+
+protected static String TENANT1 = "tenant1";
+protected static String TENANT2 = "tenant2";
+
+@BeforeClass
+public static void doSetup() throws Exception {
+NUM_SLAVES_BASE = 6;
+Map<String, String> props = Collections.emptyMap();
+boolean splitSystemCatalog = (driver == null);
+setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
+// Split SYSTEM.CATALOG once after the mini-cluster is started
+if (splitSystemCatalog) {
+splitSystemCatalog();
+}
+}
+
+protected static void splitSystemCatalog() throws SQLException, Exception {
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+}
+String tableName = "TABLE";
+String fullTableName1 = SchemaUtil.getTableName(SCHEMA1, tableName);
+String fullTableName2 = SchemaUtil.getTableName(SCHEMA2, tableName);
+String fullTableName3 = SchemaUtil.getTableName(SCHEMA3, tableName);
+String fullTableName4 = SchemaUtil.getTableName(SCHEMA4, tableName);
+ArrayList<String> tableList = Lists.newArrayList(fullTableName1, fullTableName2, fullTableName3);
+Map<String, List<String>> tenantToTableMap = Maps.newHashMap();
+tenantToTableMap.put(null, tableList);
+tenantToTableMap.put(TENANT1, Lists.newArrayList(fullTableName2, fullTableName3));
+tenantToTableMap.put(TENANT2, Lists.newArrayList(fullTableName4));
+splitSystemCatalog(tenantToTableMap);
+}
+
+}

http://git-wip-us.apache.org/repos/asf/phoenix/blob/c53d9ada/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogTests.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogTests.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogTests.java
new file mode 100644
index 000..27fc5c6
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogTests.java
@@ -0,0 +1,11 @@
+package org.apache.phoenix.end2end;
+
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+
+@Retention(RetentionPolicy.RUNTIME)
+@Target(ElementType.TYPE)
+public @interface SplitSystemCatalogTests {
+}
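
The marker above is consumed as a JUnit category. A minimal sketch, assuming standard JUnit 4 categories (the suite class below is hypothetical, not part of the patch), of selecting only these tests; in a Maven build the same selection is typically done through the failsafe plugin's category support rather than an explicit suite.

package org.apache.phoenix.end2end;

import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Categories.IncludeCategory;
import org.junit.runner.RunWith;
import org.junit.runners.Suite.SuiteClasses;

// Hypothetical suite: runs only tests tagged @Category(SplitSystemCatalogTests.class)
@RunWith(Categories.class)
@IncludeCategory(SplitSystemCatalogTests.class)
@SuiteClasses({ SplitSystemCatalogIT.class })
public class SplitSystemCatalogSuite {
}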

http://git-wip-us.apache.org/repos/asf/phoenix/blob/c53d9ada/phoenix-core/src/it/java/org/apache/phoenix/end2end/StatsEnabledSplitSystemCatalogIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/StatsEnabledSplitSystemCatalogIT.java
 

[04/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/d56fd3c9/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
index 0bd1f8c..874a382 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
@@ -94,8 +94,10 @@ public abstract class MetaDataProtocol extends MetaDataService {
     // TODO Was there a system table upgrade?
     // TODO Need to account for the inevitable 4.14 release too
     public static final long MIN_SYSTEM_TABLE_TIMESTAMP_5_0_0 = MIN_SYSTEM_TABLE_TIMESTAMP_4_14_0;
+    public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_15_0 = MIN_TABLE_TIMESTAMP + 29;
+    public static final long MIN_SYSTEM_TABLE_TIMESTAMP_5_1_0 = MIN_SYSTEM_TABLE_TIMESTAMP_4_15_0;
     // MIN_SYSTEM_TABLE_TIMESTAMP needs to be set to the max of all the MIN_SYSTEM_TABLE_TIMESTAMP_* constants
-    public static final long MIN_SYSTEM_TABLE_TIMESTAMP = MIN_SYSTEM_TABLE_TIMESTAMP_5_0_0;
+    public static final long MIN_SYSTEM_TABLE_TIMESTAMP = MIN_SYSTEM_TABLE_TIMESTAMP_5_1_0;
 
     // Version below which we should disallow usage of mutable secondary indexing.
     public static final int MUTABLE_SI_VERSION_THRESHOLD = VersionUtil.encodeVersion("0", "94", "10");
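
The comment in this hunk states an invariant rather than enforcing it: MIN_SYSTEM_TABLE_TIMESTAMP must equal the max of the per-release constants. A hypothetical sanity test (not in the patch) that would express the invariant against the 5.x constants added above:

package org.apache.phoenix.coprocessor;

import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical test; the referenced constants are public in MetaDataProtocol after this commit
public class MetaDataProtocolTimestampTest {

    @Test
    public void testMinSystemTableTimestampIsTheMax() {
        long max = Math.max(MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_5_0_0,
                Math.max(MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_15_0,
                        MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_5_1_0));
        assertEquals(max, MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP);
    }
}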

http://git-wip-us.apache.org/repos/asf/phoenix/blob/d56fd3c9/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TableInfo.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TableInfo.java 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TableInfo.java
new file mode 100644
index 000..b1c5f65
--- /dev/null
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TableInfo.java
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.coprocessor;
+
+import java.util.Arrays;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.util.SchemaUtil;
+
+public class TableInfo {
+
+    private final byte[] tenantId;
+    private final byte[] schema;
+    private final byte[] name;
+
+    public TableInfo(byte[] tenantId, byte[] schema, byte[] name) {
+        this.tenantId = tenantId;
+        this.schema = schema;
+        this.name = name;
+    }
+
+    public byte[] getRowKeyPrefix() {
+        return SchemaUtil.getTableKey(tenantId, schema, name);
+    }
+
+    @Override
+    public String toString() {
+        return Bytes.toStringBinary(getRowKeyPrefix());
+    }
+
+    public byte[] getTenantId() {
+        return tenantId;
+    }
+
+    public byte[] getSchemaName() {
+        return schema;
+    }
+
+    public byte[] getTableName() {
+        return name;
+    }
+
+    @Override
+    public int hashCode() {
+        final int prime = 31;
+        int result = 1;
+        result = prime * result + Arrays.hashCode(name);
+        result = prime * result + Arrays.hashCode(schema);
+        result = prime * result + Arrays.hashCode(tenantId);
+        return result;
+    }
+
+    @Override
+    public boolean equals(Object obj) {
+        if (this == obj) return true;
+        if (obj == null) return false;
+        if (getClass() != obj.getClass()) return false;
+        TableInfo other = (TableInfo) obj;
+        if (!Arrays.equals(name, other.name)) return false;
+        if (!Arrays.equals(schema, other.schema)) return false;
+        if (!Arrays.equals(tenantId, other.tenantId)) return false;
+        return true;
+    }
+}
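
A brief usage sketch (not from the patch) of the value class above: equals() and hashCode() compare the byte[] contents via Arrays.equals/Arrays.hashCode, so TableInfo works as a hash-set or map key, and getRowKeyPrefix() delegates to SchemaUtil.getTableKey, which joins tenant id, schema, and table name with null-byte separators to form the SYSTEM.CATALOG row-key prefix.

package org.apache.phoenix.coprocessor;

import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical demo class; TableInfo itself is from the commit above
public class TableInfoDemo {

    public static void main(String[] args) {
        TableInfo view = new TableInfo(Bytes.toBytes("tenant1"),
                Bytes.toBytes("SCHEMA2"), Bytes.toBytes("TABLE"));
        TableInfo sameView = new TableInfo(Bytes.toBytes("tenant1"),
                Bytes.toBytes("SCHEMA2"), Bytes.toBytes("TABLE"));

        Set<TableInfo> seen = new HashSet<>();
        seen.add(view);
        System.out.println(seen.contains(sameView)); // true: content-based equality
        System.out.println(view); // row-key prefix, e.g. tenant1\x00SCHEMA2\x00TABLE
    }
}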

http://git-wip-us.apache.org/repos/asf/phoenix/blob/d56fd3c9/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TableViewFinderResult.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TableViewFinderResult.java
 

[10/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and 
Rahul Gidwani)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/d56fd3c9
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/d56fd3c9
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/d56fd3c9

Branch: refs/heads/5.x-HBase-2.0
Commit: d56fd3c991b0295aaa91ab916a0fdaed9c044402
Parents: 0af8b1e
Author: Thomas D'Silva 
Authored: Sat Jul 14 11:34:47 2018 -0700
Committer: Thomas D'Silva 
Committed: Wed Jul 18 11:23:26 2018 -0700

--
 .../coprocessor/MetaDataEndpointImplIT.java |  294 ++
 .../StatisticsCollectionRunTrackerIT.java   |2 +-
 .../AlterMultiTenantTableWithViewsIT.java   |  284 +-
 .../apache/phoenix/end2end/AlterTableIT.java|   45 +-
 .../phoenix/end2end/AlterTableWithViewsIT.java  |  541 ++--
 .../phoenix/end2end/AppendOnlySchemaIT.java |4 +-
 .../end2end/BaseTenantSpecificViewIndexIT.java  |   38 +-
 .../end2end/ExplainPlanWithStatsEnabledIT.java  |   69 +-
 .../MigrateSystemTablesToSystemNamespaceIT.java |   38 +-
 .../apache/phoenix/end2end/PhoenixDriverIT.java |   37 +-
 .../end2end/QueryDatabaseMetaDataIT.java|9 +-
 .../apache/phoenix/end2end/SaltedViewIT.java|   45 -
 .../phoenix/end2end/SplitSystemCatalogIT.java   |   80 +
 .../end2end/SplitSystemCatalogTests.java|   11 +
 .../StatsEnabledSplitSystemCatalogIT.java   |  244 ++
 .../SystemCatalogCreationOnConnectionIT.java|   34 +-
 .../apache/phoenix/end2end/SystemCatalogIT.java |   50 +-
 .../end2end/TenantSpecificTablesDDLIT.java  |   13 +-
 .../end2end/TenantSpecificViewIndexIT.java  |   68 +-
 .../org/apache/phoenix/end2end/UpgradeIT.java   |  322 +--
 .../java/org/apache/phoenix/end2end/ViewIT.java |  844 --
 .../phoenix/end2end/index/BaseIndexIT.java  |   43 +-
 .../index/ChildViewsUseParentViewIndexIT.java   |7 +-
 .../phoenix/end2end/index/DropColumnIT.java |  117 -
 .../phoenix/end2end/index/IndexMetadataIT.java  |4 +-
 .../phoenix/end2end/index/MutableIndexIT.java   |  876 +++---
 .../phoenix/end2end/index/ViewIndexIT.java  |   68 +-
 .../apache/phoenix/execute/PartialCommitIT.java |4 +-
 .../SystemCatalogWALEntryFilterIT.java  |   78 +-
 .../org/apache/phoenix/rpc/UpdateCacheIT.java   |9 +-
 .../ColumnNameTrackingExpressionCompiler.java   |   46 +
 .../phoenix/compile/CreateTableCompiler.java|2 +-
 .../apache/phoenix/compile/FromCompiler.java|   15 +-
 .../phoenix/compile/ListJarsQueryPlan.java  |2 +-
 .../apache/phoenix/compile/TraceQueryPlan.java  |2 +-
 .../apache/phoenix/compile/UnionCompiler.java   |2 +-
 .../apache/phoenix/compile/UpsertCompiler.java  |2 +-
 .../coprocessor/MetaDataEndpointImpl.java   | 2672 +-
 .../phoenix/coprocessor/MetaDataProtocol.java   |4 +-
 .../apache/phoenix/coprocessor/TableInfo.java   |   79 +
 .../coprocessor/TableViewFinderResult.java  |   48 +
 .../apache/phoenix/coprocessor/ViewFinder.java  |  144 +
 .../coprocessor/WhereConstantParser.java|  106 +
 .../coprocessor/generated/MetaDataProtos.java   |  626 +++-
 .../coprocessor/generated/PTableProtos.java |  323 ++-
 .../phoenix/expression/LikeExpression.java  |2 +-
 .../apache/phoenix/jdbc/PhoenixConnection.java  |8 +-
 .../phoenix/jdbc/PhoenixDatabaseMetaData.java   |  531 ++--
 .../apache/phoenix/jdbc/PhoenixStatement.java   |8 +-
 .../phoenix/parse/DropTableStatement.java   |8 +-
 .../apache/phoenix/parse/ParseNodeFactory.java  |2 +-
 .../phoenix/query/ConnectionQueryServices.java  |   17 +-
 .../query/ConnectionQueryServicesImpl.java  |   43 +-
 .../query/ConnectionlessQueryServicesImpl.java  |   13 +-
 .../query/DelegateConnectionQueryServices.java  |8 +-
 .../apache/phoenix/query/QueryConstants.java|   15 +-
 .../org/apache/phoenix/query/QueryServices.java |2 +
 .../phoenix/query/QueryServicesOptions.java |2 +
 .../SystemCatalogWALEntryFilter.java|   47 +-
 .../apache/phoenix/schema/DelegateColumn.java   |   15 +
 .../apache/phoenix/schema/MetaDataClient.java   |   57 +-
 .../phoenix/schema/MetaDataSplitPolicy.java |   26 +-
 .../java/org/apache/phoenix/schema/PColumn.java |   12 +
 .../org/apache/phoenix/schema/PColumnImpl.java  |  113 +-
 .../apache/phoenix/schema/PMetaDataImpl.java|3 +-
 .../java/org/apache/phoenix/schema/PTable.java  |   17 +-
 .../org/apache/phoenix/schema/PTableImpl.java   |  279 +-
 .../org/apache/phoenix/schema/PTableKey.java|4 +-
 .../schema/ParentTableNotFoundException.java|   30 +
 .../org/apache/phoenix/schema/SaltingUtil.java  |4 +-
 .../SplitOnLeadingVarCharColumnsPolicy.java |3 +
 .../apache/phoenix/schema/TableProperty.java|   22 +-
 .../java/org/apache/phoenix/util/IndexUtil.java |   16 +-
 

[08/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/d56fd3c9/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogIT.java
new file mode 100644
index 000..51d3b86
--- /dev/null
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogIT.java
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.curator.shaded.com.google.common.collect.Lists;
+import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.BeforeClass;
+import org.junit.experimental.categories.Category;
+
+import com.google.common.collect.Maps;
+
+/**
+ * Base class for tests that run with split SYSTEM.CATALOG.
+ */
+@Category(SplitSystemCatalogTests.class)
+public class SplitSystemCatalogIT extends BaseTest {
+
+    protected static String SCHEMA1 = "SCHEMA1";
+    protected static String SCHEMA2 = "SCHEMA2";
+    protected static String SCHEMA3 = "SCHEMA3";
+    protected static String SCHEMA4 = "SCHEMA4";
+
+    protected static String TENANT1 = "tenant1";
+    protected static String TENANT2 = "tenant2";
+
+    @BeforeClass
+    public static void doSetup() throws Exception {
+        NUM_SLAVES_BASE = 6;
+        Map<String, String> props = Collections.emptyMap();
+        boolean splitSystemCatalog = (driver == null);
+        setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
+        // Split SYSTEM.CATALOG once after the mini-cluster is started
+        if (splitSystemCatalog) {
+            splitSystemCatalog();
+        }
+    }
+
+    protected static void splitSystemCatalog() throws SQLException, Exception {
+        // Open a connection first so that the SYSTEM tables exist before splitting
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+        }
+        String tableName = "TABLE";
+        String fullTableName1 = SchemaUtil.getTableName(SCHEMA1, tableName);
+        String fullTableName2 = SchemaUtil.getTableName(SCHEMA2, tableName);
+        String fullTableName3 = SchemaUtil.getTableName(SCHEMA3, tableName);
+        String fullTableName4 = SchemaUtil.getTableName(SCHEMA4, tableName);
+        ArrayList<String> tableList = Lists.newArrayList(fullTableName1, fullTableName2, fullTableName3);
+        Map<String, List<String>> tenantToTableMap = Maps.newHashMap();
+        tenantToTableMap.put(null, tableList);
+        tenantToTableMap.put(TENANT1, Lists.newArrayList(fullTableName2, fullTableName3));
+        tenantToTableMap.put(TENANT2, Lists.newArrayList(fullTableName4));
+        splitSystemCatalog(tenantToTableMap);
+    }
+
+}

http://git-wip-us.apache.org/repos/asf/phoenix/blob/d56fd3c9/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogTests.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogTests.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogTests.java
new file mode 100644
index 000..27fc5c6
--- /dev/null
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogTests.java
@@ -0,0 +1,11 @@
+package org.apache.phoenix.end2end;
+
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+
+@Retention(RetentionPolicy.RUNTIME)
+@Target(ElementType.TYPE)
+public @interface SplitSystemCatalogTests {
+}

http://git-wip-us.apache.org/repos/asf/phoenix/blob/d56fd3c9/phoenix-core/src/it/java/org/apache/phoenix/end2end/StatsEnabledSplitSystemCatalogIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/StatsEnabledSplitSystemCatalogIT.java
 

[06/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/d56fd3c9/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
index 12e0dbf..45536e2 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
@@ -34,8 +34,6 @@ import java.util.Collection;
 import java.util.List;
 import java.util.Properties;
 
-import jline.internal.Log;
-
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
 import org.apache.hadoop.hbase.client.Put;
@@ -72,25 +70,27 @@ import org.junit.runners.Parameterized.Parameters;
 
 import com.google.common.primitives.Doubles;
 
+import jline.internal.Log;
+
 @RunWith(Parameterized.class)
 public class MutableIndexIT extends ParallelStatsDisabledIT {
 
     protected final boolean localIndex;
     private final String tableDDLOptions;
-   
+
     public MutableIndexIT(Boolean localIndex, String txProvider, Boolean columnEncoded) {
-   this.localIndex = localIndex;
-   StringBuilder optionBuilder = new StringBuilder();
-   if (txProvider != null) {
-   optionBuilder.append("TRANSACTIONAL=true," + PhoenixDatabaseMetaData.TRANSACTION_PROVIDER + "='" + txProvider + "'");
-   }
-   if (!columnEncoded) {
+        this.localIndex = localIndex;
+        StringBuilder optionBuilder = new StringBuilder();
+        if (txProvider != null) {
+            optionBuilder.append("TRANSACTIONAL=true," + PhoenixDatabaseMetaData.TRANSACTION_PROVIDER + "='" + txProvider + "'");
+        }
+        if (!columnEncoded) {
             if (optionBuilder.length()!=0)
                 optionBuilder.append(",");
             optionBuilder.append("COLUMN_ENCODED_BYTES=0");
         }
-   this.tableDDLOptions = optionBuilder.toString();
-   }
+        this.tableDDLOptions = optionBuilder.toString();
+    }
 
     private static Connection getConnection(Properties props) throws SQLException {
         props.setProperty(QueryServices.INDEX_MUTATE_BATCH_SIZE_THRESHOLD_ATTRIB, Integer.toString(1));
@@ -103,7 +103,7 @@ public class MutableIndexIT extends ParallelStatsDisabledIT {
         return getConnection(props);
     }
 
-   @Parameters(name="MutableIndexIT_localIndex={0},transactional={1},columnEncoded={2}") // name is used by failsafe as file name in reports
+    @Parameters(name="MutableIndexIT_localIndex={0},transactional={1},columnEncoded={2}") // name is used by failsafe as file name in reports
     public static Collection<Object[]> data() {
         return Arrays.asList(new Object[][] {
                 { false, null, false }, { false, null, true },
@@ -118,16 +118,16 @@ public class MutableIndexIT extends ParallelStatsDisabledIT {
     @Test
     public void testCoveredColumnUpdates() throws Exception {
         try (Connection conn = getConnection()) {
-   conn.setAutoCommit(false);
-   String tableName = "TBL_" + generateUniqueName();
-   String indexName = "IDX_" + generateUniqueName();
-   String fullTableName = SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, tableName);
-   String fullIndexName = SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, indexName);
+            conn.setAutoCommit(false);
+            String tableName = "TBL_" + generateUniqueName();
+            String indexName = "IDX_" + generateUniqueName();
+            String fullTableName = SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, tableName);
+            String fullIndexName = SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, indexName);
 
-   TestUtil.createMultiCFTestTable(conn, fullTableName, tableDDLOptions);
+            TestUtil.createMultiCFTestTable(conn, fullTableName, tableDDLOptions);
             populateMultiCFTestTable(fullTableName);
             conn.createStatement().execute("CREATE " + (localIndex ? " LOCAL " : "") + " INDEX " + indexName + " ON " + fullTableName
-   + " (char_col1 ASC, int_col1 ASC) INCLUDE (long_col1, long_col2)");
+                    + " (char_col1 ASC, int_col1 ASC) INCLUDE (long_col1, long_col2)");
 
             String query = "SELECT char_col1, int_col1, long_col2 from " + fullTableName;
             ResultSet rs = conn.createStatement().executeQuery("EXPLAIN " + query);
@@ -200,7 +200,7 @@ public class MutableIndexIT extends ParallelStatsDisabledIT {
             query = "SELECT b.* from " + fullTableName + " where int_col1 = 4";
             rs = 

[02/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/d56fd3c9/phoenix-core/src/main/java/org/apache/phoenix/schema/PColumnImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/PColumnImpl.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/PColumnImpl.java
index 45aca98..a267629 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/PColumnImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/PColumnImpl.java
@@ -17,6 +17,7 @@
  */
 package org.apache.phoenix.schema;
 
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.util.ByteStringer;
 import org.apache.phoenix.coprocessor.generated.PTableProtos;
 import org.apache.phoenix.query.QueryConstants;
@@ -42,36 +43,63 @@ public class PColumnImpl implements PColumn {
     private boolean isRowTimestamp;
     private boolean isDynamic;
     private byte[] columnQualifierBytes;
-
+    private boolean derived;
+    private long timestamp;
+
     public PColumnImpl() {
     }
 
-    public PColumnImpl(PName name,
-                       PName familyName,
-                       PDataType dataType,
-                       Integer maxLength,
-                       Integer scale,
-                       boolean nullable,
-                       int position,
-                       SortOrder sortOrder, Integer arrSize, byte[] viewConstant, boolean isViewReferenced, String expressionStr, boolean isRowTimestamp, boolean isDynamic, byte[] columnQualifierBytes) {
-        init(name, familyName, dataType, maxLength, scale, nullable, position, sortOrder, arrSize, viewConstant, isViewReferenced, expressionStr, isRowTimestamp, isDynamic, columnQualifierBytes);
+    public PColumnImpl(PColumn column, int position) {
+        this(column, column.isDerived(), position);
     }
 
-    public PColumnImpl(PColumn column, int position) {
+    public PColumnImpl(PColumn column, byte[] viewConstant, boolean isViewReferenced) {
+        this(column.getName(), column.getFamilyName(), column.getDataType(), column.getMaxLength(),
+                column.getScale(), column.isNullable(), column.getPosition(), column.getSortOrder(), column.getArraySize(), viewConstant, isViewReferenced, column.getExpressionStr(), column.isRowTimestamp(), column.isDynamic(), column.getColumnQualifierBytes(),
+                column.getTimestamp(), column.isDerived());
+    }
+
+    public PColumnImpl(PColumn column, boolean derivedColumn, int position) {
+        this(column, derivedColumn, position, column.getViewConstant());
+    }
+
+    public PColumnImpl(PColumn column, boolean derivedColumn, int position, byte[] viewConstant) {
         this(column.getName(), column.getFamilyName(), column.getDataType(), column.getMaxLength(),
-                column.getScale(), column.isNullable(), position, column.getSortOrder(), column.getArraySize(), column.getViewConstant(), column.isViewReferenced(), column.getExpressionStr(), column.isRowTimestamp(), column.isDynamic(), column.getColumnQualifierBytes());
+                column.getScale(), column.isNullable(), position, column.getSortOrder(), column.getArraySize(), viewConstant, column.isViewReferenced(), column.getExpressionStr(), column.isRowTimestamp(), column.isDynamic(), column.getColumnQualifierBytes(),
+                column.getTimestamp(), derivedColumn);
+    }
+
+    public PColumnImpl(PName name, PName familyName, PDataType dataType, Integer maxLength, Integer scale, boolean nullable,
+            int position, SortOrder sortOrder, Integer arrSize, byte[] viewConstant, boolean isViewReferenced, String expressionStr, boolean isRowTimestamp, boolean isDynamic,
+            byte[] columnQualifierBytes, long timestamp) {
+        this(name, familyName, dataType, maxLength, scale, nullable, position, sortOrder, arrSize, viewConstant, isViewReferenced, expressionStr, isRowTimestamp, isDynamic, columnQualifierBytes, timestamp, false);
+    }
+
+    public PColumnImpl(PName name, PName familyName, PDataType dataType, Integer maxLength, Integer scale, boolean nullable,
+            int position, SortOrder sortOrder, Integer arrSize, byte[] viewConstant, boolean isViewReferenced, String expressionStr, boolean isRowTimestamp, boolean isDynamic,
+            byte[] columnQualifierBytes, long timestamp, boolean derived) {
+        init(name, familyName, dataType, maxLength, scale, nullable, position, sortOrder, arrSize, viewConstant, isViewReferenced, expressionStr, isRowTimestamp, isDynamic, columnQualifierBytes, timestamp, derived);
+    }
+
+    private PColumnImpl(PName familyName, PName columnName, Long timestamp) {
+        this.familyName = familyName;
+        this.name = columnName;
+        this.derived = true;
+        if (timestamp!=null) {
+            this.timestamp = timestamp;
+        }
     }
 
-    private void init(PName name,
-            PName familyName,
-            PDataType 

[03/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/d56fd3c9/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
index 9f4cf97..56d8698 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
@@ -30,9 +30,10 @@ import java.util.List;
 
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.CellComparatorImpl;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.KeyValue;
 import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.compile.ColumnProjector;
 import org.apache.phoenix.compile.ExpressionProjector;
@@ -41,7 +42,12 @@ import org.apache.phoenix.compile.StatementContext;
 import org.apache.phoenix.coprocessor.MetaDataProtocol;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.exception.SQLExceptionInfo;
+import org.apache.phoenix.expression.Expression;
+import org.apache.phoenix.expression.KeyValueColumnExpression;
+import org.apache.phoenix.expression.LikeExpression;
+import org.apache.phoenix.expression.LiteralExpression;
 import org.apache.phoenix.expression.RowKeyColumnExpression;
+import org.apache.phoenix.expression.StringBasedLikeExpression;
 import org.apache.phoenix.expression.function.ExternalSqlTypeIdFunction;
 import org.apache.phoenix.expression.function.IndexStateNameFunction;
 import org.apache.phoenix.expression.function.SQLIndexTypeFunction;
@@ -49,26 +55,34 @@ import 
org.apache.phoenix.expression.function.SQLTableTypeFunction;
 import org.apache.phoenix.expression.function.SQLViewTypeFunction;
 import org.apache.phoenix.expression.function.SqlTypeNameFunction;
 import org.apache.phoenix.expression.function.TransactionProviderNameFunction;
-import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 import org.apache.phoenix.hbase.index.util.VersionUtil;
-import org.apache.phoenix.iterate.DelegateResultIterator;
 import org.apache.phoenix.iterate.MaterializedResultIterator;
 import org.apache.phoenix.iterate.ResultIterator;
+import org.apache.phoenix.parse.LikeParseNode.LikeType;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.schema.MetaDataClient;
+import org.apache.phoenix.schema.PColumn;
+import org.apache.phoenix.schema.PColumnImpl;
 import org.apache.phoenix.schema.PDatum;
 import org.apache.phoenix.schema.PName;
+import org.apache.phoenix.schema.PNameFactory;
+import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTable.LinkType;
 import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.RowKeyValueAccessor;
 import org.apache.phoenix.schema.SortOrder;
-import org.apache.phoenix.schema.tuple.ResultTuple;
+import org.apache.phoenix.schema.tuple.MultiKeyValueTuple;
 import org.apache.phoenix.schema.tuple.SingleKeyValueTuple;
 import org.apache.phoenix.schema.tuple.Tuple;
+import org.apache.phoenix.schema.types.PBoolean;
 import org.apache.phoenix.schema.types.PDataType;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.schema.types.PSmallint;
+import org.apache.phoenix.schema.types.PVarbinary;
 import org.apache.phoenix.schema.types.PVarchar;
 import org.apache.phoenix.util.ByteUtil;
 import org.apache.phoenix.util.PhoenixKeyValueUtil;
+import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.StringUtil;
 
@@ -354,6 +368,11 @@ public class PhoenixDatabaseMetaData implements DatabaseMetaData {
     public static final byte[] COLUMN_QUALIFIER_COUNTER_BYTES = Bytes.toBytes(COLUMN_QUALIFIER_COUNTER);
     public static final String USE_STATS_FOR_PARALLELIZATION = "USE_STATS_FOR_PARALLELIZATION";
     public static final byte[] USE_STATS_FOR_PARALLELIZATION_BYTES = Bytes.toBytes(USE_STATS_FOR_PARALLELIZATION);
+
+    public static final String SYSTEM_CHILD_LINK_TABLE = "CHILD_LINK";
+    public static final String SYSTEM_CHILD_LINK_NAME = SchemaUtil.getTableName(SYSTEM_CATALOG_SCHEMA, SYSTEM_CHILD_LINK_TABLE);
+    public static final byte[] SYSTEM_CHILD_LINK_NAME_BYTES = Bytes.toBytes(SYSTEM_CHILD_LINK_NAME);
+    public static final TableName SYSTEM_LINK_HBASE_TABLE_NAME = TableName.valueOf(SYSTEM_CHILD_LINK_NAME);
 
 
     //SYSTEM:LOG
@@ -485,179 +504,353 @@ public class PhoenixDatabaseMetaData implements DatabaseMetaData {
     private static void appendConjunction(StringBuilder buf) {
         buf.append(buf.length() == 0 ? "" : " and ");
     }
-
+
+    private static 

[07/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/d56fd3c9/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
index 72dd26f..558b92e 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
@@ -28,172 +28,119 @@ import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
+import java.io.IOException;
+import java.math.BigDecimal;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
 import java.util.List;
+import java.util.Map;
 import java.util.Properties;
 
+import org.apache.curator.shaded.com.google.common.collect.Lists;
+import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
 import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
+import org.apache.hadoop.hbase.client.Mutation;
+import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
+import org.apache.hadoop.hbase.coprocessor.ObserverContext;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
+import org.apache.hadoop.hbase.coprocessor.SimpleRegionObserver;
+import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
 import org.apache.phoenix.compile.QueryPlan;
+import org.apache.phoenix.exception.PhoenixIOException;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
+import org.apache.phoenix.jdbc.PhoenixStatement;
 import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.ColumnAlreadyExistsException;
+import org.apache.phoenix.schema.PColumn;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.ReadOnlyTableException;
 import org.apache.phoenix.schema.TableNotFoundException;
+import org.apache.phoenix.util.MetaDataUtil;
 import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
 import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.TestUtil;
+import org.junit.BeforeClass;
 import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameters;
 
+import com.google.common.base.Predicate;
+import com.google.common.collect.Collections2;
+import com.google.common.collect.Maps;
 
+@RunWith(Parameterized.class)
+public class ViewIT extends SplitSystemCatalogIT {
 
-public class ViewIT extends BaseViewIT {
-
-    public ViewIT(boolean transactional) {
-        super(transactional);
-    }
-
-    @Test
-    public void testReadOnlyOnReadOnlyView() throws Exception {
-        Connection earlierCon = DriverManager.getConnection(getUrl());
-        Connection conn = DriverManager.getConnection(getUrl());
-        String ddl = "CREATE TABLE " + fullTableName + " (k INTEGER NOT NULL PRIMARY KEY, v1 DATE) "+ tableDDLOptions;
-        conn.createStatement().execute(ddl);
-        String fullParentViewName = "V_" + generateUniqueName();
-        ddl = "CREATE VIEW " + fullParentViewName + " (v2 VARCHAR) AS SELECT * FROM " + fullTableName + " WHERE k > 5";
-        conn.createStatement().execute(ddl);
-        try {
-            conn.createStatement().execute("UPSERT INTO " + fullParentViewName + " VALUES(1)");
-            fail();
-        } catch (ReadOnlyTableException e) {
-
-        }
-        for (int i = 0; i < 10; i++) {
-            conn.createStatement().execute("UPSERT INTO " + fullTableName + " VALUES(" + i + ")");
-        }
-        conn.commit();
-
-        analyzeTable(conn, fullParentViewName, transactional);
-
-        List<KeyRange> splits = getAllSplits(conn, fullParentViewName);
-        assertEquals(4, splits.size());
-
-        int count = 0;
-        ResultSet rs = conn.createStatement().executeQuery("SELECT k FROM " + fullTableName);
-        while (rs.next()) {
-            assertEquals(count++, rs.getInt(1));
-        }
-        assertEquals(10, count);
-
-        count = 0;
-        rs = conn.createStatement().executeQuery("SELECT k FROM " + fullParentViewName);
-  

[09/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/d56fd3c9/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
index 472331b..e39d492 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
@@ -21,6 +21,8 @@ import static org.apache.phoenix.exception.SQLExceptionCode.CANNOT_MUTATE_TABLE;
 import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
@@ -33,9 +35,13 @@ import java.sql.ResultSetMetaData;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.Collection;
-import java.util.Properties;
+import java.util.List;
 
 import org.apache.commons.lang.ArrayUtils;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.client.Table;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.coprocessor.TephraTransactionalProcessor;
@@ -43,27 +49,32 @@ import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
 import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.ColumnNotFoundException;
+import org.apache.phoenix.schema.PColumn;
 import org.apache.phoenix.schema.PName;
 import org.apache.phoenix.schema.PNameFactory;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.schema.PTableType;
-import org.apache.phoenix.util.PropertiesUtil;
-import org.apache.phoenix.util.StringUtil;
-import org.apache.phoenix.util.TestUtil;
+import org.apache.phoenix.schema.TableNotFoundException;
+import org.apache.phoenix.util.IndexUtil;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
 import org.junit.Test;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
 import org.junit.runners.Parameterized.Parameters;
 
+import com.google.common.base.Function;
+import com.google.common.collect.Lists;
+
 @RunWith(Parameterized.class)
-public class AlterTableWithViewsIT extends ParallelStatsDisabledIT {
-
+public class AlterTableWithViewsIT extends SplitSystemCatalogIT {
+
 private final boolean isMultiTenant;
 private final boolean columnEncoded;
-
-    private final String TENANT_SPECIFIC_URL1 = getUrl() + ';' + TENANT_ID_ATTRIB + "=tenant1";
-    private final String TENANT_SPECIFIC_URL2 = getUrl() + ';' + TENANT_ID_ATTRIB + "=tenant2";
+    private final String TENANT_SPECIFIC_URL1 = getUrl() + ';' + TENANT_ID_ATTRIB + "=" + TENANT1;
+    private final String TENANT_SPECIFIC_URL2 = getUrl() + ';' + TENANT_ID_ATTRIB + "=" + TENANT2;
 
     public AlterTableWithViewsIT(boolean isMultiTenant, boolean columnEncoded) {
         this.isMultiTenant = isMultiTenant;
@@ -77,6 +88,14 @@ public class AlterTableWithViewsIT extends ParallelStatsDisabledIT {
                 { true, false }, { true, true } });
     }
 
+    // transform PColumn to String
+    private Function<PColumn, String> function = new Function<PColumn, String>(){
+        @Override
+        public String apply(PColumn input) {
+            return input.getName().getString();
+        }
+    };
+
     private String generateDDL(String format) {
         return generateDDL("", format);
     }
@@ -101,8 +120,9 @@ public class AlterTableWithViewsIT extends ParallelStatsDisabledIT {
     public void testAddNewColumnsToBaseTableWithViews() throws Exception {
         try (Connection conn = DriverManager.getConnection(getUrl());
                 Connection viewConn = isMultiTenant ? DriverManager.getConnection(TENANT_SPECIFIC_URL1) : conn ) {
-            String tableName = generateUniqueName();
-            String viewOfTable = tableName + "_VIEW";
+            String tableName = SchemaUtil.getTableName(SCHEMA1, generateUniqueName());
+            String viewOfTable = SchemaUtil.getTableName(SCHEMA2, generateUniqueName());
+
             String ddlFormat = "CREATE TABLE IF NOT EXISTS " + tableName + " ("
                     + " %s ID char(1) NOT NULL,"
                     + " COL1 integer NOT NULL,"
@@ -113,12 +133,13 @@ public class AlterTableWithViewsIT extends ParallelStatsDisabledIT {
             assertTableDefinition(conn, 

[05/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/d56fd3c9/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 8a32d62..e24de29 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -17,8 +17,6 @@
  */
 package org.apache.phoenix.coprocessor;
 
-import static com.google.common.base.Preconditions.checkArgument;
-import static com.google.common.base.Preconditions.checkState;
 import static org.apache.hadoop.hbase.KeyValueUtil.createFirstOnRow;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.APPEND_ONLY_SCHEMA_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.ARRAY_SIZE_BYTES;
@@ -55,7 +53,6 @@ import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.MULTI_TENANT_BYTES
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.NULLABLE_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.NUM_ARGS_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.ORDINAL_POSITION_BYTES;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.PARENT_TENANT_ID_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.PK_NAME_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.RETURN_TYPE_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SALT_BUCKETS_BYTES;
@@ -78,9 +75,8 @@ import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.VIEW_INDEX_ID_BYTE
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.VIEW_STATEMENT_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.VIEW_TYPE_BYTES;
 import static org.apache.phoenix.query.QueryConstants.DIVERGED_VIEW_BASE_COLUMN_COUNT;
-import static org.apache.phoenix.query.QueryConstants.SEPARATOR_BYTE_ARRAY;
 import static org.apache.phoenix.schema.PTableType.INDEX;
-import static org.apache.phoenix.util.ByteUtil.EMPTY_BYTE_ARRAY;
+import static org.apache.phoenix.schema.PTableType.TABLE;
 import static org.apache.phoenix.util.SchemaUtil.getVarCharLength;
 import static org.apache.phoenix.util.SchemaUtil.getVarChars;
 
@@ -91,14 +87,16 @@ import java.sql.SQLException;
 import java.sql.Statement;
 import java.util.ArrayList;
 import java.util.Arrays;
-import java.util.Collection;
 import java.util.Collections;
 import java.util.Comparator;
 import java.util.HashSet;
 import java.util.Iterator;
 import java.util.List;
+import java.util.ListIterator;
 import java.util.Map;
+import java.util.Map.Entry;
 import java.util.NavigableMap;
+import java.util.Properties;
 import java.util.Set;
 
 import org.apache.hadoop.conf.Configuration;
@@ -116,19 +114,14 @@ import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.Get;
 import org.apache.hadoop.hbase.client.Mutation;
 import org.apache.hadoop.hbase.client.Put;
+import org.apache.hadoop.hbase.client.RegionInfo;
 import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.client.Table;
 import org.apache.hadoop.hbase.coprocessor.CoprocessorException;
 import org.apache.hadoop.hbase.coprocessor.RegionCoprocessor;
 import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
-import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
-import org.apache.hadoop.hbase.filter.Filter;
-import org.apache.hadoop.hbase.filter.FilterList;
 import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
-import org.apache.hadoop.hbase.filter.PageFilter;
-import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.ipc.RpcCall;
 import org.apache.hadoop.hbase.ipc.RpcUtil;
@@ -141,6 +134,7 @@ import org.apache.hadoop.hbase.util.Pair;
 import org.apache.hadoop.hbase.util.VersionInfo;
 import org.apache.phoenix.cache.GlobalCache;
 import org.apache.phoenix.cache.GlobalCache.FunctionBytesPtr;
+import org.apache.phoenix.compile.ColumnNameTrackingExpressionCompiler;
 import org.apache.phoenix.compile.ColumnResolver;
 import org.apache.phoenix.compile.FromCompiler;
 import org.apache.phoenix.compile.QueryPlan;
@@ -184,6 +178,7 @@ import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
 import org.apache.phoenix.jdbc.PhoenixResultSet;
 import org.apache.phoenix.jdbc.PhoenixStatement;
 import org.apache.phoenix.metrics.Metrics;
+import org.apache.phoenix.parse.DropTableStatement;
 import org.apache.phoenix.parse.LiteralParseNode;
 import org.apache.phoenix.parse.PFunction;
 import 

[01/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/5.x-HBase-2.0 0af8b1e32 -> d56fd3c99


http://git-wip-us.apache.org/repos/asf/phoenix/blob/d56fd3c9/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java 
b/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
index 1634159..aa3e1a6 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
@@ -99,6 +99,9 @@ import org.apache.phoenix.coprocessor.MetaDataEndpointImpl;
 import org.apache.phoenix.coprocessor.MetaDataProtocol;
 import org.apache.phoenix.coprocessor.MetaDataProtocol.MetaDataMutationResult;
 import org.apache.phoenix.coprocessor.MetaDataProtocol.MutationCode;
+import org.apache.phoenix.coprocessor.TableInfo;
+import org.apache.phoenix.coprocessor.TableViewFinderResult;
+import org.apache.phoenix.coprocessor.ViewFinder;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
 import org.apache.phoenix.query.QueryConstants;
@@ -178,11 +181,6 @@ public class UpgradeUtil {
     private static final String DELETE_LINK = "DELETE FROM " + SYSTEM_CATALOG_SCHEMA + "." + SYSTEM_CATALOG_TABLE
             + " WHERE (" + TABLE_SCHEM + "=? OR (" + TABLE_SCHEM + " IS NULL AND ? IS NULL)) AND " + TABLE_NAME + "=? AND " + COLUMN_FAMILY + "=? AND " + LINK_TYPE + " = " + LinkType.PHYSICAL_TABLE.getSerializedValue();
 
-    private static final String GET_VIEWS_QUERY = "SELECT " + TENANT_ID + "," + TABLE_SCHEM + "," + TABLE_NAME
-            + " FROM " + SYSTEM_CATALOG_SCHEMA + "." + SYSTEM_CATALOG_TABLE + " WHERE " + COLUMN_FAMILY + " = ? AND "
-            + LINK_TYPE + " = " + LinkType.PHYSICAL_TABLE.getSerializedValue() + " AND ( " + TABLE_TYPE + "=" + "'"
-            + PTableType.VIEW.getSerializedValue() + "' OR " + TABLE_TYPE + " IS NULL) ORDER BY "+TENANT_ID;
-
 private UpgradeUtil() {
 }
 
@@ -1153,6 +1151,78 @@ public class UpgradeUtil {
 }
 }
 
+    /**
+     * Move child links from SYSTEM.CATALOG to SYSTEM.CHILD_LINK
+     * @param oldMetaConnection caller should take care of closing the passed connection appropriately
+     * @throws SQLException
+     */
+    public static void moveChildLinks(PhoenixConnection oldMetaConnection) throws SQLException {
+        PhoenixConnection metaConnection = null;
+        try {
+            // Need to use our own connection with the max timestamp to be able to read all data from SYSTEM.CATALOG
+            metaConnection = new PhoenixConnection(oldMetaConnection, HConstants.LATEST_TIMESTAMP);
+            logger.info("Upgrading metadata to add parent to child links for views");
+            metaConnection.commit();
+            String createChildLink = "UPSERT INTO SYSTEM.CHILD_LINK(TENANT_ID, TABLE_SCHEM, TABLE_NAME, COLUMN_NAME, COLUMN_FAMILY, LINK_TYPE) " +
+                    "SELECT TENANT_ID, TABLE_SCHEM, TABLE_NAME, COLUMN_NAME, COLUMN_FAMILY, LINK_TYPE " +
+                    "FROM SYSTEM.CATALOG " +
+                    "WHERE LINK_TYPE = 4";
+            metaConnection.createStatement().execute(createChildLink);
+            metaConnection.commit();
+            String deleteChildLink = "DELETE FROM SYSTEM.CATALOG WHERE LINK_TYPE = 4";
+            metaConnection.createStatement().execute(deleteChildLink);
+            metaConnection.commit();
+            metaConnection.getQueryServices().clearCache();
+        } finally {
+            if (metaConnection != null) {
+                metaConnection.close();
+            }
+        }
+    }
+
+    public static void addViewIndexToParentLinks(PhoenixConnection oldMetaConnection) throws SQLException {
+        // Need to use our own connection with the max timestamp to be able to read all data from SYSTEM.CATALOG
+        try (PhoenixConnection queryConn = new PhoenixConnection(oldMetaConnection, HConstants.LATEST_TIMESTAMP);
+                PhoenixConnection upsertConn = new PhoenixConnection(oldMetaConnection, HConstants.LATEST_TIMESTAMP)) {
+            logger.info("Upgrading metadata to add parent links for indexes on views");
+            String indexQuery = "SELECT TENANT_ID, TABLE_SCHEM, TABLE_NAME, COLUMN_FAMILY FROM SYSTEM.CATALOG WHERE LINK_TYPE = "
+                    + LinkType.INDEX_TABLE.getSerializedValue();
+            String createViewIndexLink = "UPSERT INTO SYSTEM.CATALOG (TENANT_ID, TABLE_SCHEM, TABLE_NAME, COLUMN_FAMILY, LINK_TYPE) VALUES (?,?,?,?,?) ";
+            ResultSet rs = queryConn.createStatement().executeQuery(indexQuery);
+            String prevTenantId = null;
+            PhoenixConnection metaConn = queryConn;
+            Properties 
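
The moveChildLinks() upgrade above copies the parent-to-child link rows (LINK_TYPE = 4) into SYSTEM.CHILD_LINK and then deletes them from SYSTEM.CATALOG. A hypothetical post-upgrade spot check, not part of the patch (the JDBC URL is a placeholder):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

// Hypothetical verification utility
public class ChildLinkMoveCheck {

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            ResultSet rs = conn.createStatement().executeQuery(
                    "SELECT COUNT(*) FROM SYSTEM.CATALOG WHERE LINK_TYPE = 4");
            rs.next();
            System.out.println("child links left in SYSTEM.CATALOG: " + rs.getLong(1)); // expect 0
            rs = conn.createStatement().executeQuery(
                    "SELECT COUNT(*) FROM SYSTEM.CHILD_LINK WHERE LINK_TYPE = 4");
            rs.next();
            System.out.println("child links in SYSTEM.CHILD_LINK: " + rs.getLong(1));
        }
    }
}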

[10/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and 
Rahul Gidwani)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/3987c123
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/3987c123
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/3987c123

Branch: refs/heads/4.x-cdh5.11
Commit: 3987c1230af5e24bab1854b7677d826043500873
Parents: 6b89aa2
Author: Thomas D'Silva 
Authored: Sat Jul 14 11:34:47 2018 -0700
Committer: Thomas D'Silva 
Committed: Wed Jul 18 18:00:05 2018 -0700

--
 .../StatisticsCollectionRunTrackerIT.java   |2 +-
 .../AlterMultiTenantTableWithViewsIT.java   |  284 +-
 .../apache/phoenix/end2end/AlterTableIT.java|   45 +-
 .../phoenix/end2end/AlterTableWithViewsIT.java  |  545 ++--
 .../phoenix/end2end/AppendOnlySchemaIT.java |4 +-
 .../end2end/BaseTenantSpecificViewIndexIT.java  |   38 +-
 .../end2end/ExplainPlanWithStatsEnabledIT.java  |   69 +-
 .../MigrateSystemTablesToSystemNamespaceIT.java |   38 +-
 .../apache/phoenix/end2end/PhoenixDriverIT.java |   37 +-
 .../end2end/QueryDatabaseMetaDataIT.java|9 +-
 .../apache/phoenix/end2end/SaltedViewIT.java|   45 -
 .../phoenix/end2end/SplitSystemCatalogIT.java   |   80 +
 .../end2end/SplitSystemCatalogTests.java|   11 +
 .../StatsEnabledSplitSystemCatalogIT.java   |  244 ++
 .../SystemCatalogCreationOnConnectionIT.java|   34 +-
 .../apache/phoenix/end2end/SystemCatalogIT.java |   31 +-
 .../end2end/TenantSpecificTablesDDLIT.java  |   13 +-
 .../end2end/TenantSpecificViewIndexIT.java  |   68 +-
 .../org/apache/phoenix/end2end/UpgradeIT.java   |  319 +--
 .../java/org/apache/phoenix/end2end/ViewIT.java |  868 --
 .../phoenix/end2end/index/BaseIndexIT.java  |   43 +-
 .../index/ChildViewsUseParentViewIndexIT.java   |7 +-
 .../phoenix/end2end/index/DropColumnIT.java |  117 -
 .../phoenix/end2end/index/IndexMetadataIT.java  |4 +-
 .../phoenix/end2end/index/MutableIndexIT.java   |  842 +++---
 .../phoenix/end2end/index/ViewIndexIT.java  |   68 +-
 .../apache/phoenix/execute/PartialCommitIT.java |4 +-
 .../SystemCatalogWALEntryFilterIT.java  |   85 +-
 .../org/apache/phoenix/rpc/UpdateCacheIT.java   |9 +-
 .../ColumnNameTrackingExpressionCompiler.java   |   46 +
 .../phoenix/compile/CreateTableCompiler.java|2 +-
 .../apache/phoenix/compile/FromCompiler.java|   15 +-
 .../phoenix/compile/ListJarsQueryPlan.java  |2 +-
 .../apache/phoenix/compile/TraceQueryPlan.java  |2 +-
 .../apache/phoenix/compile/UnionCompiler.java   |2 +-
 .../apache/phoenix/compile/UpsertCompiler.java  |2 +-
 .../coprocessor/MetaDataEndpointImpl.java   | 2577 +-
 .../phoenix/coprocessor/MetaDataProtocol.java   |3 +-
 .../apache/phoenix/coprocessor/TableInfo.java   |   79 +
 .../coprocessor/TableViewFinderResult.java  |   48 +
 .../apache/phoenix/coprocessor/ViewFinder.java  |  144 +
 .../coprocessor/WhereConstantParser.java|  106 +
 .../coprocessor/generated/MetaDataProtos.java   |  626 -
 .../coprocessor/generated/PTableProtos.java |  323 ++-
 .../phoenix/expression/LikeExpression.java  |2 +-
 .../apache/phoenix/jdbc/PhoenixConnection.java  |8 +-
 .../phoenix/jdbc/PhoenixDatabaseMetaData.java   |  534 ++--
 .../apache/phoenix/jdbc/PhoenixStatement.java   |8 +-
 .../phoenix/parse/DropTableStatement.java   |8 +-
 .../apache/phoenix/parse/ParseNodeFactory.java  |2 +-
 .../phoenix/query/ConnectionQueryServices.java  |   17 +-
 .../query/ConnectionQueryServicesImpl.java  |   43 +-
 .../query/ConnectionlessQueryServicesImpl.java  |   13 +-
 .../query/DelegateConnectionQueryServices.java  |8 +-
 .../apache/phoenix/query/QueryConstants.java|   14 +-
 .../org/apache/phoenix/query/QueryServices.java |2 +
 .../phoenix/query/QueryServicesOptions.java |2 +
 .../SystemCatalogWALEntryFilter.java|   45 +-
 .../apache/phoenix/schema/DelegateColumn.java   |   15 +
 .../apache/phoenix/schema/MetaDataClient.java   |   57 +-
 .../phoenix/schema/MetaDataSplitPolicy.java |   26 +-
 .../java/org/apache/phoenix/schema/PColumn.java |   12 +
 .../org/apache/phoenix/schema/PColumnImpl.java  |  113 +-
 .../apache/phoenix/schema/PMetaDataImpl.java|3 +-
 .../java/org/apache/phoenix/schema/PTable.java  |   17 +-
 .../org/apache/phoenix/schema/PTableImpl.java   |  279 +-
 .../org/apache/phoenix/schema/PTableKey.java|4 +-
 .../schema/ParentTableNotFoundException.java|   30 +
 .../org/apache/phoenix/schema/SaltingUtil.java  |4 +-
 .../apache/phoenix/schema/TableProperty.java|   22 +-
 .../java/org/apache/phoenix/util/IndexUtil.java |   16 +-
 .../org/apache/phoenix/util/MetaDataUtil.java   |  171 +-
 .../org/apache/phoenix/util/PhoenixRuntime.java |1 -
 

[04/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/3987c123/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
index 883f96d..29cf2a3 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
@@ -91,8 +91,9 @@ public abstract class MetaDataProtocol extends MetaDataService {
     public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_12_0 = MIN_SYSTEM_TABLE_TIMESTAMP_4_11_0;
     public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_13_0 = MIN_SYSTEM_TABLE_TIMESTAMP_4_11_0;
     public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_14_0 = MIN_TABLE_TIMESTAMP + 28;
+    public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_15_0 = MIN_TABLE_TIMESTAMP + 29;
     // MIN_SYSTEM_TABLE_TIMESTAMP needs to be set to the max of all the MIN_SYSTEM_TABLE_TIMESTAMP_* constants
-    public static final long MIN_SYSTEM_TABLE_TIMESTAMP = MIN_SYSTEM_TABLE_TIMESTAMP_4_14_0;
+    public static final long MIN_SYSTEM_TABLE_TIMESTAMP = MIN_SYSTEM_TABLE_TIMESTAMP_4_15_0;
     // Version below which we should disallow usage of mutable secondary indexing.
     public static final int MUTABLE_SI_VERSION_THRESHOLD = VersionUtil.encodeVersion("0", "94", "10");
     public static final int MAX_LOCAL_SI_VERSION_DISALLOW = VersionUtil.encodeVersion("0", "98", "8");

http://git-wip-us.apache.org/repos/asf/phoenix/blob/3987c123/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TableInfo.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TableInfo.java 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TableInfo.java
new file mode 100644
index 000..b1c5f65
--- /dev/null
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TableInfo.java
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.coprocessor;
+
+import java.util.Arrays;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.util.SchemaUtil;
+
+public class TableInfo {
+
+private final byte[] tenantId;
+private final byte[] schema;
+private final byte[] name;
+
+public TableInfo(byte[] tenantId, byte[] schema, byte[] name) {
+this.tenantId = tenantId;
+this.schema = schema;
+this.name = name;
+}
+
+public byte[] getRowKeyPrefix() {
+return SchemaUtil.getTableKey(tenantId, schema, name);
+}
+
+@Override
+public String toString() {
+return Bytes.toStringBinary(getRowKeyPrefix());
+}
+
+public byte[] getTenantId() {
+return tenantId;
+}
+
+public byte[] getSchemaName() {
+return schema;
+}
+
+public byte[] getTableName() {
+return name;
+}
+
+@Override
+public int hashCode() {
+final int prime = 31;
+int result = 1;
+result = prime * result + Arrays.hashCode(name);
+result = prime * result + Arrays.hashCode(schema);
+result = prime * result + Arrays.hashCode(tenantId);
+return result;
+}
+
+@Override
+public boolean equals(Object obj) {
+if (this == obj) return true;
+if (obj == null) return false;
+if (getClass() != obj.getClass()) return false;
+TableInfo other = (TableInfo) obj;
+if (!Arrays.equals(name, other.name)) return false;
+if (!Arrays.equals(schema, other.schema)) return false;
+if (!Arrays.equals(tenantId, other.tenantId)) return false;
+return true;
+}
+}
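Since TableInfo defines equals and hashCode over the raw byte arrays, content-equal instances collapse in sets and work as map keys when deduplicating child tables. A small hypothetical usage sketch (assumes phoenix-core and hbase-common on the classpath):

import java.util.HashSet;
import java.util.Set;

import org.apache.hadoop.hbase.util.Bytes;
import org.apache.phoenix.coprocessor.TableInfo;

public class TableInfoSketch {
    public static void main(String[] args) {
        // two logically identical keys built from fresh byte arrays
        TableInfo a = new TableInfo(Bytes.toBytes("tenant1"), Bytes.toBytes("S1"), Bytes.toBytes("T"));
        TableInfo b = new TableInfo(Bytes.toBytes("tenant1"), Bytes.toBytes("S1"), Bytes.toBytes("T"));
        Set<TableInfo> seen = new HashSet<>();
        seen.add(a);
        // Arrays.hashCode/Arrays.equals compare contents, not references, so duplicates collapse
        System.out.println(seen.contains(b)); // prints: true
        System.out.println(b);                // prints the row-key prefix via Bytes.toStringBinary
    }
}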

http://git-wip-us.apache.org/repos/asf/phoenix/blob/3987c123/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TableViewFinderResult.java
--
diff --git 

[06/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/3987c123/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
index e968e99..4433e12 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
@@ -35,8 +35,6 @@ import java.util.Collection;
 import java.util.List;
 import java.util.Properties;
 
-import jline.internal.Log;
-
 import org.apache.hadoop.hbase.HRegionInfo;
 import org.apache.hadoop.hbase.MetaTableAccessor;
 import org.apache.hadoop.hbase.TableName;
@@ -75,25 +73,27 @@ import org.junit.runners.Parameterized.Parameters;
 
 import com.google.common.primitives.Doubles;
 
+import jline.internal.Log;
+
 @RunWith(Parameterized.class)
 public class MutableIndexIT extends ParallelStatsDisabledIT {
 
 protected final boolean localIndex;
 private final String tableDDLOptions;
-   
+
 public MutableIndexIT(Boolean localIndex, String txProvider, Boolean 
columnEncoded) {
-   this.localIndex = localIndex;
-   StringBuilder optionBuilder = new StringBuilder();
-   if (txProvider != null) {
-   optionBuilder.append("TRANSACTIONAL=true," + 
PhoenixDatabaseMetaData.TRANSACTION_PROVIDER + "='" + txProvider + "'");
-   }
-   if (!columnEncoded) {
+this.localIndex = localIndex;
+StringBuilder optionBuilder = new StringBuilder();
+if (txProvider != null) {
+optionBuilder.append("TRANSACTIONAL=true," + 
PhoenixDatabaseMetaData.TRANSACTION_PROVIDER + "='" + txProvider + "'");
+}
+if (!columnEncoded) {
 if (optionBuilder.length()!=0)
 optionBuilder.append(",");
 optionBuilder.append("COLUMN_ENCODED_BYTES=0");
 }
-   this.tableDDLOptions = optionBuilder.toString();
-   }
+this.tableDDLOptions = optionBuilder.toString();
+}
 
 private static Connection getConnection(Properties props) throws 
SQLException {
 
props.setProperty(QueryServices.INDEX_MUTATE_BATCH_SIZE_THRESHOLD_ATTRIB, 
Integer.toString(1));
@@ -106,7 +106,7 @@ public class MutableIndexIT extends ParallelStatsDisabledIT 
{
 return getConnection(props);
 }
 
-   
@Parameters(name="MutableIndexIT_localIndex={0},transactional={1},columnEncoded={2}")
 // name is used by failsafe as file name in reports
+
@Parameters(name="MutableIndexIT_localIndex={0},transactional={1},columnEncoded={2}")
 // name is used by failsafe as file name in reports
 public static Collection<Object[]> data() {
 return Arrays.asList(new Object[][] { 
 { false, null, false }, { false, null, true },
@@ -121,16 +121,16 @@ public class MutableIndexIT extends 
ParallelStatsDisabledIT {
 @Test
 public void testCoveredColumnUpdates() throws Exception {
 try (Connection conn = getConnection()) {
-   conn.setAutoCommit(false);
-   String tableName = "TBL_" + generateUniqueName();
-   String indexName = "IDX_" + generateUniqueName();
-   String fullTableName = 
SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, tableName);
-   String fullIndexName = 
SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, indexName);
+conn.setAutoCommit(false);
+String tableName = "TBL_" + generateUniqueName();
+String indexName = "IDX_" + generateUniqueName();
+String fullTableName = 
SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, tableName);
+String fullIndexName = 
SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, indexName);
 
-   TestUtil.createMultiCFTestTable(conn, fullTableName, 
tableDDLOptions);
+TestUtil.createMultiCFTestTable(conn, fullTableName, 
tableDDLOptions);
 populateMultiCFTestTable(fullTableName);
 conn.createStatement().execute("CREATE " + (localIndex ? " LOCAL " 
: "") + " INDEX " + indexName + " ON " + fullTableName 
-   + " (char_col1 ASC, int_col1 ASC) INCLUDE (long_col1, 
long_col2)");
++ " (char_col1 ASC, int_col1 ASC) INCLUDE (long_col1, 
long_col2)");
 
 String query = "SELECT char_col1, int_col1, long_col2 from " + 
fullTableName;
 ResultSet rs = conn.createStatement().executeQuery("EXPLAIN " + 
query);
@@ -203,7 +203,7 @@ public class MutableIndexIT extends ParallelStatsDisabledIT 
{
 query = "SELECT b.* from " + fullTableName + " where int_col1 
= 4";
 rs = 

[01/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/4.x-cdh5.11 6b89aa291 -> 3987c1230


http://git-wip-us.apache.org/repos/asf/phoenix/blob/3987c123/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java 
b/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
index b127408..9d5583b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
@@ -82,12 +82,12 @@ import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.Get;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
-import org.apache.hadoop.hbase.client.HTableInterface;
 import org.apache.hadoop.hbase.client.Mutation;
 import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.regionserver.LocalIndexSplitter;
 import org.apache.hadoop.hbase.snapshot.SnapshotCreationException;
@@ -96,6 +96,9 @@ import org.apache.phoenix.coprocessor.MetaDataEndpointImpl;
 import org.apache.phoenix.coprocessor.MetaDataProtocol;
 import org.apache.phoenix.coprocessor.MetaDataProtocol.MetaDataMutationResult;
 import org.apache.phoenix.coprocessor.MetaDataProtocol.MutationCode;
+import org.apache.phoenix.coprocessor.TableInfo;
+import org.apache.phoenix.coprocessor.TableViewFinderResult;
+import org.apache.phoenix.coprocessor.ViewFinder;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
 import org.apache.phoenix.query.QueryConstants;
@@ -175,11 +178,6 @@ public class UpgradeUtil {
 private static final String DELETE_LINK = "DELETE FROM " + 
SYSTEM_CATALOG_SCHEMA + "." + SYSTEM_CATALOG_TABLE
 + " WHERE (" + TABLE_SCHEM + "=? OR (" + TABLE_SCHEM + " IS NULL 
AND ? IS NULL)) AND " + TABLE_NAME + "=? AND " + COLUMN_FAMILY + "=? AND " + 
LINK_TYPE + " = " + LinkType.PHYSICAL_TABLE.getSerializedValue();
 
-private static final String GET_VIEWS_QUERY = "SELECT " + TENANT_ID + "," 
+ TABLE_SCHEM + "," + TABLE_NAME
-+ " FROM " + SYSTEM_CATALOG_SCHEMA + "." + SYSTEM_CATALOG_TABLE + 
" WHERE " + COLUMN_FAMILY + " = ? AND "
-+ LINK_TYPE + " = " + LinkType.PHYSICAL_TABLE.getSerializedValue() 
+ " AND ( " + TABLE_TYPE + "=" + "'"
-+ PTableType.VIEW.getSerializedValue() + "' OR " + TABLE_TYPE + " 
IS NULL) ORDER BY "+TENANT_ID;
-
 private UpgradeUtil() {
 }
 
@@ -225,8 +223,8 @@ public class UpgradeUtil {
 scan.setRaw(true);
 scan.setMaxVersions();
 ResultScanner scanner = null;
-HTableInterface source = null;
-HTableInterface target = null;
+Table source = null;
+Table target = null;
 try {
 source = conn.getQueryServices().getTable(sourceName);
 target = conn.getQueryServices().getTable(targetName);
@@ -646,7 +644,7 @@ public class UpgradeUtil {
 logger.info("Upgrading SYSTEM.SEQUENCE table");
 
 byte[] seqTableKey = SchemaUtil.getTableKey(null, 
PhoenixDatabaseMetaData.SYSTEM_SEQUENCE_SCHEMA, 
PhoenixDatabaseMetaData.SYSTEM_SEQUENCE_TABLE);
-HTableInterface sysTable = 
conn.getQueryServices().getTable(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES);
+Table sysTable = 
conn.getQueryServices().getTable(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES);
 try {
 logger.info("Setting SALT_BUCKETS property of SYSTEM.SEQUENCE to " 
+ SaltingUtil.MAX_BUCKET_NUM);
 KeyValue saltKV = KeyValueUtil.newKeyValue(seqTableKey, 
@@ -699,7 +697,7 @@ public class UpgradeUtil {
 Scan scan = new Scan();
 scan.setRaw(true);
 scan.setMaxVersions();
-HTableInterface seqTable = 
conn.getQueryServices().getTable(PhoenixDatabaseMetaData.SYSTEM_SEQUENCE_NAME_BYTES);
+Table seqTable = 
conn.getQueryServices().getTable(PhoenixDatabaseMetaData.SYSTEM_SEQUENCE_NAME_BYTES);
 try {
 boolean committed = false;
 logger.info("Adding salt byte to all SYSTEM.SEQUENCE 
rows");
@@ -1149,6 +1147,78 @@ public class UpgradeUtil {
 }
 }
 
+/**
+ * Move child links from SYSTEM.CATALOG to SYSTEM.CHILD_LINK
+ * @param oldMetaConnection caller should take care of closing the passed 
connection appropriately
+ * @throws SQLException
+ */
+public static void moveChildLinks(PhoenixConnection oldMetaConnection) 
throws SQLException {
+PhoenixConnection metaConnection = 

[07/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/3987c123/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
index 34292ba..fdfd75b 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
@@ -28,172 +28,119 @@ import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
+import java.io.IOException;
+import java.math.BigDecimal;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
 import java.util.List;
+import java.util.Map;
 import java.util.Properties;
 
+import org.apache.curator.shaded.com.google.common.collect.Lists;
+import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.HColumnDescriptor;
 import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.Mutation;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.coprocessor.ObserverContext;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
+import org.apache.hadoop.hbase.coprocessor.SimpleRegionObserver;
+import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
 import org.apache.phoenix.compile.QueryPlan;
+import org.apache.phoenix.exception.PhoenixIOException;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
+import org.apache.phoenix.jdbc.PhoenixStatement;
 import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.ColumnAlreadyExistsException;
+import org.apache.phoenix.schema.PColumn;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.ReadOnlyTableException;
 import org.apache.phoenix.schema.TableNotFoundException;
+import org.apache.phoenix.util.MetaDataUtil;
 import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
 import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.TestUtil;
+import org.junit.BeforeClass;
 import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameters;
 
+import com.google.common.base.Predicate;
+import com.google.common.collect.Collections2;
+import com.google.common.collect.Maps;
 
+@RunWith(Parameterized.class)
+public class ViewIT extends SplitSystemCatalogIT {
 
-public class ViewIT extends BaseViewIT {
-   
-public ViewIT(boolean transactional) {
-   super(transactional);
-   }
-
-@Test
-public void testReadOnlyOnReadOnlyView() throws Exception {
-Connection earlierCon = DriverManager.getConnection(getUrl());
-Connection conn = DriverManager.getConnection(getUrl());
-String ddl = "CREATE TABLE " + fullTableName + " (k INTEGER NOT NULL 
PRIMARY KEY, v1 DATE) "+ tableDDLOptions;
-conn.createStatement().execute(ddl);
-String fullParentViewName = "V_" + generateUniqueName();
-ddl = "CREATE VIEW " + fullParentViewName + " (v2 VARCHAR) AS SELECT * 
FROM " + fullTableName + " WHERE k > 5";
-conn.createStatement().execute(ddl);
-try {
-conn.createStatement().execute("UPSERT INTO " + fullParentViewName 
+ " VALUES(1)");
-fail();
-} catch (ReadOnlyTableException e) {
-
-}
-for (int i = 0; i < 10; i++) {
-conn.createStatement().execute("UPSERT INTO " + fullTableName + " 
VALUES(" + i + ")");
-}
-conn.commit();
-
-analyzeTable(conn, fullParentViewName, transactional);
-
-List<KeyRange> splits = getAllSplits(conn, fullParentViewName);
-assertEquals(4, splits.size());
-
-int count = 0;
-ResultSet rs = conn.createStatement().executeQuery("SELECT k FROM " + 
fullTableName);
-while (rs.next()) {
-assertEquals(count++, rs.getInt(1));
-}
-assertEquals(10, count);
-
-count = 0;
-rs = conn.createStatement().executeQuery("SELECT k FROM " + 
fullParentViewName);
-while (rs.next()) {
-

[08/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/3987c123/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogIT.java
new file mode 100644
index 0000000..51d3b86
--- /dev/null
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogIT.java
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.curator.shaded.com.google.common.collect.Lists;
+import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.BeforeClass;
+import org.junit.experimental.categories.Category;
+
+import com.google.common.collect.Maps;
+
+/**
+ * Base class for tests that run with split SYSTEM.CATALOG.
+ * 
+ */
+@Category(SplitSystemCatalogTests.class)
+public class SplitSystemCatalogIT extends BaseTest {
+
+protected static String SCHEMA1 = "SCHEMA1";
+protected static String SCHEMA2 = "SCHEMA2";
+protected static String SCHEMA3 = "SCHEMA3";
+protected static String SCHEMA4 = "SCHEMA4";
+
+protected static String TENANT1 = "tenant1";
+protected static String TENANT2 = "tenant2";
+
+@BeforeClass
+public static void doSetup() throws Exception {
+NUM_SLAVES_BASE = 6;
+Map<String, String> props = Collections.emptyMap();
+boolean splitSystemCatalog = (driver == null);
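+// driver is null only before the first test class starts the mini-cluster, so the split below runs once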
+setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
+// Split SYSTEM.CATALOG once after the mini-cluster is started
+if (splitSystemCatalog) {
+splitSystemCatalog();
+}
+}
+
+protected static void splitSystemCatalog() throws SQLException, Exception {
+try (Connection conn = DriverManager.getConnection(getUrl())) {
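+// opening (and immediately closing) a connection ensures the Phoenix driver and SYSTEM tables exist before splitting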
+}
+String tableName = "TABLE";
+String fullTableName1 = SchemaUtil.getTableName(SCHEMA1, tableName);
+String fullTableName2 = SchemaUtil.getTableName(SCHEMA2, tableName);
+String fullTableName3 = SchemaUtil.getTableName(SCHEMA3, tableName);
+String fullTableName4 = SchemaUtil.getTableName(SCHEMA4, tableName);
+ArrayList<String> tableList = Lists.newArrayList(fullTableName1, 
fullTableName2, fullTableName3);
+Map<String, List<String>> tenantToTableMap = Maps.newHashMap();
+tenantToTableMap.put(null, tableList);
+tenantToTableMap.put(TENANT1, Lists.newArrayList(fullTableName2, 
fullTableName3));
+tenantToTableMap.put(TENANT2, Lists.newArrayList(fullTableName4));
+splitSystemCatalog(tenantToTableMap);
+}
+
+}
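Concrete tests extend this base and inherit the pre-split schemas and tenant constants. A minimal hypothetical subclass for illustration (not part of this commit):

package org.apache.phoenix.end2end;

import java.sql.Connection;
import java.sql.DriverManager;

import org.junit.Test;

public class ExampleSplitCatalogIT extends SplitSystemCatalogIT {
    @Test
    public void testDdlAgainstPreSplitCatalog() throws Exception {
        try (Connection conn = DriverManager.getConnection(getUrl())) {
            // SCHEMA1/SCHEMA2 already have SYSTEM.CATALOG regions split around them
            conn.createStatement().execute(
                "CREATE TABLE " + SCHEMA1 + ".T1 (k INTEGER PRIMARY KEY, v VARCHAR)");
            conn.createStatement().execute(
                "CREATE VIEW " + SCHEMA2 + ".V1 AS SELECT * FROM " + SCHEMA1 + ".T1");
        }
    }
}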

http://git-wip-us.apache.org/repos/asf/phoenix/blob/3987c123/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogTests.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogTests.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogTests.java
new file mode 100644
index 0000000..27fc5c6
--- /dev/null
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogTests.java
@@ -0,0 +1,11 @@
+package org.apache.phoenix.end2end;
+
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+
+@Retention(RetentionPolicy.RUNTIME)
+@Target(ElementType.TYPE)
+public @interface SplitSystemCatalogTests {
+}

http://git-wip-us.apache.org/repos/asf/phoenix/blob/3987c123/phoenix-core/src/it/java/org/apache/phoenix/end2end/StatsEnabledSplitSystemCatalogIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/StatsEnabledSplitSystemCatalogIT.java
 

[05/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/3987c123/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index ae2fa66..5e8a5dc 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -17,8 +17,6 @@
  */
 package org.apache.phoenix.coprocessor;
 
-import static com.google.common.base.Preconditions.checkArgument;
-import static com.google.common.base.Preconditions.checkState;
 import static org.apache.hadoop.hbase.KeyValueUtil.createFirstOnRow;
 import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.APPEND_ONLY_SCHEMA_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.ARRAY_SIZE_BYTES;
@@ -55,7 +53,6 @@ import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.MULTI_TENANT_BYTES
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.NULLABLE_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.NUM_ARGS_BYTES;
 import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.ORDINAL_POSITION_BYTES;
-import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.PARENT_TENANT_ID_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.PK_NAME_BYTES;
 import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.RETURN_TYPE_BYTES;
 import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SALT_BUCKETS_BYTES;
@@ -78,9 +75,8 @@ import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.VIEW_INDEX_ID_BYTE
 import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.VIEW_STATEMENT_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.VIEW_TYPE_BYTES;
 import static 
org.apache.phoenix.query.QueryConstants.DIVERGED_VIEW_BASE_COLUMN_COUNT;
-import static org.apache.phoenix.query.QueryConstants.SEPARATOR_BYTE_ARRAY;
 import static org.apache.phoenix.schema.PTableType.INDEX;
-import static org.apache.phoenix.util.ByteUtil.EMPTY_BYTE_ARRAY;
+import static org.apache.phoenix.schema.PTableType.TABLE;
 import static org.apache.phoenix.util.SchemaUtil.getVarCharLength;
 import static org.apache.phoenix.util.SchemaUtil.getVarChars;
 
@@ -91,14 +87,16 @@ import java.sql.SQLException;
 import java.sql.Statement;
 import java.util.ArrayList;
 import java.util.Arrays;
-import java.util.Collection;
 import java.util.Collections;
 import java.util.Comparator;
 import java.util.HashSet;
 import java.util.Iterator;
 import java.util.List;
+import java.util.ListIterator;
 import java.util.Map;
+import java.util.Map.Entry;
 import java.util.NavigableMap;
+import java.util.Properties;
 import java.util.Set;
 
 import org.apache.hadoop.conf.Configuration;
@@ -108,26 +106,21 @@ import org.apache.hadoop.hbase.Coprocessor;
 import org.apache.hadoop.hbase.CoprocessorEnvironment;
 import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
 import org.apache.hadoop.hbase.KeyValue;
 import org.apache.hadoop.hbase.KeyValue.Type;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.Get;
-import org.apache.hadoop.hbase.client.HTableInterface;
 import org.apache.hadoop.hbase.client.Mutation;
 import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
 import org.apache.hadoop.hbase.coprocessor.CoprocessorException;
 import org.apache.hadoop.hbase.coprocessor.CoprocessorService;
 import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
-import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
-import org.apache.hadoop.hbase.filter.Filter;
-import org.apache.hadoop.hbase.filter.FilterList;
 import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
-import org.apache.hadoop.hbase.filter.PageFilter;
-import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.ipc.RpcServer.Call;
 import org.apache.hadoop.hbase.ipc.RpcUtil;
@@ -140,6 +133,7 @@ import org.apache.hadoop.hbase.util.Pair;
 import org.apache.hadoop.hbase.util.VersionInfo;
 import org.apache.phoenix.cache.GlobalCache;
 import org.apache.phoenix.cache.GlobalCache.FunctionBytesPtr;
+import org.apache.phoenix.compile.ColumnNameTrackingExpressionCompiler;
 import org.apache.phoenix.compile.ColumnResolver;
 import org.apache.phoenix.compile.FromCompiler;
 import org.apache.phoenix.compile.QueryPlan;
@@ -183,6 +177,7 @@ import 

[02/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/3987c123/phoenix-core/src/main/java/org/apache/phoenix/schema/PColumnImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/PColumnImpl.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/PColumnImpl.java
index 45aca98..a267629 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/PColumnImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/PColumnImpl.java
@@ -17,6 +17,7 @@
  */
 package org.apache.phoenix.schema;
 
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.util.ByteStringer;
 import org.apache.phoenix.coprocessor.generated.PTableProtos;
 import org.apache.phoenix.query.QueryConstants;
@@ -42,36 +43,63 @@ public class PColumnImpl implements PColumn {
 private boolean isRowTimestamp;
 private boolean isDynamic;
 private byte[] columnQualifierBytes;
-
+private boolean derived;
+private long timestamp;
+
 public PColumnImpl() {
 }
 
-public PColumnImpl(PName name,
-   PName familyName,
-   PDataType dataType,
-   Integer maxLength,
-   Integer scale,
-   boolean nullable,
-   int position,
-   SortOrder sortOrder, Integer arrSize, byte[] 
viewConstant, boolean isViewReferenced, String expressionStr, boolean 
isRowTimestamp, boolean isDynamic, byte[] columnQualifierBytes) {
-init(name, familyName, dataType, maxLength, scale, nullable, position, 
sortOrder, arrSize, viewConstant, isViewReferenced, expressionStr, 
isRowTimestamp, isDynamic, columnQualifierBytes);
+public PColumnImpl(PColumn column, int position) {
+this(column, column.isDerived(), position);
 }
 
-public PColumnImpl(PColumn column, int position) {
+public PColumnImpl(PColumn column, byte[] viewConstant, boolean 
isViewReferenced) {
+this(column.getName(), column.getFamilyName(), column.getDataType(), 
column.getMaxLength(),
+column.getScale(), column.isNullable(), column.getPosition(), 
column.getSortOrder(), column.getArraySize(), viewConstant, isViewReferenced, 
column.getExpressionStr(), column.isRowTimestamp(), column.isDynamic(), 
column.getColumnQualifierBytes(),
+column.getTimestamp(), column.isDerived());
+}
+
+public PColumnImpl(PColumn column, boolean derivedColumn, int position) {
+this(column, derivedColumn, position, column.getViewConstant());
+}
+
+public PColumnImpl(PColumn column, boolean derivedColumn, int position, 
byte[] viewConstant) {
 this(column.getName(), column.getFamilyName(), column.getDataType(), 
column.getMaxLength(),
-column.getScale(), column.isNullable(), position, 
column.getSortOrder(), column.getArraySize(), column.getViewConstant(), 
column.isViewReferenced(), column.getExpressionStr(), column.isRowTimestamp(), 
column.isDynamic(), column.getColumnQualifierBytes());
+column.getScale(), column.isNullable(), position, 
column.getSortOrder(), column.getArraySize(), viewConstant, 
column.isViewReferenced(), column.getExpressionStr(), column.isRowTimestamp(), 
column.isDynamic(), column.getColumnQualifierBytes(),
+column.getTimestamp(), derivedColumn);
+}
+
+public PColumnImpl(PName name, PName familyName, PDataType dataType, 
Integer maxLength, Integer scale, boolean nullable,
+int position, SortOrder sortOrder, Integer arrSize, byte[] 
viewConstant, boolean isViewReferenced, String expressionStr, boolean 
isRowTimestamp, boolean isDynamic,
+byte[] columnQualifierBytes, long timestamp) {
+this(name, familyName, dataType, maxLength, scale, nullable, position, 
sortOrder, arrSize, viewConstant, isViewReferenced, expressionStr, 
isRowTimestamp, isDynamic, columnQualifierBytes, timestamp, false);
+}
+
+public PColumnImpl(PName name, PName familyName, PDataType dataType, 
Integer maxLength, Integer scale, boolean nullable,
+int position, SortOrder sortOrder, Integer arrSize, byte[] 
viewConstant, boolean isViewReferenced, String expressionStr, boolean 
isRowTimestamp, boolean isDynamic,
+byte[] columnQualifierBytes, long timestamp, boolean derived) {
+init(name, familyName, dataType, maxLength, scale, nullable, position, 
sortOrder, arrSize, viewConstant, isViewReferenced, expressionStr, 
isRowTimestamp, isDynamic, columnQualifierBytes, timestamp, derived);
+}
+
+private PColumnImpl(PName familyName, PName columnName, Long timestamp) {
+this.familyName = familyName;
+this.name = columnName;
+this.derived = true;
+if (timestamp!=null) {
+this.timestamp = timestamp;
+}
 }
 
-private void init(PName name,
-PName familyName,
-PDataType 

[09/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/3987c123/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
index ab3a4ab..e39d492 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
@@ -21,6 +21,8 @@ import static 
org.apache.phoenix.exception.SQLExceptionCode.CANNOT_MUTATE_TABLE;
 import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
@@ -33,37 +35,46 @@ import java.sql.ResultSetMetaData;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.Collection;
-import java.util.Properties;
+import java.util.List;
 
 import org.apache.commons.lang.ArrayUtils;
-import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.coprocessor.TephraTransactionalProcessor;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
 import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.ColumnNotFoundException;
+import org.apache.phoenix.schema.PColumn;
 import org.apache.phoenix.schema.PName;
 import org.apache.phoenix.schema.PNameFactory;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.schema.PTableType;
-import org.apache.phoenix.util.PropertiesUtil;
-import org.apache.phoenix.util.StringUtil;
-import org.apache.phoenix.util.TestUtil;
+import org.apache.phoenix.schema.TableNotFoundException;
+import org.apache.phoenix.util.IndexUtil;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
 import org.junit.Test;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
 import org.junit.runners.Parameterized.Parameters;
 
+import com.google.common.base.Function;
+import com.google.common.collect.Lists;
+
 @RunWith(Parameterized.class)
-public class AlterTableWithViewsIT extends ParallelStatsDisabledIT {
-
+public class AlterTableWithViewsIT extends SplitSystemCatalogIT {
+
 private final boolean isMultiTenant;
 private final boolean columnEncoded;
-
-private final String TENANT_SPECIFIC_URL1 = getUrl() + ';' + 
TENANT_ID_ATTRIB + "=tenant1";
-private final String TENANT_SPECIFIC_URL2 = getUrl() + ';' + 
TENANT_ID_ATTRIB + "=tenant2";
+private final String TENANT_SPECIFIC_URL1 = getUrl() + ';' + 
TENANT_ID_ATTRIB + "=" + TENANT1;
+private final String TENANT_SPECIFIC_URL2 = getUrl() + ';' + 
TENANT_ID_ATTRIB + "=" + TENANT2;
 
 public AlterTableWithViewsIT(boolean isMultiTenant, boolean columnEncoded) 
{
 this.isMultiTenant = isMultiTenant;
@@ -77,6 +88,14 @@ public class AlterTableWithViewsIT extends 
ParallelStatsDisabledIT {
 { true, false }, { true, true } });
 }
 
+// transform PColumn to String
+private Function<PColumn, String> function = new Function<PColumn, String>(){
+@Override
+public String apply(PColumn input) {
+return input.getName().getString();
+}
+};
+
 private String generateDDL(String format) {
 return generateDDL("", format);
 }
@@ -101,8 +120,9 @@ public class AlterTableWithViewsIT extends 
ParallelStatsDisabledIT {
 public void testAddNewColumnsToBaseTableWithViews() throws Exception {
 try (Connection conn = DriverManager.getConnection(getUrl());
 Connection viewConn = isMultiTenant ? 
DriverManager.getConnection(TENANT_SPECIFIC_URL1) : conn ) {   
-String tableName = generateUniqueName();
-String viewOfTable = tableName + "_VIEW";
+String tableName = SchemaUtil.getTableName(SCHEMA1, 
generateUniqueName());
+String viewOfTable = SchemaUtil.getTableName(SCHEMA2, 
generateUniqueName());
+
 String ddlFormat = "CREATE TABLE IF NOT EXISTS " + tableName + " ("
 + " %s ID char(1) NOT NULL,"
 + " COL1 integer NOT NULL,"
@@ -113,12 +133,13 @@ public class AlterTableWithViewsIT extends 
ParallelStatsDisabledIT {

[03/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/3987c123/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
index 8dd4a88..dab1048 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
@@ -29,9 +29,10 @@ import java.util.Collections;
 import java.util.List;
 
 import org.apache.hadoop.hbase.Cell;
-import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.CellComparator;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.compile.ColumnProjector;
 import org.apache.phoenix.compile.ExpressionProjector;
@@ -40,7 +41,12 @@ import org.apache.phoenix.compile.StatementContext;
 import org.apache.phoenix.coprocessor.MetaDataProtocol;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.exception.SQLExceptionInfo;
+import org.apache.phoenix.expression.Expression;
+import org.apache.phoenix.expression.KeyValueColumnExpression;
+import org.apache.phoenix.expression.LikeExpression;
+import org.apache.phoenix.expression.LiteralExpression;
 import org.apache.phoenix.expression.RowKeyColumnExpression;
+import org.apache.phoenix.expression.StringBasedLikeExpression;
 import org.apache.phoenix.expression.function.ExternalSqlTypeIdFunction;
 import org.apache.phoenix.expression.function.IndexStateNameFunction;
 import org.apache.phoenix.expression.function.SQLIndexTypeFunction;
@@ -48,25 +54,33 @@ import 
org.apache.phoenix.expression.function.SQLTableTypeFunction;
 import org.apache.phoenix.expression.function.SQLViewTypeFunction;
 import org.apache.phoenix.expression.function.SqlTypeNameFunction;
 import org.apache.phoenix.expression.function.TransactionProviderNameFunction;
-import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
-import org.apache.phoenix.iterate.DelegateResultIterator;
 import org.apache.phoenix.iterate.MaterializedResultIterator;
 import org.apache.phoenix.iterate.ResultIterator;
+import org.apache.phoenix.parse.LikeParseNode.LikeType;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.schema.MetaDataClient;
+import org.apache.phoenix.schema.PColumn;
+import org.apache.phoenix.schema.PColumnImpl;
 import org.apache.phoenix.schema.PDatum;
 import org.apache.phoenix.schema.PName;
+import org.apache.phoenix.schema.PNameFactory;
+import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTable.LinkType;
 import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.RowKeyValueAccessor;
 import org.apache.phoenix.schema.SortOrder;
-import org.apache.phoenix.schema.tuple.ResultTuple;
+import org.apache.phoenix.schema.tuple.MultiKeyValueTuple;
 import org.apache.phoenix.schema.tuple.SingleKeyValueTuple;
 import org.apache.phoenix.schema.tuple.Tuple;
+import org.apache.phoenix.schema.types.PBoolean;
 import org.apache.phoenix.schema.types.PDataType;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.schema.types.PSmallint;
+import org.apache.phoenix.schema.types.PVarbinary;
 import org.apache.phoenix.schema.types.PVarchar;
 import org.apache.phoenix.util.ByteUtil;
 import org.apache.phoenix.util.KeyValueUtil;
+import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.StringUtil;
 
@@ -336,6 +350,11 @@ public class PhoenixDatabaseMetaData implements 
DatabaseMetaData {
 public static final byte[] COLUMN_QUALIFIER_COUNTER_BYTES = 
Bytes.toBytes(COLUMN_QUALIFIER_COUNTER);
 public static final String USE_STATS_FOR_PARALLELIZATION = 
"USE_STATS_FOR_PARALLELIZATION";
 public static final byte[] USE_STATS_FOR_PARALLELIZATION_BYTES = 
Bytes.toBytes(USE_STATS_FOR_PARALLELIZATION);
+
+public static final String SYSTEM_CHILD_LINK_TABLE = "CHILD_LINK";
+public static final String SYSTEM_CHILD_LINK_NAME = 
SchemaUtil.getTableName(SYSTEM_CATALOG_SCHEMA, SYSTEM_CHILD_LINK_TABLE);
+public static final byte[] SYSTEM_CHILD_LINK_NAME_BYTES = 
Bytes.toBytes(SYSTEM_CHILD_LINK_NAME);
+public static final TableName SYSTEM_LINK_HBASE_TABLE_NAME = 
TableName.valueOf(SYSTEM_CHILD_LINK_NAME);
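+// composes to the HBase table name "SYSTEM.CHILD_LINK"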
 
 
 //SYSTEM:LOG
@@ -467,179 +486,352 @@ public class PhoenixDatabaseMetaData implements 
DatabaseMetaData {
 private static void appendConjunction(StringBuilder buf) {
 buf.append(buf.length() == 0 ? "" : " and ");
 }
-
+
+private static final PColumnImpl TENANT_ID_COLUMN = 

[01/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.3 4cab4c270 -> 93fdd5bad


http://git-wip-us.apache.org/repos/asf/phoenix/blob/93fdd5ba/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java 
b/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
index b127408..9d5583b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
@@ -82,12 +82,12 @@ import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.Get;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
-import org.apache.hadoop.hbase.client.HTableInterface;
 import org.apache.hadoop.hbase.client.Mutation;
 import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.regionserver.LocalIndexSplitter;
 import org.apache.hadoop.hbase.snapshot.SnapshotCreationException;
@@ -96,6 +96,9 @@ import org.apache.phoenix.coprocessor.MetaDataEndpointImpl;
 import org.apache.phoenix.coprocessor.MetaDataProtocol;
 import org.apache.phoenix.coprocessor.MetaDataProtocol.MetaDataMutationResult;
 import org.apache.phoenix.coprocessor.MetaDataProtocol.MutationCode;
+import org.apache.phoenix.coprocessor.TableInfo;
+import org.apache.phoenix.coprocessor.TableViewFinderResult;
+import org.apache.phoenix.coprocessor.ViewFinder;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
 import org.apache.phoenix.query.QueryConstants;
@@ -175,11 +178,6 @@ public class UpgradeUtil {
 private static final String DELETE_LINK = "DELETE FROM " + 
SYSTEM_CATALOG_SCHEMA + "." + SYSTEM_CATALOG_TABLE
 + " WHERE (" + TABLE_SCHEM + "=? OR (" + TABLE_SCHEM + " IS NULL 
AND ? IS NULL)) AND " + TABLE_NAME + "=? AND " + COLUMN_FAMILY + "=? AND " + 
LINK_TYPE + " = " + LinkType.PHYSICAL_TABLE.getSerializedValue();
 
-private static final String GET_VIEWS_QUERY = "SELECT " + TENANT_ID + "," 
+ TABLE_SCHEM + "," + TABLE_NAME
-+ " FROM " + SYSTEM_CATALOG_SCHEMA + "." + SYSTEM_CATALOG_TABLE + 
" WHERE " + COLUMN_FAMILY + " = ? AND "
-+ LINK_TYPE + " = " + LinkType.PHYSICAL_TABLE.getSerializedValue() 
+ " AND ( " + TABLE_TYPE + "=" + "'"
-+ PTableType.VIEW.getSerializedValue() + "' OR " + TABLE_TYPE + " 
IS NULL) ORDER BY "+TENANT_ID;
-
 private UpgradeUtil() {
 }
 
@@ -225,8 +223,8 @@ public class UpgradeUtil {
 scan.setRaw(true);
 scan.setMaxVersions();
 ResultScanner scanner = null;
-HTableInterface source = null;
-HTableInterface target = null;
+Table source = null;
+Table target = null;
 try {
 source = conn.getQueryServices().getTable(sourceName);
 target = conn.getQueryServices().getTable(targetName);
@@ -646,7 +644,7 @@ public class UpgradeUtil {
 logger.info("Upgrading SYSTEM.SEQUENCE table");
 
 byte[] seqTableKey = SchemaUtil.getTableKey(null, 
PhoenixDatabaseMetaData.SYSTEM_SEQUENCE_SCHEMA, 
PhoenixDatabaseMetaData.SYSTEM_SEQUENCE_TABLE);
-HTableInterface sysTable = 
conn.getQueryServices().getTable(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES);
+Table sysTable = 
conn.getQueryServices().getTable(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES);
 try {
 logger.info("Setting SALT_BUCKETS property of SYSTEM.SEQUENCE to " 
+ SaltingUtil.MAX_BUCKET_NUM);
 KeyValue saltKV = KeyValueUtil.newKeyValue(seqTableKey, 
@@ -699,7 +697,7 @@ public class UpgradeUtil {
 Scan scan = new Scan();
 scan.setRaw(true);
 scan.setMaxVersions();
-HTableInterface seqTable = 
conn.getQueryServices().getTable(PhoenixDatabaseMetaData.SYSTEM_SEQUENCE_NAME_BYTES);
+Table seqTable = 
conn.getQueryServices().getTable(PhoenixDatabaseMetaData.SYSTEM_SEQUENCE_NAME_BYTES);
 try {
 boolean committed = false;
 logger.info("Adding salt byte to all SYSTEM.SEQUENCE 
rows");
@@ -1149,6 +1147,78 @@ public class UpgradeUtil {
 }
 }
 
+/**
+ * Move child links from SYSTEM.CATALOG to SYSTEM.CHILD_LINK
+ * @param oldMetaConnection caller should take care of closing the passed 
connection appropriately
+ * @throws SQLException
+ */
+public static void moveChildLinks(PhoenixConnection oldMetaConnection) 
throws SQLException {
+PhoenixConnection metaConnection = 

[10/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and 
Rahul Gidwani)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/93fdd5ba
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/93fdd5ba
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/93fdd5ba

Branch: refs/heads/4.x-HBase-1.3
Commit: 93fdd5bad22cde313c9f34fe7448dca44377a27c
Parents: 4cab4c2
Author: Thomas D'Silva 
Authored: Sat Jul 14 11:34:47 2018 -0700
Committer: Thomas D'Silva 
Committed: Wed Jul 18 21:46:59 2018 -0700

--
 .../StatisticsCollectionRunTrackerIT.java   |2 +-
 .../AlterMultiTenantTableWithViewsIT.java   |  284 +-
 .../apache/phoenix/end2end/AlterTableIT.java|   45 +-
 .../phoenix/end2end/AlterTableWithViewsIT.java  |  545 ++--
 .../phoenix/end2end/AppendOnlySchemaIT.java |4 +-
 .../end2end/BaseTenantSpecificViewIndexIT.java  |   38 +-
 .../end2end/ExplainPlanWithStatsEnabledIT.java  |   69 +-
 .../MigrateSystemTablesToSystemNamespaceIT.java |   38 +-
 .../apache/phoenix/end2end/PhoenixDriverIT.java |   37 +-
 .../end2end/QueryDatabaseMetaDataIT.java|9 +-
 .../apache/phoenix/end2end/SaltedViewIT.java|   45 -
 .../phoenix/end2end/SplitSystemCatalogIT.java   |   80 +
 .../end2end/SplitSystemCatalogTests.java|   11 +
 .../StatsEnabledSplitSystemCatalogIT.java   |  244 ++
 .../SystemCatalogCreationOnConnectionIT.java|   34 +-
 .../apache/phoenix/end2end/SystemCatalogIT.java |   31 +-
 .../end2end/TenantSpecificTablesDDLIT.java  |   13 +-
 .../end2end/TenantSpecificViewIndexIT.java  |   68 +-
 .../org/apache/phoenix/end2end/UpgradeIT.java   |  319 +--
 .../java/org/apache/phoenix/end2end/ViewIT.java |  868 --
 .../phoenix/end2end/index/BaseIndexIT.java  |   43 +-
 .../index/ChildViewsUseParentViewIndexIT.java   |7 +-
 .../phoenix/end2end/index/DropColumnIT.java |  117 -
 .../phoenix/end2end/index/IndexMetadataIT.java  |4 +-
 .../phoenix/end2end/index/MutableIndexIT.java   |  842 +++---
 .../phoenix/end2end/index/ViewIndexIT.java  |   68 +-
 .../apache/phoenix/execute/PartialCommitIT.java |4 +-
 .../SystemCatalogWALEntryFilterIT.java  |   85 +-
 .../org/apache/phoenix/rpc/UpdateCacheIT.java   |9 +-
 .../ColumnNameTrackingExpressionCompiler.java   |   46 +
 .../phoenix/compile/CreateTableCompiler.java|2 +-
 .../apache/phoenix/compile/FromCompiler.java|   15 +-
 .../phoenix/compile/ListJarsQueryPlan.java  |2 +-
 .../apache/phoenix/compile/TraceQueryPlan.java  |2 +-
 .../apache/phoenix/compile/UnionCompiler.java   |2 +-
 .../apache/phoenix/compile/UpsertCompiler.java  |2 +-
 .../coprocessor/MetaDataEndpointImpl.java   | 2577 +-
 .../phoenix/coprocessor/MetaDataProtocol.java   |3 +-
 .../apache/phoenix/coprocessor/TableInfo.java   |   79 +
 .../coprocessor/TableViewFinderResult.java  |   48 +
 .../apache/phoenix/coprocessor/ViewFinder.java  |  144 +
 .../coprocessor/WhereConstantParser.java|  106 +
 .../coprocessor/generated/MetaDataProtos.java   |  626 -
 .../coprocessor/generated/PTableProtos.java |  323 ++-
 .../phoenix/expression/LikeExpression.java  |2 +-
 .../apache/phoenix/jdbc/PhoenixConnection.java  |8 +-
 .../phoenix/jdbc/PhoenixDatabaseMetaData.java   |  534 ++--
 .../apache/phoenix/jdbc/PhoenixStatement.java   |8 +-
 .../phoenix/parse/DropTableStatement.java   |8 +-
 .../apache/phoenix/parse/ParseNodeFactory.java  |2 +-
 .../phoenix/query/ConnectionQueryServices.java  |   17 +-
 .../query/ConnectionQueryServicesImpl.java  |   43 +-
 .../query/ConnectionlessQueryServicesImpl.java  |   13 +-
 .../query/DelegateConnectionQueryServices.java  |8 +-
 .../apache/phoenix/query/QueryConstants.java|   14 +-
 .../org/apache/phoenix/query/QueryServices.java |2 +
 .../phoenix/query/QueryServicesOptions.java |2 +
 .../SystemCatalogWALEntryFilter.java|   45 +-
 .../apache/phoenix/schema/DelegateColumn.java   |   15 +
 .../apache/phoenix/schema/MetaDataClient.java   |   57 +-
 .../phoenix/schema/MetaDataSplitPolicy.java |   26 +-
 .../java/org/apache/phoenix/schema/PColumn.java |   12 +
 .../org/apache/phoenix/schema/PColumnImpl.java  |  113 +-
 .../apache/phoenix/schema/PMetaDataImpl.java|3 +-
 .../java/org/apache/phoenix/schema/PTable.java  |   17 +-
 .../org/apache/phoenix/schema/PTableImpl.java   |  279 +-
 .../org/apache/phoenix/schema/PTableKey.java|4 +-
 .../schema/ParentTableNotFoundException.java|   30 +
 .../org/apache/phoenix/schema/SaltingUtil.java  |4 +-
 .../apache/phoenix/schema/TableProperty.java|   22 +-
 .../java/org/apache/phoenix/util/IndexUtil.java |   16 +-
 .../org/apache/phoenix/util/MetaDataUtil.java   |  171 +-
 .../org/apache/phoenix/util/PhoenixRuntime.java |1 -
 

[06/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/93fdd5ba/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
index e968e99..4433e12 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
@@ -35,8 +35,6 @@ import java.util.Collection;
 import java.util.List;
 import java.util.Properties;
 
-import jline.internal.Log;
-
 import org.apache.hadoop.hbase.HRegionInfo;
 import org.apache.hadoop.hbase.MetaTableAccessor;
 import org.apache.hadoop.hbase.TableName;
@@ -75,25 +73,27 @@ import org.junit.runners.Parameterized.Parameters;
 
 import com.google.common.primitives.Doubles;
 
+import jline.internal.Log;
+
 @RunWith(Parameterized.class)
 public class MutableIndexIT extends ParallelStatsDisabledIT {
 
 protected final boolean localIndex;
 private final String tableDDLOptions;
-   
+
 public MutableIndexIT(Boolean localIndex, String txProvider, Boolean 
columnEncoded) {
-   this.localIndex = localIndex;
-   StringBuilder optionBuilder = new StringBuilder();
-   if (txProvider != null) {
-   optionBuilder.append("TRANSACTIONAL=true," + 
PhoenixDatabaseMetaData.TRANSACTION_PROVIDER + "='" + txProvider + "'");
-   }
-   if (!columnEncoded) {
+this.localIndex = localIndex;
+StringBuilder optionBuilder = new StringBuilder();
+if (txProvider != null) {
+optionBuilder.append("TRANSACTIONAL=true," + 
PhoenixDatabaseMetaData.TRANSACTION_PROVIDER + "='" + txProvider + "'");
+}
+if (!columnEncoded) {
 if (optionBuilder.length()!=0)
 optionBuilder.append(",");
 optionBuilder.append("COLUMN_ENCODED_BYTES=0");
 }
-   this.tableDDLOptions = optionBuilder.toString();
-   }
+this.tableDDLOptions = optionBuilder.toString();
+}
 
 private static Connection getConnection(Properties props) throws 
SQLException {
 
props.setProperty(QueryServices.INDEX_MUTATE_BATCH_SIZE_THRESHOLD_ATTRIB, 
Integer.toString(1));
@@ -106,7 +106,7 @@ public class MutableIndexIT extends ParallelStatsDisabledIT 
{
 return getConnection(props);
 }
 
-   
@Parameters(name="MutableIndexIT_localIndex={0},transactional={1},columnEncoded={2}")
 // name is used by failsafe as file name in reports
+
@Parameters(name="MutableIndexIT_localIndex={0},transactional={1},columnEncoded={2}")
 // name is used by failsafe as file name in reports
 public static Collection<Object[]> data() {
 return Arrays.asList(new Object[][] { 
 { false, null, false }, { false, null, true },
@@ -121,16 +121,16 @@ public class MutableIndexIT extends 
ParallelStatsDisabledIT {
 @Test
 public void testCoveredColumnUpdates() throws Exception {
 try (Connection conn = getConnection()) {
-   conn.setAutoCommit(false);
-   String tableName = "TBL_" + generateUniqueName();
-   String indexName = "IDX_" + generateUniqueName();
-   String fullTableName = 
SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, tableName);
-   String fullIndexName = 
SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, indexName);
+conn.setAutoCommit(false);
+String tableName = "TBL_" + generateUniqueName();
+String indexName = "IDX_" + generateUniqueName();
+String fullTableName = 
SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, tableName);
+String fullIndexName = 
SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, indexName);
 
-   TestUtil.createMultiCFTestTable(conn, fullTableName, 
tableDDLOptions);
+TestUtil.createMultiCFTestTable(conn, fullTableName, 
tableDDLOptions);
 populateMultiCFTestTable(fullTableName);
 conn.createStatement().execute("CREATE " + (localIndex ? " LOCAL " 
: "") + " INDEX " + indexName + " ON " + fullTableName 
-   + " (char_col1 ASC, int_col1 ASC) INCLUDE (long_col1, 
long_col2)");
++ " (char_col1 ASC, int_col1 ASC) INCLUDE (long_col1, 
long_col2)");
 
 String query = "SELECT char_col1, int_col1, long_col2 from " + 
fullTableName;
 ResultSet rs = conn.createStatement().executeQuery("EXPLAIN " + 
query);
@@ -203,7 +203,7 @@ public class MutableIndexIT extends ParallelStatsDisabledIT 
{
 query = "SELECT b.* from " + fullTableName + " where int_col1 
= 4";
 rs = 

[09/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/93fdd5ba/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
index ab3a4ab..e39d492 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
@@ -21,6 +21,8 @@ import static 
org.apache.phoenix.exception.SQLExceptionCode.CANNOT_MUTATE_TABLE;
 import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
@@ -33,37 +35,46 @@ import java.sql.ResultSetMetaData;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.Collection;
-import java.util.Properties;
+import java.util.List;
 
 import org.apache.commons.lang.ArrayUtils;
-import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.coprocessor.TephraTransactionalProcessor;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
 import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.ColumnNotFoundException;
+import org.apache.phoenix.schema.PColumn;
 import org.apache.phoenix.schema.PName;
 import org.apache.phoenix.schema.PNameFactory;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.schema.PTableType;
-import org.apache.phoenix.util.PropertiesUtil;
-import org.apache.phoenix.util.StringUtil;
-import org.apache.phoenix.util.TestUtil;
+import org.apache.phoenix.schema.TableNotFoundException;
+import org.apache.phoenix.util.IndexUtil;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
 import org.junit.Test;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
 import org.junit.runners.Parameterized.Parameters;
 
+import com.google.common.base.Function;
+import com.google.common.collect.Lists;
+
 @RunWith(Parameterized.class)
-public class AlterTableWithViewsIT extends ParallelStatsDisabledIT {
-
+public class AlterTableWithViewsIT extends SplitSystemCatalogIT {
+
 private final boolean isMultiTenant;
 private final boolean columnEncoded;
-
-private final String TENANT_SPECIFIC_URL1 = getUrl() + ';' + 
TENANT_ID_ATTRIB + "=tenant1";
-private final String TENANT_SPECIFIC_URL2 = getUrl() + ';' + 
TENANT_ID_ATTRIB + "=tenant2";
+private final String TENANT_SPECIFIC_URL1 = getUrl() + ';' + 
TENANT_ID_ATTRIB + "=" + TENANT1;
+private final String TENANT_SPECIFIC_URL2 = getUrl() + ';' + 
TENANT_ID_ATTRIB + "=" + TENANT2;
 
 public AlterTableWithViewsIT(boolean isMultiTenant, boolean columnEncoded) 
{
 this.isMultiTenant = isMultiTenant;
@@ -77,6 +88,14 @@ public class AlterTableWithViewsIT extends 
ParallelStatsDisabledIT {
 { true, false }, { true, true } });
 }
 
+// transform PColumn to String
+private Function<PColumn, String> function = new Function<PColumn, String>(){
+@Override
+public String apply(PColumn input) {
+return input.getName().getString();
+}
+};
+
 private String generateDDL(String format) {
 return generateDDL("", format);
 }
@@ -101,8 +120,9 @@ public class AlterTableWithViewsIT extends 
ParallelStatsDisabledIT {
 public void testAddNewColumnsToBaseTableWithViews() throws Exception {
 try (Connection conn = DriverManager.getConnection(getUrl());
 Connection viewConn = isMultiTenant ? 
DriverManager.getConnection(TENANT_SPECIFIC_URL1) : conn ) {   
-String tableName = generateUniqueName();
-String viewOfTable = tableName + "_VIEW";
+String tableName = SchemaUtil.getTableName(SCHEMA1, 
generateUniqueName());
+String viewOfTable = SchemaUtil.getTableName(SCHEMA2, 
generateUniqueName());
+
 String ddlFormat = "CREATE TABLE IF NOT EXISTS " + tableName + " ("
 + " %s ID char(1) NOT NULL,"
 + " COL1 integer NOT NULL,"
@@ -113,12 +133,13 @@ public class AlterTableWithViewsIT extends 
ParallelStatsDisabledIT {

[03/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/93fdd5ba/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
index 8dd4a88..dab1048 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
@@ -29,9 +29,10 @@ import java.util.Collections;
 import java.util.List;
 
 import org.apache.hadoop.hbase.Cell;
-import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.CellComparator;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.compile.ColumnProjector;
 import org.apache.phoenix.compile.ExpressionProjector;
@@ -40,7 +41,12 @@ import org.apache.phoenix.compile.StatementContext;
 import org.apache.phoenix.coprocessor.MetaDataProtocol;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.exception.SQLExceptionInfo;
+import org.apache.phoenix.expression.Expression;
+import org.apache.phoenix.expression.KeyValueColumnExpression;
+import org.apache.phoenix.expression.LikeExpression;
+import org.apache.phoenix.expression.LiteralExpression;
 import org.apache.phoenix.expression.RowKeyColumnExpression;
+import org.apache.phoenix.expression.StringBasedLikeExpression;
 import org.apache.phoenix.expression.function.ExternalSqlTypeIdFunction;
 import org.apache.phoenix.expression.function.IndexStateNameFunction;
 import org.apache.phoenix.expression.function.SQLIndexTypeFunction;
@@ -48,25 +54,33 @@ import 
org.apache.phoenix.expression.function.SQLTableTypeFunction;
 import org.apache.phoenix.expression.function.SQLViewTypeFunction;
 import org.apache.phoenix.expression.function.SqlTypeNameFunction;
 import org.apache.phoenix.expression.function.TransactionProviderNameFunction;
-import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
-import org.apache.phoenix.iterate.DelegateResultIterator;
 import org.apache.phoenix.iterate.MaterializedResultIterator;
 import org.apache.phoenix.iterate.ResultIterator;
+import org.apache.phoenix.parse.LikeParseNode.LikeType;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.schema.MetaDataClient;
+import org.apache.phoenix.schema.PColumn;
+import org.apache.phoenix.schema.PColumnImpl;
 import org.apache.phoenix.schema.PDatum;
 import org.apache.phoenix.schema.PName;
+import org.apache.phoenix.schema.PNameFactory;
+import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTable.LinkType;
 import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.RowKeyValueAccessor;
 import org.apache.phoenix.schema.SortOrder;
-import org.apache.phoenix.schema.tuple.ResultTuple;
+import org.apache.phoenix.schema.tuple.MultiKeyValueTuple;
 import org.apache.phoenix.schema.tuple.SingleKeyValueTuple;
 import org.apache.phoenix.schema.tuple.Tuple;
+import org.apache.phoenix.schema.types.PBoolean;
 import org.apache.phoenix.schema.types.PDataType;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.schema.types.PSmallint;
+import org.apache.phoenix.schema.types.PVarbinary;
 import org.apache.phoenix.schema.types.PVarchar;
 import org.apache.phoenix.util.ByteUtil;
 import org.apache.phoenix.util.KeyValueUtil;
+import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.StringUtil;
 
@@ -336,6 +350,11 @@ public class PhoenixDatabaseMetaData implements 
DatabaseMetaData {
 public static final byte[] COLUMN_QUALIFIER_COUNTER_BYTES = 
Bytes.toBytes(COLUMN_QUALIFIER_COUNTER);
 public static final String USE_STATS_FOR_PARALLELIZATION = 
"USE_STATS_FOR_PARALLELIZATION";
 public static final byte[] USE_STATS_FOR_PARALLELIZATION_BYTES = 
Bytes.toBytes(USE_STATS_FOR_PARALLELIZATION);
+
+public static final String SYSTEM_CHILD_LINK_TABLE = "CHILD_LINK";
+public static final String SYSTEM_CHILD_LINK_NAME = 
SchemaUtil.getTableName(SYSTEM_CATALOG_SCHEMA, SYSTEM_CHILD_LINK_TABLE);
+public static final byte[] SYSTEM_CHILD_LINK_NAME_BYTES = 
Bytes.toBytes(SYSTEM_CHILD_LINK_NAME);
+public static final TableName SYSTEM_LINK_HBASE_TABLE_NAME = 
TableName.valueOf(SYSTEM_CHILD_LINK_NAME);
 
 
 //SYSTEM:LOG
@@ -467,179 +486,352 @@ public class PhoenixDatabaseMetaData implements 
DatabaseMetaData {
 private static void appendConjunction(StringBuilder buf) {
 buf.append(buf.length() == 0 ? "" : " and ");
 }
-
+
+private static final PColumnImpl TENANT_ID_COLUMN = 

[02/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/93fdd5ba/phoenix-core/src/main/java/org/apache/phoenix/schema/PColumnImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/PColumnImpl.java 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/PColumnImpl.java
index 45aca98..a267629 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/PColumnImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/PColumnImpl.java
@@ -17,6 +17,7 @@
  */
 package org.apache.phoenix.schema;
 
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.util.ByteStringer;
 import org.apache.phoenix.coprocessor.generated.PTableProtos;
 import org.apache.phoenix.query.QueryConstants;
@@ -42,36 +43,63 @@ public class PColumnImpl implements PColumn {
 private boolean isRowTimestamp;
 private boolean isDynamic;
 private byte[] columnQualifierBytes;
-
+private boolean derived;
+private long timestamp;
+
 public PColumnImpl() {
 }
 
-public PColumnImpl(PName name,
-   PName familyName,
-   PDataType dataType,
-   Integer maxLength,
-   Integer scale,
-   boolean nullable,
-   int position,
-   SortOrder sortOrder, Integer arrSize, byte[] 
viewConstant, boolean isViewReferenced, String expressionStr, boolean 
isRowTimestamp, boolean isDynamic, byte[] columnQualifierBytes) {
-init(name, familyName, dataType, maxLength, scale, nullable, position, 
sortOrder, arrSize, viewConstant, isViewReferenced, expressionStr, 
isRowTimestamp, isDynamic, columnQualifierBytes);
+public PColumnImpl(PColumn column, int position) {
+this(column, column.isDerived(), position);
 }
 
-public PColumnImpl(PColumn column, int position) {
+public PColumnImpl(PColumn column, byte[] viewConstant, boolean 
isViewReferenced) {
+this(column.getName(), column.getFamilyName(), column.getDataType(), 
column.getMaxLength(),
+column.getScale(), column.isNullable(), column.getPosition(), 
column.getSortOrder(), column.getArraySize(), viewConstant, isViewReferenced, 
column.getExpressionStr(), column.isRowTimestamp(), column.isDynamic(), 
column.getColumnQualifierBytes(),
+column.getTimestamp(), column.isDerived());
+}
+
+public PColumnImpl(PColumn column, boolean derivedColumn, int position) {
+this(column, derivedColumn, position, column.getViewConstant());
+}
+
+public PColumnImpl(PColumn column, boolean derivedColumn, int position, 
byte[] viewConstant) {
 this(column.getName(), column.getFamilyName(), column.getDataType(), 
column.getMaxLength(),
-column.getScale(), column.isNullable(), position, 
column.getSortOrder(), column.getArraySize(), column.getViewConstant(), 
column.isViewReferenced(), column.getExpressionStr(), column.isRowTimestamp(), 
column.isDynamic(), column.getColumnQualifierBytes());
+column.getScale(), column.isNullable(), position, 
column.getSortOrder(), column.getArraySize(), viewConstant, 
column.isViewReferenced(), column.getExpressionStr(), column.isRowTimestamp(), 
column.isDynamic(), column.getColumnQualifierBytes(),
+column.getTimestamp(), derivedColumn);
+}
+
+public PColumnImpl(PName name, PName familyName, PDataType dataType, 
Integer maxLength, Integer scale, boolean nullable,
+int position, SortOrder sortOrder, Integer arrSize, byte[] 
viewConstant, boolean isViewReferenced, String expressionStr, boolean 
isRowTimestamp, boolean isDynamic,
+byte[] columnQualifierBytes, long timestamp) {
+this(name, familyName, dataType, maxLength, scale, nullable, position, 
sortOrder, arrSize, viewConstant, isViewReferenced, expressionStr, 
isRowTimestamp, isDynamic, columnQualifierBytes, timestamp, false);
+}
+
+public PColumnImpl(PName name, PName familyName, PDataType dataType, 
Integer maxLength, Integer scale, boolean nullable,
+int position, SortOrder sortOrder, Integer arrSize, byte[] 
viewConstant, boolean isViewReferenced, String expressionStr, boolean 
isRowTimestamp, boolean isDynamic,
+byte[] columnQualifierBytes, long timestamp, boolean derived) {
+init(name, familyName, dataType, maxLength, scale, nullable, position, 
sortOrder, arrSize, viewConstant, isViewReferenced, expressionStr, 
isRowTimestamp, isDynamic, columnQualifierBytes, timestamp, derived);
+}
+
+private PColumnImpl(PName familyName, PName columnName, Long timestamp) {
+this.familyName = familyName;
+this.name = columnName;
+this.derived = true;
+if (timestamp!=null) {
+this.timestamp = timestamp;
+}
 }
 
-private void init(PName name,
-PName familyName,
-PDataType 

[04/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/93fdd5ba/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
index 883f96d..29cf2a3 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
@@ -91,8 +91,9 @@ public abstract class MetaDataProtocol extends 
MetaDataService {
 public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_12_0 = 
MIN_SYSTEM_TABLE_TIMESTAMP_4_11_0;
 public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_13_0 = 
MIN_SYSTEM_TABLE_TIMESTAMP_4_11_0;
 public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_14_0 = 
MIN_TABLE_TIMESTAMP + 28;
+public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_15_0 = 
MIN_TABLE_TIMESTAMP + 29;
 // MIN_SYSTEM_TABLE_TIMESTAMP needs to be set to the max of all the 
MIN_SYSTEM_TABLE_TIMESTAMP_* constants
-public static final long MIN_SYSTEM_TABLE_TIMESTAMP = 
MIN_SYSTEM_TABLE_TIMESTAMP_4_14_0;
+public static final long MIN_SYSTEM_TABLE_TIMESTAMP = 
MIN_SYSTEM_TABLE_TIMESTAMP_4_15_0;
 // Version below which we should disallow usage of mutable secondary 
indexing.
 public static final int MUTABLE_SI_VERSION_THRESHOLD = 
VersionUtil.encodeVersion("0", "94", "10");
 public static final int MAX_LOCAL_SI_VERSION_DISALLOW = 
VersionUtil.encodeVersion("0", "98", "8");

http://git-wip-us.apache.org/repos/asf/phoenix/blob/93fdd5ba/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TableInfo.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TableInfo.java 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TableInfo.java
new file mode 100644
index 000..b1c5f65
--- /dev/null
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TableInfo.java
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.coprocessor;
+
+import java.util.Arrays;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.util.SchemaUtil;
+
+public class TableInfo {
+
+private final byte[] tenantId;
+private final byte[] schema;
+private final byte[] name;
+
+public TableInfo(byte[] tenantId, byte[] schema, byte[] name) {
+this.tenantId = tenantId;
+this.schema = schema;
+this.name = name;
+}
+
+public byte[] getRowKeyPrefix() {
+return SchemaUtil.getTableKey(tenantId, schema, name);
+}
+
+@Override
+public String toString() {
+return Bytes.toStringBinary(getRowKeyPrefix());
+}
+
+public byte[] getTenantId() {
+return tenantId;
+}
+
+public byte[] getSchemaName() {
+return schema;
+}
+
+public byte[] getTableName() {
+return name;
+}
+
+@Override
+public int hashCode() {
+final int prime = 31;
+int result = 1;
+result = prime * result + Arrays.hashCode(name);
+result = prime * result + Arrays.hashCode(schema);
+result = prime * result + Arrays.hashCode(tenantId);
+return result;
+}
+
+@Override
+public boolean equals(Object obj) {
+if (this == obj) return true;
+if (obj == null) return false;
+if (getClass() != obj.getClass()) return false;
+TableInfo other = (TableInfo) obj;
+if (!Arrays.equals(name, other.name)) return false;
+if (!Arrays.equals(schema, other.schema)) return false;
+if (!Arrays.equals(tenantId, other.tenantId)) return false;
+return true;
+}
+}

http://git-wip-us.apache.org/repos/asf/phoenix/blob/93fdd5ba/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TableViewFinderResult.java
--
diff --git 

[08/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/93fdd5ba/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogIT.java
new file mode 100644
index 000..51d3b86
--- /dev/null
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogIT.java
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.curator.shaded.com.google.common.collect.Lists;
+import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.BeforeClass;
+import org.junit.experimental.categories.Category;
+
+import com.google.common.collect.Maps;
+
+/**
+ * Base class for tests that run with split SYSTEM.CATALOG.
+ * 
+ */
+@Category(SplitSystemCatalogTests.class)
+public class SplitSystemCatalogIT extends BaseTest {
+
+protected static String SCHEMA1 = "SCHEMA1";
+protected static String SCHEMA2 = "SCHEMA2";
+protected static String SCHEMA3 = "SCHEMA3";
+protected static String SCHEMA4 = "SCHEMA4";
+
+protected static String TENANT1 = "tenant1";
+protected static String TENANT2 = "tenant2";
+
+@BeforeClass
+public static void doSetup() throws Exception {
+NUM_SLAVES_BASE = 6;
+Map props = Collections.emptyMap();
+boolean splitSystemCatalog = (driver == null);
+setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
+// Split SYSTEM.CATALOG once after the mini-cluster is started
+if (splitSystemCatalog) {
+splitSystemCatalog();
+}
+}
+
+protected static void splitSystemCatalog() throws SQLException, Exception {
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+}
+String tableName = "TABLE";
+String fullTableName1 = SchemaUtil.getTableName(SCHEMA1, tableName);
+String fullTableName2 = SchemaUtil.getTableName(SCHEMA2, tableName);
+String fullTableName3 = SchemaUtil.getTableName(SCHEMA3, tableName);
+String fullTableName4 = SchemaUtil.getTableName(SCHEMA4, tableName);
+ArrayList tableList = Lists.newArrayList(fullTableName1, 
fullTableName2, fullTableName3);
+Map> tenantToTableMap = Maps.newHashMap();
+tenantToTableMap.put(null, tableList);
+tenantToTableMap.put(TENANT1, Lists.newArrayList(fullTableName2, 
fullTableName3));
+tenantToTableMap.put(TENANT2, Lists.newArrayList(fullTableName4));
+splitSystemCatalog(tenantToTableMap);
+}
+
+}

http://git-wip-us.apache.org/repos/asf/phoenix/blob/93fdd5ba/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogTests.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogTests.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogTests.java
new file mode 100644
index 000..27fc5c6
--- /dev/null
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogTests.java
@@ -0,0 +1,11 @@
+package org.apache.phoenix.end2end;
+
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+
+@Retention(RetentionPolicy.RUNTIME)
+@Target(ElementType.TYPE)
+public @interface SplitSystemCatalogTests {
+}

http://git-wip-us.apache.org/repos/asf/phoenix/blob/93fdd5ba/phoenix-core/src/it/java/org/apache/phoenix/end2end/StatsEnabledSplitSystemCatalogIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/StatsEnabledSplitSystemCatalogIT.java
 

[07/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/93fdd5ba/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
index 34292ba..fdfd75b 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
@@ -28,172 +28,119 @@ import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
+import java.io.IOException;
+import java.math.BigDecimal;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
 import java.util.List;
+import java.util.Map;
 import java.util.Properties;
 
+import org.apache.curator.shaded.com.google.common.collect.Lists;
+import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.HColumnDescriptor;
 import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.Mutation;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.coprocessor.ObserverContext;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
+import org.apache.hadoop.hbase.coprocessor.SimpleRegionObserver;
+import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
 import org.apache.phoenix.compile.QueryPlan;
+import org.apache.phoenix.exception.PhoenixIOException;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
+import org.apache.phoenix.jdbc.PhoenixStatement;
 import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.ColumnAlreadyExistsException;
+import org.apache.phoenix.schema.PColumn;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.ReadOnlyTableException;
 import org.apache.phoenix.schema.TableNotFoundException;
+import org.apache.phoenix.util.MetaDataUtil;
 import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
 import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.TestUtil;
+import org.junit.BeforeClass;
 import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameters;
 
+import com.google.common.base.Predicate;
+import com.google.common.collect.Collections2;
+import com.google.common.collect.Maps;
 
+@RunWith(Parameterized.class)
+public class ViewIT extends SplitSystemCatalogIT {
 
-public class ViewIT extends BaseViewIT {
-   
-public ViewIT(boolean transactional) {
-   super(transactional);
-   }
-
-@Test
-public void testReadOnlyOnReadOnlyView() throws Exception {
-Connection earlierCon = DriverManager.getConnection(getUrl());
-Connection conn = DriverManager.getConnection(getUrl());
-String ddl = "CREATE TABLE " + fullTableName + " (k INTEGER NOT NULL 
PRIMARY KEY, v1 DATE) "+ tableDDLOptions;
-conn.createStatement().execute(ddl);
-String fullParentViewName = "V_" + generateUniqueName();
-ddl = "CREATE VIEW " + fullParentViewName + " (v2 VARCHAR) AS SELECT * 
FROM " + fullTableName + " WHERE k > 5";
-conn.createStatement().execute(ddl);
-try {
-conn.createStatement().execute("UPSERT INTO " + fullParentViewName 
+ " VALUES(1)");
-fail();
-} catch (ReadOnlyTableException e) {
-
-}
-for (int i = 0; i < 10; i++) {
-conn.createStatement().execute("UPSERT INTO " + fullTableName + " 
VALUES(" + i + ")");
-}
-conn.commit();
-
-analyzeTable(conn, fullParentViewName, transactional);
-
-List splits = getAllSplits(conn, fullParentViewName);
-assertEquals(4, splits.size());
-
-int count = 0;
-ResultSet rs = conn.createStatement().executeQuery("SELECT k FROM " + 
fullTableName);
-while (rs.next()) {
-assertEquals(count++, rs.getInt(1));
-}
-assertEquals(10, count);
-
-count = 0;
-rs = conn.createStatement().executeQuery("SELECT k FROM " + 
fullParentViewName);
-while (rs.next()) {
-

[05/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/93fdd5ba/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index ae2fa66..5e8a5dc 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -17,8 +17,6 @@
  */
 package org.apache.phoenix.coprocessor;
 
-import static com.google.common.base.Preconditions.checkArgument;
-import static com.google.common.base.Preconditions.checkState;
 import static org.apache.hadoop.hbase.KeyValueUtil.createFirstOnRow;
 import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.APPEND_ONLY_SCHEMA_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.ARRAY_SIZE_BYTES;
@@ -55,7 +53,6 @@ import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.MULTI_TENANT_BYTES
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.NULLABLE_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.NUM_ARGS_BYTES;
 import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.ORDINAL_POSITION_BYTES;
-import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.PARENT_TENANT_ID_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.PK_NAME_BYTES;
 import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.RETURN_TYPE_BYTES;
 import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SALT_BUCKETS_BYTES;
@@ -78,9 +75,8 @@ import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.VIEW_INDEX_ID_BYTE
 import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.VIEW_STATEMENT_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.VIEW_TYPE_BYTES;
 import static 
org.apache.phoenix.query.QueryConstants.DIVERGED_VIEW_BASE_COLUMN_COUNT;
-import static org.apache.phoenix.query.QueryConstants.SEPARATOR_BYTE_ARRAY;
 import static org.apache.phoenix.schema.PTableType.INDEX;
-import static org.apache.phoenix.util.ByteUtil.EMPTY_BYTE_ARRAY;
+import static org.apache.phoenix.schema.PTableType.TABLE;
 import static org.apache.phoenix.util.SchemaUtil.getVarCharLength;
 import static org.apache.phoenix.util.SchemaUtil.getVarChars;
 
@@ -91,14 +87,16 @@ import java.sql.SQLException;
 import java.sql.Statement;
 import java.util.ArrayList;
 import java.util.Arrays;
-import java.util.Collection;
 import java.util.Collections;
 import java.util.Comparator;
 import java.util.HashSet;
 import java.util.Iterator;
 import java.util.List;
+import java.util.ListIterator;
 import java.util.Map;
+import java.util.Map.Entry;
 import java.util.NavigableMap;
+import java.util.Properties;
 import java.util.Set;
 
 import org.apache.hadoop.conf.Configuration;
@@ -108,26 +106,21 @@ import org.apache.hadoop.hbase.Coprocessor;
 import org.apache.hadoop.hbase.CoprocessorEnvironment;
 import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
 import org.apache.hadoop.hbase.KeyValue;
 import org.apache.hadoop.hbase.KeyValue.Type;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.Get;
-import org.apache.hadoop.hbase.client.HTableInterface;
 import org.apache.hadoop.hbase.client.Mutation;
 import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
 import org.apache.hadoop.hbase.coprocessor.CoprocessorException;
 import org.apache.hadoop.hbase.coprocessor.CoprocessorService;
 import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
-import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
-import org.apache.hadoop.hbase.filter.Filter;
-import org.apache.hadoop.hbase.filter.FilterList;
 import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
-import org.apache.hadoop.hbase.filter.PageFilter;
-import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.ipc.RpcServer.Call;
 import org.apache.hadoop.hbase.ipc.RpcUtil;
@@ -140,6 +133,7 @@ import org.apache.hadoop.hbase.util.Pair;
 import org.apache.hadoop.hbase.util.VersionInfo;
 import org.apache.phoenix.cache.GlobalCache;
 import org.apache.phoenix.cache.GlobalCache.FunctionBytesPtr;
+import org.apache.phoenix.compile.ColumnNameTrackingExpressionCompiler;
 import org.apache.phoenix.compile.ColumnResolver;
 import org.apache.phoenix.compile.FromCompiler;
 import org.apache.phoenix.compile.QueryPlan;
@@ -183,6 +177,7 @@ import 

[04/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/4d6dbf9c/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
index 883f96d..29cf2a3 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataProtocol.java
@@ -91,8 +91,9 @@ public abstract class MetaDataProtocol extends 
MetaDataService {
 public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_12_0 = 
MIN_SYSTEM_TABLE_TIMESTAMP_4_11_0;
 public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_13_0 = 
MIN_SYSTEM_TABLE_TIMESTAMP_4_11_0;
 public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_14_0 = 
MIN_TABLE_TIMESTAMP + 28;
+public static final long MIN_SYSTEM_TABLE_TIMESTAMP_4_15_0 = 
MIN_TABLE_TIMESTAMP + 29;
 // MIN_SYSTEM_TABLE_TIMESTAMP needs to be set to the max of all the 
MIN_SYSTEM_TABLE_TIMESTAMP_* constants
-public static final long MIN_SYSTEM_TABLE_TIMESTAMP = 
MIN_SYSTEM_TABLE_TIMESTAMP_4_14_0;
+public static final long MIN_SYSTEM_TABLE_TIMESTAMP = 
MIN_SYSTEM_TABLE_TIMESTAMP_4_15_0;
 // Version below which we should disallow usage of mutable secondary 
indexing.
 public static final int MUTABLE_SI_VERSION_THRESHOLD = 
VersionUtil.encodeVersion("0", "94", "10");
 public static final int MAX_LOCAL_SI_VERSION_DISALLOW = 
VersionUtil.encodeVersion("0", "98", "8");

http://git-wip-us.apache.org/repos/asf/phoenix/blob/4d6dbf9c/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TableInfo.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TableInfo.java 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TableInfo.java
new file mode 100644
index 000..b1c5f65
--- /dev/null
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TableInfo.java
@@ -0,0 +1,79 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.coprocessor;
+
+import java.util.Arrays;
+
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.util.SchemaUtil;
+
+public class TableInfo {
+
+private final byte[] tenantId;
+private final byte[] schema;
+private final byte[] name;
+
+public TableInfo(byte[] tenantId, byte[] schema, byte[] name) {
+this.tenantId = tenantId;
+this.schema = schema;
+this.name = name;
+}
+
+public byte[] getRowKeyPrefix() {
+return SchemaUtil.getTableKey(tenantId, schema, name);
+}
+
+@Override
+public String toString() {
+return Bytes.toStringBinary(getRowKeyPrefix());
+}
+
+public byte[] getTenantId() {
+return tenantId;
+}
+
+public byte[] getSchemaName() {
+return schema;
+}
+
+public byte[] getTableName() {
+return name;
+}
+
+@Override
+public int hashCode() {
+final int prime = 31;
+int result = 1;
+result = prime * result + Arrays.hashCode(name);
+result = prime * result + Arrays.hashCode(schema);
+result = prime * result + Arrays.hashCode(tenantId);
+return result;
+}
+
+@Override
+public boolean equals(Object obj) {
+if (this == obj) return true;
+if (obj == null) return false;
+if (getClass() != obj.getClass()) return false;
+TableInfo other = (TableInfo) obj;
+if (!Arrays.equals(name, other.name)) return false;
+if (!Arrays.equals(schema, other.schema)) return false;
+if (!Arrays.equals(tenantId, other.tenantId)) return false;
+return true;
+}
+}

http://git-wip-us.apache.org/repos/asf/phoenix/blob/4d6dbf9c/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/TableViewFinderResult.java
--
diff --git 

[05/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/4d6dbf9c/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index ae2fa66..5e8a5dc 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -17,8 +17,6 @@
  */
 package org.apache.phoenix.coprocessor;
 
-import static com.google.common.base.Preconditions.checkArgument;
-import static com.google.common.base.Preconditions.checkState;
 import static org.apache.hadoop.hbase.KeyValueUtil.createFirstOnRow;
 import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.APPEND_ONLY_SCHEMA_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.ARRAY_SIZE_BYTES;
@@ -55,7 +53,6 @@ import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.MULTI_TENANT_BYTES
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.NULLABLE_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.NUM_ARGS_BYTES;
 import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.ORDINAL_POSITION_BYTES;
-import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.PARENT_TENANT_ID_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.PK_NAME_BYTES;
 import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.RETURN_TYPE_BYTES;
 import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SALT_BUCKETS_BYTES;
@@ -78,9 +75,8 @@ import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.VIEW_INDEX_ID_BYTE
 import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.VIEW_STATEMENT_BYTES;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.VIEW_TYPE_BYTES;
 import static 
org.apache.phoenix.query.QueryConstants.DIVERGED_VIEW_BASE_COLUMN_COUNT;
-import static org.apache.phoenix.query.QueryConstants.SEPARATOR_BYTE_ARRAY;
 import static org.apache.phoenix.schema.PTableType.INDEX;
-import static org.apache.phoenix.util.ByteUtil.EMPTY_BYTE_ARRAY;
+import static org.apache.phoenix.schema.PTableType.TABLE;
 import static org.apache.phoenix.util.SchemaUtil.getVarCharLength;
 import static org.apache.phoenix.util.SchemaUtil.getVarChars;
 
@@ -91,14 +87,16 @@ import java.sql.SQLException;
 import java.sql.Statement;
 import java.util.ArrayList;
 import java.util.Arrays;
-import java.util.Collection;
 import java.util.Collections;
 import java.util.Comparator;
 import java.util.HashSet;
 import java.util.Iterator;
 import java.util.List;
+import java.util.ListIterator;
 import java.util.Map;
+import java.util.Map.Entry;
 import java.util.NavigableMap;
+import java.util.Properties;
 import java.util.Set;
 
 import org.apache.hadoop.conf.Configuration;
@@ -108,26 +106,21 @@ import org.apache.hadoop.hbase.Coprocessor;
 import org.apache.hadoop.hbase.CoprocessorEnvironment;
 import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.HRegionInfo;
 import org.apache.hadoop.hbase.KeyValue;
 import org.apache.hadoop.hbase.KeyValue.Type;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.Get;
-import org.apache.hadoop.hbase.client.HTableInterface;
 import org.apache.hadoop.hbase.client.Mutation;
 import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.client.Result;
-import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
 import org.apache.hadoop.hbase.coprocessor.CoprocessorException;
 import org.apache.hadoop.hbase.coprocessor.CoprocessorService;
 import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
-import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
-import org.apache.hadoop.hbase.filter.Filter;
-import org.apache.hadoop.hbase.filter.FilterList;
 import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
-import org.apache.hadoop.hbase.filter.PageFilter;
-import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.ipc.RpcServer.Call;
 import org.apache.hadoop.hbase.ipc.RpcUtil;
@@ -140,6 +133,7 @@ import org.apache.hadoop.hbase.util.Pair;
 import org.apache.hadoop.hbase.util.VersionInfo;
 import org.apache.phoenix.cache.GlobalCache;
 import org.apache.phoenix.cache.GlobalCache.FunctionBytesPtr;
+import org.apache.phoenix.compile.ColumnNameTrackingExpressionCompiler;
 import org.apache.phoenix.compile.ColumnResolver;
 import org.apache.phoenix.compile.FromCompiler;
 import org.apache.phoenix.compile.QueryPlan;
@@ -183,6 +177,7 @@ import 

[09/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/4d6dbf9c/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
index ab3a4ab..e39d492 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
@@ -21,6 +21,8 @@ import static 
org.apache.phoenix.exception.SQLExceptionCode.CANNOT_MUTATE_TABLE;
 import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertNull;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
@@ -33,37 +35,46 @@ import java.sql.ResultSetMetaData;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.Collection;
-import java.util.Properties;
+import java.util.List;
 
 import org.apache.commons.lang.ArrayUtils;
-import org.apache.hadoop.hbase.client.HTableInterface;
+import org.apache.hadoop.hbase.client.HTable;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.ResultScanner;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.coprocessor.TephraTransactionalProcessor;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
 import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.ColumnNotFoundException;
+import org.apache.phoenix.schema.PColumn;
 import org.apache.phoenix.schema.PName;
 import org.apache.phoenix.schema.PNameFactory;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.schema.PTableType;
-import org.apache.phoenix.util.PropertiesUtil;
-import org.apache.phoenix.util.StringUtil;
-import org.apache.phoenix.util.TestUtil;
+import org.apache.phoenix.schema.TableNotFoundException;
+import org.apache.phoenix.util.IndexUtil;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.SchemaUtil;
 import org.junit.Test;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
 import org.junit.runners.Parameterized.Parameters;
 
+import com.google.common.base.Function;
+import com.google.common.collect.Lists;
+
 @RunWith(Parameterized.class)
-public class AlterTableWithViewsIT extends ParallelStatsDisabledIT {
-
+public class AlterTableWithViewsIT extends SplitSystemCatalogIT {
+
 private final boolean isMultiTenant;
 private final boolean columnEncoded;
-
-private final String TENANT_SPECIFIC_URL1 = getUrl() + ';' + 
TENANT_ID_ATTRIB + "=tenant1";
-private final String TENANT_SPECIFIC_URL2 = getUrl() + ';' + 
TENANT_ID_ATTRIB + "=tenant2";
+private final String TENANT_SPECIFIC_URL1 = getUrl() + ';' + 
TENANT_ID_ATTRIB + "=" + TENANT1;
+private final String TENANT_SPECIFIC_URL2 = getUrl() + ';' + 
TENANT_ID_ATTRIB + "=" + TENANT2;
 
 public AlterTableWithViewsIT(boolean isMultiTenant, boolean columnEncoded) 
{
 this.isMultiTenant = isMultiTenant;
@@ -77,6 +88,14 @@ public class AlterTableWithViewsIT extends 
ParallelStatsDisabledIT {
 { true, false }, { true, true } });
 }
 
+// transform PColumn to String
+private Function function = new Function(){
+@Override
+public String apply(PColumn input) {
+return input.getName().getString();
+}
+};
+
 private String generateDDL(String format) {
 return generateDDL("", format);
 }
@@ -101,8 +120,9 @@ public class AlterTableWithViewsIT extends 
ParallelStatsDisabledIT {
 public void testAddNewColumnsToBaseTableWithViews() throws Exception {
 try (Connection conn = DriverManager.getConnection(getUrl());
 Connection viewConn = isMultiTenant ? 
DriverManager.getConnection(TENANT_SPECIFIC_URL1) : conn ) {   
-String tableName = generateUniqueName();
-String viewOfTable = tableName + "_VIEW";
+String tableName = SchemaUtil.getTableName(SCHEMA1, 
generateUniqueName());
+String viewOfTable = SchemaUtil.getTableName(SCHEMA2, 
generateUniqueName());
+
 String ddlFormat = "CREATE TABLE IF NOT EXISTS " + tableName + " ("
 + " %s ID char(1) NOT NULL,"
 + " COL1 integer NOT NULL,"
@@ -113,12 +133,13 @@ public class AlterTableWithViewsIT extends 
ParallelStatsDisabledIT {

[07/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/4d6dbf9c/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
index 34292ba..fdfd75b 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
@@ -28,172 +28,119 @@ import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
 
+import java.io.IOException;
+import java.math.BigDecimal;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.sql.Statement;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
 import java.util.List;
+import java.util.Map;
 import java.util.Properties;
 
+import org.apache.curator.shaded.com.google.common.collect.Lists;
+import org.apache.hadoop.hbase.DoNotRetryIOException;
 import org.apache.hadoop.hbase.HColumnDescriptor;
 import org.apache.hadoop.hbase.HTableDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
+import org.apache.hadoop.hbase.client.Mutation;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.coprocessor.ObserverContext;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
+import org.apache.hadoop.hbase.coprocessor.SimpleRegionObserver;
+import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
 import org.apache.phoenix.compile.QueryPlan;
+import org.apache.phoenix.exception.PhoenixIOException;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
+import org.apache.phoenix.jdbc.PhoenixStatement;
 import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.schema.ColumnAlreadyExistsException;
+import org.apache.phoenix.schema.PColumn;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.ReadOnlyTableException;
 import org.apache.phoenix.schema.TableNotFoundException;
+import org.apache.phoenix.util.MetaDataUtil;
 import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
 import org.apache.phoenix.util.SchemaUtil;
+import org.apache.phoenix.util.TestUtil;
+import org.junit.BeforeClass;
 import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameters;
 
+import com.google.common.base.Predicate;
+import com.google.common.collect.Collections2;
+import com.google.common.collect.Maps;
 
+@RunWith(Parameterized.class)
+public class ViewIT extends SplitSystemCatalogIT {
 
-public class ViewIT extends BaseViewIT {
-   
-public ViewIT(boolean transactional) {
-   super(transactional);
-   }
-
-@Test
-public void testReadOnlyOnReadOnlyView() throws Exception {
-Connection earlierCon = DriverManager.getConnection(getUrl());
-Connection conn = DriverManager.getConnection(getUrl());
-String ddl = "CREATE TABLE " + fullTableName + " (k INTEGER NOT NULL 
PRIMARY KEY, v1 DATE) "+ tableDDLOptions;
-conn.createStatement().execute(ddl);
-String fullParentViewName = "V_" + generateUniqueName();
-ddl = "CREATE VIEW " + fullParentViewName + " (v2 VARCHAR) AS SELECT * 
FROM " + fullTableName + " WHERE k > 5";
-conn.createStatement().execute(ddl);
-try {
-conn.createStatement().execute("UPSERT INTO " + fullParentViewName 
+ " VALUES(1)");
-fail();
-} catch (ReadOnlyTableException e) {
-
-}
-for (int i = 0; i < 10; i++) {
-conn.createStatement().execute("UPSERT INTO " + fullTableName + " 
VALUES(" + i + ")");
-}
-conn.commit();
-
-analyzeTable(conn, fullParentViewName, transactional);
-
-List splits = getAllSplits(conn, fullParentViewName);
-assertEquals(4, splits.size());
-
-int count = 0;
-ResultSet rs = conn.createStatement().executeQuery("SELECT k FROM " + 
fullTableName);
-while (rs.next()) {
-assertEquals(count++, rs.getInt(1));
-}
-assertEquals(10, count);
-
-count = 0;
-rs = conn.createStatement().executeQuery("SELECT k FROM " + 
fullParentViewName);
-while (rs.next()) {
-

[06/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/4d6dbf9c/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
index e968e99..4433e12 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
@@ -35,8 +35,6 @@ import java.util.Collection;
 import java.util.List;
 import java.util.Properties;
 
-import jline.internal.Log;
-
 import org.apache.hadoop.hbase.HRegionInfo;
 import org.apache.hadoop.hbase.MetaTableAccessor;
 import org.apache.hadoop.hbase.TableName;
@@ -75,25 +73,27 @@ import org.junit.runners.Parameterized.Parameters;
 
 import com.google.common.primitives.Doubles;
 
+import jline.internal.Log;
+
 @RunWith(Parameterized.class)
 public class MutableIndexIT extends ParallelStatsDisabledIT {
 
 protected final boolean localIndex;
 private final String tableDDLOptions;
-   
+
 public MutableIndexIT(Boolean localIndex, String txProvider, Boolean 
columnEncoded) {
-   this.localIndex = localIndex;
-   StringBuilder optionBuilder = new StringBuilder();
-   if (txProvider != null) {
-   optionBuilder.append("TRANSACTIONAL=true," + 
PhoenixDatabaseMetaData.TRANSACTION_PROVIDER + "='" + txProvider + "'");
-   }
-   if (!columnEncoded) {
+this.localIndex = localIndex;
+StringBuilder optionBuilder = new StringBuilder();
+if (txProvider != null) {
+optionBuilder.append("TRANSACTIONAL=true," + 
PhoenixDatabaseMetaData.TRANSACTION_PROVIDER + "='" + txProvider + "'");
+}
+if (!columnEncoded) {
 if (optionBuilder.length()!=0)
 optionBuilder.append(",");
 optionBuilder.append("COLUMN_ENCODED_BYTES=0");
 }
-   this.tableDDLOptions = optionBuilder.toString();
-   }
+this.tableDDLOptions = optionBuilder.toString();
+}
 
 private static Connection getConnection(Properties props) throws 
SQLException {
 
props.setProperty(QueryServices.INDEX_MUTATE_BATCH_SIZE_THRESHOLD_ATTRIB, 
Integer.toString(1));
@@ -106,7 +106,7 @@ public class MutableIndexIT extends ParallelStatsDisabledIT 
{
 return getConnection(props);
 }
 
-   
@Parameters(name="MutableIndexIT_localIndex={0},transactional={1},columnEncoded={2}")
 // name is used by failsafe as file name in reports
+
@Parameters(name="MutableIndexIT_localIndex={0},transactional={1},columnEncoded={2}")
 // name is used by failsafe as file name in reports
 public static Collection data() {
 return Arrays.asList(new Object[][] { 
 { false, null, false }, { false, null, true },
@@ -121,16 +121,16 @@ public class MutableIndexIT extends 
ParallelStatsDisabledIT {
 @Test
 public void testCoveredColumnUpdates() throws Exception {
 try (Connection conn = getConnection()) {
-   conn.setAutoCommit(false);
-   String tableName = "TBL_" + generateUniqueName();
-   String indexName = "IDX_" + generateUniqueName();
-   String fullTableName = 
SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, tableName);
-   String fullIndexName = 
SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, indexName);
+conn.setAutoCommit(false);
+String tableName = "TBL_" + generateUniqueName();
+String indexName = "IDX_" + generateUniqueName();
+String fullTableName = 
SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, tableName);
+String fullIndexName = 
SchemaUtil.getTableName(TestUtil.DEFAULT_SCHEMA_NAME, indexName);
 
-   TestUtil.createMultiCFTestTable(conn, fullTableName, 
tableDDLOptions);
+TestUtil.createMultiCFTestTable(conn, fullTableName, 
tableDDLOptions);
 populateMultiCFTestTable(fullTableName);
 conn.createStatement().execute("CREATE " + (localIndex ? " LOCAL " 
: "") + " INDEX " + indexName + " ON " + fullTableName 
-   + " (char_col1 ASC, int_col1 ASC) INCLUDE (long_col1, 
long_col2)");
++ " (char_col1 ASC, int_col1 ASC) INCLUDE (long_col1, 
long_col2)");
 
 String query = "SELECT char_col1, int_col1, long_col2 from " + 
fullTableName;
 ResultSet rs = conn.createStatement().executeQuery("EXPLAIN " + 
query);
@@ -203,7 +203,7 @@ public class MutableIndexIT extends ParallelStatsDisabledIT 
{
 query = "SELECT b.* from " + fullTableName + " where int_col1 
= 4";
 rs = 

[10/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and 
Rahul Gidwani)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/4d6dbf9c
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/4d6dbf9c
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/4d6dbf9c

Branch: refs/heads/4.x-HBase-1.2
Commit: 4d6dbf9cb20a8299e9484c0d75aceb5e4862ea12
Parents: 766248b
Author: Thomas D'Silva 
Authored: Sat Jul 14 11:34:47 2018 -0700
Committer: Thomas D'Silva 
Committed: Wed Jul 18 18:16:04 2018 -0700

--
 .../StatisticsCollectionRunTrackerIT.java   |2 +-
 .../AlterMultiTenantTableWithViewsIT.java   |  284 +-
 .../apache/phoenix/end2end/AlterTableIT.java|   45 +-
 .../phoenix/end2end/AlterTableWithViewsIT.java  |  545 ++--
 .../phoenix/end2end/AppendOnlySchemaIT.java |4 +-
 .../end2end/BaseTenantSpecificViewIndexIT.java  |   38 +-
 .../end2end/ExplainPlanWithStatsEnabledIT.java  |   69 +-
 .../MigrateSystemTablesToSystemNamespaceIT.java |   38 +-
 .../apache/phoenix/end2end/PhoenixDriverIT.java |   37 +-
 .../end2end/QueryDatabaseMetaDataIT.java|9 +-
 .../apache/phoenix/end2end/SaltedViewIT.java|   45 -
 .../phoenix/end2end/SplitSystemCatalogIT.java   |   80 +
 .../end2end/SplitSystemCatalogTests.java|   11 +
 .../StatsEnabledSplitSystemCatalogIT.java   |  244 ++
 .../SystemCatalogCreationOnConnectionIT.java|   34 +-
 .../apache/phoenix/end2end/SystemCatalogIT.java |   31 +-
 .../end2end/TenantSpecificTablesDDLIT.java  |   13 +-
 .../end2end/TenantSpecificViewIndexIT.java  |   68 +-
 .../org/apache/phoenix/end2end/UpgradeIT.java   |  319 +--
 .../java/org/apache/phoenix/end2end/ViewIT.java |  868 --
 .../phoenix/end2end/index/BaseIndexIT.java  |   43 +-
 .../index/ChildViewsUseParentViewIndexIT.java   |7 +-
 .../phoenix/end2end/index/DropColumnIT.java |  117 -
 .../phoenix/end2end/index/IndexMetadataIT.java  |4 +-
 .../phoenix/end2end/index/MutableIndexIT.java   |  842 +++---
 .../phoenix/end2end/index/ViewIndexIT.java  |   68 +-
 .../apache/phoenix/execute/PartialCommitIT.java |4 +-
 .../SystemCatalogWALEntryFilterIT.java  |   85 +-
 .../org/apache/phoenix/rpc/UpdateCacheIT.java   |9 +-
 .../ColumnNameTrackingExpressionCompiler.java   |   46 +
 .../phoenix/compile/CreateTableCompiler.java|2 +-
 .../apache/phoenix/compile/FromCompiler.java|   15 +-
 .../phoenix/compile/ListJarsQueryPlan.java  |2 +-
 .../apache/phoenix/compile/TraceQueryPlan.java  |2 +-
 .../apache/phoenix/compile/UnionCompiler.java   |2 +-
 .../apache/phoenix/compile/UpsertCompiler.java  |2 +-
 .../coprocessor/MetaDataEndpointImpl.java   | 2577 +-
 .../phoenix/coprocessor/MetaDataProtocol.java   |3 +-
 .../apache/phoenix/coprocessor/TableInfo.java   |   79 +
 .../coprocessor/TableViewFinderResult.java  |   48 +
 .../apache/phoenix/coprocessor/ViewFinder.java  |  144 +
 .../coprocessor/WhereConstantParser.java|  106 +
 .../coprocessor/generated/MetaDataProtos.java   |  626 -
 .../coprocessor/generated/PTableProtos.java |  323 ++-
 .../phoenix/expression/LikeExpression.java  |2 +-
 .../apache/phoenix/jdbc/PhoenixConnection.java  |8 +-
 .../phoenix/jdbc/PhoenixDatabaseMetaData.java   |  534 ++--
 .../apache/phoenix/jdbc/PhoenixStatement.java   |8 +-
 .../phoenix/parse/DropTableStatement.java   |8 +-
 .../apache/phoenix/parse/ParseNodeFactory.java  |2 +-
 .../phoenix/query/ConnectionQueryServices.java  |   17 +-
 .../query/ConnectionQueryServicesImpl.java  |   43 +-
 .../query/ConnectionlessQueryServicesImpl.java  |   13 +-
 .../query/DelegateConnectionQueryServices.java  |8 +-
 .../apache/phoenix/query/QueryConstants.java|   14 +-
 .../org/apache/phoenix/query/QueryServices.java |2 +
 .../phoenix/query/QueryServicesOptions.java |2 +
 .../SystemCatalogWALEntryFilter.java|   45 +-
 .../apache/phoenix/schema/DelegateColumn.java   |   15 +
 .../apache/phoenix/schema/MetaDataClient.java   |   57 +-
 .../phoenix/schema/MetaDataSplitPolicy.java |   26 +-
 .../java/org/apache/phoenix/schema/PColumn.java |   12 +
 .../org/apache/phoenix/schema/PColumnImpl.java  |  113 +-
 .../apache/phoenix/schema/PMetaDataImpl.java|3 +-
 .../java/org/apache/phoenix/schema/PTable.java  |   17 +-
 .../org/apache/phoenix/schema/PTableImpl.java   |  279 +-
 .../org/apache/phoenix/schema/PTableKey.java|4 +-
 .../schema/ParentTableNotFoundException.java|   30 +
 .../org/apache/phoenix/schema/SaltingUtil.java  |4 +-
 .../apache/phoenix/schema/TableProperty.java|   22 +-
 .../java/org/apache/phoenix/util/IndexUtil.java |   16 +-
 .../org/apache/phoenix/util/MetaDataUtil.java   |  171 +-
 .../org/apache/phoenix/util/PhoenixRuntime.java |1 -
 

[08/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/4d6dbf9c/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogIT.java
new file mode 100644
index 0000000..51d3b86
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogIT.java
@@ -0,0 +1,80 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.SQLException;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.curator.shaded.com.google.common.collect.Lists;
+import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.BeforeClass;
+import org.junit.experimental.categories.Category;
+
+import com.google.common.collect.Maps;
+
+/**
+ * Base class for tests that run with split SYSTEM.CATALOG.
+ * 
+ */
+@Category(SplitSystemCatalogTests.class)
+public class SplitSystemCatalogIT extends BaseTest {
+
+    protected static String SCHEMA1 = "SCHEMA1";
+    protected static String SCHEMA2 = "SCHEMA2";
+    protected static String SCHEMA3 = "SCHEMA3";
+    protected static String SCHEMA4 = "SCHEMA4";
+
+    protected static String TENANT1 = "tenant1";
+    protected static String TENANT2 = "tenant2";
+
+    @BeforeClass
+    public static void doSetup() throws Exception {
+        NUM_SLAVES_BASE = 6;
+        Map<String, String> props = Collections.emptyMap();
+        boolean splitSystemCatalog = (driver == null);
+        setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
+        // Split SYSTEM.CATALOG once after the mini-cluster is started
+        if (splitSystemCatalog) {
+            splitSystemCatalog();
+        }
+    }
+
+    protected static void splitSystemCatalog() throws SQLException, Exception {
+        // Opening a connection ensures the SYSTEM tables exist before the split
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+        }
+        String tableName = "TABLE";
+        String fullTableName1 = SchemaUtil.getTableName(SCHEMA1, tableName);
+        String fullTableName2 = SchemaUtil.getTableName(SCHEMA2, tableName);
+        String fullTableName3 = SchemaUtil.getTableName(SCHEMA3, tableName);
+        String fullTableName4 = SchemaUtil.getTableName(SCHEMA4, tableName);
+        ArrayList<String> tableList = Lists.newArrayList(fullTableName1, fullTableName2, fullTableName3);
+        Map<String, List<String>> tenantToTableMap = Maps.newHashMap();
+        tenantToTableMap.put(null, tableList);
+        tenantToTableMap.put(TENANT1, Lists.newArrayList(fullTableName2, fullTableName3));
+        tenantToTableMap.put(TENANT2, Lists.newArrayList(fullTableName4));
+        splitSystemCatalog(tenantToTableMap);
+    }
+
+}
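
For context, concrete suites pick up the pre-split SYSTEM.CATALOG simply by extending this base class. A minimal sketch of a hypothetical subclass; the class, table, and view names below are illustrative and not part of this commit:

    package org.apache.phoenix.end2end;

    import java.sql.Connection;
    import java.sql.DriverManager;

    import org.junit.Test;
    import org.junit.experimental.categories.Category;

    // Hypothetical subclass: reuses the @BeforeClass split done by SplitSystemCatalogIT
    @Category(SplitSystemCatalogTests.class)
    public class ExampleSplitCatalogIT extends SplitSystemCatalogIT {

        @Test
        public void testCreateViewAcrossCatalogRegions() throws Exception {
            try (Connection conn = DriverManager.getConnection(getUrl())) {
                conn.createStatement().execute("CREATE TABLE " + SCHEMA1
                        + ".T (ID INTEGER PRIMARY KEY, V VARCHAR)");
                // The view's metadata may land in a different SYSTEM.CATALOG region
                conn.createStatement().execute("CREATE VIEW " + SCHEMA2
                        + ".V AS SELECT * FROM " + SCHEMA1 + ".T");
            }
        }
    }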

http://git-wip-us.apache.org/repos/asf/phoenix/blob/4d6dbf9c/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogTests.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogTests.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogTests.java
new file mode 100644
index 0000000..27fc5c6
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SplitSystemCatalogTests.java
@@ -0,0 +1,11 @@
+package org.apache.phoenix.end2end;
+
+import java.lang.annotation.ElementType;
+import java.lang.annotation.Retention;
+import java.lang.annotation.RetentionPolicy;
+import java.lang.annotation.Target;
+
+@Retention(RetentionPolicy.RUNTIME)
+@Target(ElementType.TYPE)
+public @interface SplitSystemCatalogTests {
+}
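
The empty annotation above acts purely as a JUnit category marker: SplitSystemCatalogIT tags itself with @Category(SplitSystemCatalogTests.class) so build tooling can select or exclude these ITs as a group. A minimal sketch of selecting the category with the stock JUnit 4 Categories runner; the suite class itself is hypothetical:

    import org.junit.experimental.categories.Categories;
    import org.junit.experimental.categories.Categories.IncludeCategory;
    import org.junit.runner.RunWith;
    import org.junit.runners.Suite.SuiteClasses;

    // Runs only tests annotated with @Category(SplitSystemCatalogTests.class)
    @RunWith(Categories.class)
    @IncludeCategory(SplitSystemCatalogTests.class)
    @SuiteClasses({ SplitSystemCatalogIT.class })
    public class SplitSystemCatalogSuite {
    }

In a Maven build the same effect is typically achieved by passing the annotation's fully qualified name to the failsafe plugin's groups parameter.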

http://git-wip-us.apache.org/repos/asf/phoenix/blob/4d6dbf9c/phoenix-core/src/it/java/org/apache/phoenix/end2end/StatsEnabledSplitSystemCatalogIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/StatsEnabledSplitSystemCatalogIT.java 

[02/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/4d6dbf9c/phoenix-core/src/main/java/org/apache/phoenix/schema/PColumnImpl.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/PColumnImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/PColumnImpl.java
index 45aca98..a267629 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/PColumnImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/PColumnImpl.java
@@ -17,6 +17,7 @@
  */
 package org.apache.phoenix.schema;
 
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.util.ByteStringer;
 import org.apache.phoenix.coprocessor.generated.PTableProtos;
 import org.apache.phoenix.query.QueryConstants;
@@ -42,36 +43,63 @@ public class PColumnImpl implements PColumn {
     private boolean isRowTimestamp;
     private boolean isDynamic;
     private byte[] columnQualifierBytes;
-
+    private boolean derived;
+    private long timestamp;
+
     public PColumnImpl() {
     }
 
-    public PColumnImpl(PName name,
-                       PName familyName,
-                       PDataType dataType,
-                       Integer maxLength,
-                       Integer scale,
-                       boolean nullable,
-                       int position,
-                       SortOrder sortOrder, Integer arrSize, byte[] viewConstant, boolean isViewReferenced, String expressionStr, boolean isRowTimestamp, boolean isDynamic, byte[] columnQualifierBytes) {
-        init(name, familyName, dataType, maxLength, scale, nullable, position, sortOrder, arrSize, viewConstant, isViewReferenced, expressionStr, isRowTimestamp, isDynamic, columnQualifierBytes);
+    public PColumnImpl(PColumn column, int position) {
+        this(column, column.isDerived(), position);
     }
 
-    public PColumnImpl(PColumn column, int position) {
+    public PColumnImpl(PColumn column, byte[] viewConstant, boolean isViewReferenced) {
+        this(column.getName(), column.getFamilyName(), column.getDataType(), column.getMaxLength(),
+                column.getScale(), column.isNullable(), column.getPosition(), column.getSortOrder(), column.getArraySize(), viewConstant, isViewReferenced, column.getExpressionStr(), column.isRowTimestamp(), column.isDynamic(), column.getColumnQualifierBytes(),
+                column.getTimestamp(), column.isDerived());
+    }
+
+    public PColumnImpl(PColumn column, boolean derivedColumn, int position) {
+        this(column, derivedColumn, position, column.getViewConstant());
+    }
+
+    public PColumnImpl(PColumn column, boolean derivedColumn, int position, byte[] viewConstant) {
         this(column.getName(), column.getFamilyName(), column.getDataType(), column.getMaxLength(),
-                column.getScale(), column.isNullable(), position, column.getSortOrder(), column.getArraySize(), column.getViewConstant(), column.isViewReferenced(), column.getExpressionStr(), column.isRowTimestamp(), column.isDynamic(), column.getColumnQualifierBytes());
+                column.getScale(), column.isNullable(), position, column.getSortOrder(), column.getArraySize(), viewConstant, column.isViewReferenced(), column.getExpressionStr(), column.isRowTimestamp(), column.isDynamic(), column.getColumnQualifierBytes(),
+                column.getTimestamp(), derivedColumn);
+    }
+
+    public PColumnImpl(PName name, PName familyName, PDataType dataType, Integer maxLength, Integer scale, boolean nullable,
+            int position, SortOrder sortOrder, Integer arrSize, byte[] viewConstant, boolean isViewReferenced, String expressionStr, boolean isRowTimestamp, boolean isDynamic,
+            byte[] columnQualifierBytes, long timestamp) {
+        this(name, familyName, dataType, maxLength, scale, nullable, position, sortOrder, arrSize, viewConstant, isViewReferenced, expressionStr, isRowTimestamp, isDynamic, columnQualifierBytes, timestamp, false);
+    }
+
+    public PColumnImpl(PName name, PName familyName, PDataType dataType, Integer maxLength, Integer scale, boolean nullable,
+            int position, SortOrder sortOrder, Integer arrSize, byte[] viewConstant, boolean isViewReferenced, String expressionStr, boolean isRowTimestamp, boolean isDynamic,
+            byte[] columnQualifierBytes, long timestamp, boolean derived) {
+        init(name, familyName, dataType, maxLength, scale, nullable, position, sortOrder, arrSize, viewConstant, isViewReferenced, expressionStr, isRowTimestamp, isDynamic, columnQualifierBytes, timestamp, derived);
+    }
+
+    private PColumnImpl(PName familyName, PName columnName, Long timestamp) {
+        this.familyName = familyName;
+        this.name = columnName;
+        this.derived = true;
+        if (timestamp != null) {
+            this.timestamp = timestamp;
+        }
     }
 
-    private void init(PName name,
-            PName familyName,
-            PDataType 
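
The new derived and timestamp fields threaded through the widened constructors above appear to let a view represent columns it inherits from an ancestor without copying them at DDL time. A sketch of calling the new full constructor, assuming only the signature shown in this hunk; all values are illustrative:

    import org.apache.hadoop.hbase.HConstants;
    import org.apache.phoenix.schema.PColumn;
    import org.apache.phoenix.schema.PColumnImpl;
    import org.apache.phoenix.schema.PName;
    import org.apache.phoenix.schema.PNameFactory;
    import org.apache.phoenix.schema.SortOrder;
    import org.apache.phoenix.schema.types.PVarchar;

    public class DerivedColumnExample {
        static PColumn exampleDerivedColumn() {
            PName family = PNameFactory.newName("0");
            PName name = PNameFactory.newName("V1");
            // The two trailing arguments are the additions: the timestamp of the
            // column's SYSTEM.CATALOG row, and whether the column is derived
            // (inherited) from an ancestor table or view.
            return new PColumnImpl(name, family, PVarchar.INSTANCE,
                    null /* maxLength */, null /* scale */, true /* nullable */,
                    1 /* position */, SortOrder.getDefault(), null /* arraySize */,
                    null /* viewConstant */, false /* isViewReferenced */,
                    null /* expressionStr */, false /* isRowTimestamp */,
                    false /* isDynamic */, null /* columnQualifierBytes */,
                    HConstants.LATEST_TIMESTAMP, true /* derived */);
        }
    }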

[03/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/4d6dbf9c/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
index 8dd4a88..dab1048 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
@@ -29,9 +29,10 @@ import java.util.Collections;
 import java.util.List;
 
 import org.apache.hadoop.hbase.Cell;
-import org.apache.hadoop.hbase.KeyValue;
+import org.apache.hadoop.hbase.CellComparator;
+import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.compile.ColumnProjector;
 import org.apache.phoenix.compile.ExpressionProjector;
@@ -40,7 +41,12 @@ import org.apache.phoenix.compile.StatementContext;
 import org.apache.phoenix.coprocessor.MetaDataProtocol;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.exception.SQLExceptionInfo;
+import org.apache.phoenix.expression.Expression;
+import org.apache.phoenix.expression.KeyValueColumnExpression;
+import org.apache.phoenix.expression.LikeExpression;
+import org.apache.phoenix.expression.LiteralExpression;
 import org.apache.phoenix.expression.RowKeyColumnExpression;
+import org.apache.phoenix.expression.StringBasedLikeExpression;
 import org.apache.phoenix.expression.function.ExternalSqlTypeIdFunction;
 import org.apache.phoenix.expression.function.IndexStateNameFunction;
 import org.apache.phoenix.expression.function.SQLIndexTypeFunction;
@@ -48,25 +54,33 @@ import org.apache.phoenix.expression.function.SQLTableTypeFunction;
 import org.apache.phoenix.expression.function.SQLViewTypeFunction;
 import org.apache.phoenix.expression.function.SqlTypeNameFunction;
 import org.apache.phoenix.expression.function.TransactionProviderNameFunction;
-import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
-import org.apache.phoenix.iterate.DelegateResultIterator;
 import org.apache.phoenix.iterate.MaterializedResultIterator;
 import org.apache.phoenix.iterate.ResultIterator;
+import org.apache.phoenix.parse.LikeParseNode.LikeType;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.schema.MetaDataClient;
+import org.apache.phoenix.schema.PColumn;
+import org.apache.phoenix.schema.PColumnImpl;
 import org.apache.phoenix.schema.PDatum;
 import org.apache.phoenix.schema.PName;
+import org.apache.phoenix.schema.PNameFactory;
+import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTable.LinkType;
 import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.RowKeyValueAccessor;
 import org.apache.phoenix.schema.SortOrder;
-import org.apache.phoenix.schema.tuple.ResultTuple;
+import org.apache.phoenix.schema.tuple.MultiKeyValueTuple;
 import org.apache.phoenix.schema.tuple.SingleKeyValueTuple;
 import org.apache.phoenix.schema.tuple.Tuple;
+import org.apache.phoenix.schema.types.PBoolean;
 import org.apache.phoenix.schema.types.PDataType;
+import org.apache.phoenix.schema.types.PInteger;
+import org.apache.phoenix.schema.types.PSmallint;
+import org.apache.phoenix.schema.types.PVarbinary;
 import org.apache.phoenix.schema.types.PVarchar;
 import org.apache.phoenix.util.ByteUtil;
 import org.apache.phoenix.util.KeyValueUtil;
+import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.StringUtil;
 
@@ -336,6 +350,11 @@ public class PhoenixDatabaseMetaData implements DatabaseMetaData {
     public static final byte[] COLUMN_QUALIFIER_COUNTER_BYTES = Bytes.toBytes(COLUMN_QUALIFIER_COUNTER);
     public static final String USE_STATS_FOR_PARALLELIZATION = "USE_STATS_FOR_PARALLELIZATION";
     public static final byte[] USE_STATS_FOR_PARALLELIZATION_BYTES = Bytes.toBytes(USE_STATS_FOR_PARALLELIZATION);
+
+    public static final String SYSTEM_CHILD_LINK_TABLE = "CHILD_LINK";
+    public static final String SYSTEM_CHILD_LINK_NAME = SchemaUtil.getTableName(SYSTEM_CATALOG_SCHEMA, SYSTEM_CHILD_LINK_TABLE);
+    public static final byte[] SYSTEM_CHILD_LINK_NAME_BYTES = Bytes.toBytes(SYSTEM_CHILD_LINK_NAME);
+    public static final TableName SYSTEM_LINK_HBASE_TABLE_NAME = TableName.valueOf(SYSTEM_CHILD_LINK_NAME);
 
 
 //SYSTEM:LOG
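
These constants introduce SYSTEM.CHILD_LINK, the new table that takes over the parent-to-child view links formerly kept in SYSTEM.CATALOG, which is what frees SYSTEM.CATALOG to be split across regions (see the moveChildLinks upgrade step later in this patch). A rough sketch of scanning it through the query services, assuming the post-patch Table-returning getTable; the method and full-table scan are illustrative only:

    import java.sql.DriverManager;

    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.phoenix.jdbc.PhoenixConnection;
    import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;

    public class ChildLinkExample {
        static void dumpChildLinks(String jdbcUrl) throws Exception {
            try (PhoenixConnection conn = DriverManager.getConnection(jdbcUrl)
                         .unwrap(PhoenixConnection.class);
                 Table childLink = conn.getQueryServices()
                         .getTable(PhoenixDatabaseMetaData.SYSTEM_CHILD_LINK_NAME_BYTES);
                 ResultScanner scanner = childLink.getScanner(new Scan())) {
                for (Result r : scanner) {
                    System.out.println(r); // each row links a parent to a child view
                }
            }
        }
    }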
@@ -467,179 +486,352 @@ public class PhoenixDatabaseMetaData implements DatabaseMetaData {
     private static void appendConjunction(StringBuilder buf) {
         buf.append(buf.length() == 0 ? "" : " and ");
     }
-
+
+    private static final PColumnImpl TENANT_ID_COLUMN = 

[01/10] phoenix git commit: PHOENIX-3534 Support multi region SYSTEM.CATALOG table (Thomas D'Silva and Rahul Gidwani)

2018-07-19 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 766248ba9 -> 4d6dbf9cb


http://git-wip-us.apache.org/repos/asf/phoenix/blob/4d6dbf9c/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java 
b/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
index b127408..9d5583b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/UpgradeUtil.java
@@ -82,12 +82,12 @@ import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.Get;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
-import org.apache.hadoop.hbase.client.HTableInterface;
 import org.apache.hadoop.hbase.client.Mutation;
 import org.apache.hadoop.hbase.client.Put;
 import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.regionserver.LocalIndexSplitter;
 import org.apache.hadoop.hbase.snapshot.SnapshotCreationException;
@@ -96,6 +96,9 @@ import org.apache.phoenix.coprocessor.MetaDataEndpointImpl;
 import org.apache.phoenix.coprocessor.MetaDataProtocol;
 import org.apache.phoenix.coprocessor.MetaDataProtocol.MetaDataMutationResult;
 import org.apache.phoenix.coprocessor.MetaDataProtocol.MutationCode;
+import org.apache.phoenix.coprocessor.TableInfo;
+import org.apache.phoenix.coprocessor.TableViewFinderResult;
+import org.apache.phoenix.coprocessor.ViewFinder;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
 import org.apache.phoenix.query.QueryConstants;
@@ -175,11 +178,6 @@ public class UpgradeUtil {
     private static final String DELETE_LINK = "DELETE FROM " + SYSTEM_CATALOG_SCHEMA + "." + SYSTEM_CATALOG_TABLE
             + " WHERE (" + TABLE_SCHEM + "=? OR (" + TABLE_SCHEM + " IS NULL AND ? IS NULL)) AND " + TABLE_NAME + "=? AND " + COLUMN_FAMILY + "=? AND " + LINK_TYPE + " = " + LinkType.PHYSICAL_TABLE.getSerializedValue();
 
-    private static final String GET_VIEWS_QUERY = "SELECT " + TENANT_ID + "," + TABLE_SCHEM + "," + TABLE_NAME
-            + " FROM " + SYSTEM_CATALOG_SCHEMA + "." + SYSTEM_CATALOG_TABLE + " WHERE " + COLUMN_FAMILY + " = ? AND "
-            + LINK_TYPE + " = " + LinkType.PHYSICAL_TABLE.getSerializedValue() + " AND ( " + TABLE_TYPE + "=" + "'"
-            + PTableType.VIEW.getSerializedValue() + "' OR " + TABLE_TYPE + " IS NULL) ORDER BY "+TENANT_ID;
-
     private UpgradeUtil() {
     }
 
@@ -225,8 +223,8 @@ public class UpgradeUtil {
         scan.setRaw(true);
         scan.setMaxVersions();
         ResultScanner scanner = null;
-        HTableInterface source = null;
-        HTableInterface target = null;
+        Table source = null;
+        Table target = null;
         try {
             source = conn.getQueryServices().getTable(sourceName);
             target = conn.getQueryServices().getTable(targetName);
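
This hunk is part of the mechanical migration from HBase's deprecated HTableInterface to the Table interface. Outside of Phoenix's query services, the same Table handle comes from an HBase Connection; a minimal standalone sketch with stock HBase 1.x client API, where the configuration, table name, and row key are placeholders:

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class TableApiExample {
        static Result readOneRow() throws Exception {
            // Table replaces HTableInterface; it is obtained from a Connection
            // rather than instantiated directly.
            try (Connection hbase = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Table table = hbase.getTable(TableName.valueOf("SYSTEM.CATALOG"))) {
                return table.get(new Get(Bytes.toBytes("rowkey")));
            }
        }
    }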
@@ -646,7 +644,7 @@ public class UpgradeUtil {
             logger.info("Upgrading SYSTEM.SEQUENCE table");
 
             byte[] seqTableKey = SchemaUtil.getTableKey(null, PhoenixDatabaseMetaData.SYSTEM_SEQUENCE_SCHEMA, PhoenixDatabaseMetaData.SYSTEM_SEQUENCE_TABLE);
-            HTableInterface sysTable = conn.getQueryServices().getTable(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES);
+            Table sysTable = conn.getQueryServices().getTable(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES);
             try {
                 logger.info("Setting SALT_BUCKETS property of SYSTEM.SEQUENCE to " + SaltingUtil.MAX_BUCKET_NUM);
                 KeyValue saltKV = KeyValueUtil.newKeyValue(seqTableKey, 
@@ -699,7 +697,7 @@
                 Scan scan = new Scan();
                 scan.setRaw(true);
                 scan.setMaxVersions();
-                HTableInterface seqTable = conn.getQueryServices().getTable(PhoenixDatabaseMetaData.SYSTEM_SEQUENCE_NAME_BYTES);
+                Table seqTable = conn.getQueryServices().getTable(PhoenixDatabaseMetaData.SYSTEM_SEQUENCE_NAME_BYTES);
                 try {
                     boolean committed = false;
                     logger.info("Adding salt byte to all SYSTEM.SEQUENCE rows");
@@ -1149,6 +1147,78 @@ public class UpgradeUtil {
         }
     }
 
+    /**
+     * Move child links from SYSTEM.CATALOG to SYSTEM.CHILD_LINK
+     * @param oldMetaConnection caller should take care of closing the passed connection appropriately
+     * @throws SQLException
+     */
+    public static void moveChildLinks(PhoenixConnection oldMetaConnection) throws SQLException {
+        PhoenixConnection metaConnection = 

Build failed in Jenkins: Phoenix Compile Compatibility with HBase #701

2018-07-19 Thread Apache Jenkins Server
See 


--
[...truncated 37.18 KB...]
  symbol:   class HBaseRpcController
  location: class org.apache.hadoop.hbase.ipc.controller.ClientRpcControllerFactory
[ERROR] :[52,9] cannot find symbol
  symbol:   class HBaseRpcController
  location: class org.apache.hadoop.hbase.ipc.controller.ClientRpcControllerFactory
[ERROR] :[180,14] cannot find symbol
  symbol: class MetricRegistry
[ERROR] :[179,7] method does not override or implement a method from a supertype
[ERROR] :[454,78] cannot find symbol
  symbol: class HBaseRpcController
[ERROR] :[432,17] cannot find symbol
  symbol: class HBaseRpcController
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.0:compile (default-compile) on project phoenix-core: Compilation failure: Compilation failure:
[ERROR] :[34,39] cannot find symbol
[ERROR]   symbol:   class MetricRegistry
[ERROR]   location: package org.apache.hadoop.hbase.metrics
[ERROR] :[144,16] cannot find symbol
[ERROR]   symbol:   class MetricRegistry
[ERROR]   location: class org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.PhoenixMetaDataControllerEnvironment
[ERROR] :[24,35] cannot find symbol
[ERROR]   symbol:   class DelegatingHBaseRpcController
[ERROR]   location: package org.apache.hadoop.hbase.ipc
[ERROR] :[25,35] cannot find symbol
[ERROR]   symbol:   class HBaseRpcController
[ERROR]   location: package org.apache.hadoop.hbase.ipc
[ERROR] :[37,37] cannot find symbol
[ERROR]   symbol: class DelegatingHBaseRpcController
[ERROR] :[56,38] cannot find symbol
[ERROR]   symbol:   class HBaseRpcController
[ERROR]   location: class org.apache.hadoop.hbase.ipc.controller.MetadataRpcController
[ERROR] :[26,35] cannot find symbol
[ERROR]   symbol:   class HBaseRpcController
[ERROR]   location: package org.apache.hadoop.hbase.ipc
[ERROR] :[40,12] cannot find symbol
[ERROR]   symbol:   class HBaseRpcController
[ERROR]   location: class org.apache.hadoop.hbase.ipc.controller.InterRegionServerMetadataRpcControllerFactory
[ERROR] :[46,12] cannot find symbol
[ERROR]   symbol:   class HBaseRpcController
[ERROR]   location: class org.apache.hadoop.hbase.ipc.controller.InterRegionServerMetadataRpcControllerFactory
[ERROR]