Apache-Phoenix | 4.x-HBase-1.2 | Build Successful

2018-01-31 Thread Apache Jenkins Server
4.x-HBase-1.2 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.x-HBase-1.2

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.x-HBase-1.2/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.x-HBase-1.2/lastCompletedBuild/testReport/

Changes
[vincentpoon] PHOENIX-4130 Avoid server retries for mutable indexes

[vincentpoon] PHOENIX-4130 Avoid server retries for mutable indexes (Addendum)



Build times for the last couple of runs (latest build is the rightmost) | Legend: blue = normal, red = test failure, gray = timeout


Jenkins build is back to normal : Phoenix-4.x-HBase-1.2 #254

2018-01-31 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : Phoenix-4.x-HBase-1.3 #26

2018-01-31 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Phoenix | Master #1924

2018-01-31 Thread Apache Jenkins Server
See 


Changes:

[vincentpoon] PHOENIX-4130 Avoid server retries for mutable indexes (Addendum)

--
[...truncated 110.19 KB...]
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 496.603 s - in org.apache.phoenix.end2end.index.LocalMutableTxIndexIT
[INFO] Running org.apache.phoenix.end2end.join.SortMergeJoinLocalIndexIT
[INFO] Running org.apache.phoenix.end2end.join.SubqueryIT
[INFO] Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 355.683 s - in org.apache.phoenix.end2end.join.HashJoinNoIndexIT
[INFO] Tests run: 64, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 747.285 s - in org.apache.phoenix.end2end.index.MutableIndexIT
[INFO] Running org.apache.phoenix.end2end.join.SubqueryUsingSortMergeJoinIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 370.371 s - in org.apache.phoenix.end2end.join.SortMergeJoinNoIndexIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
[INFO] Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 623.695 s - in org.apache.phoenix.end2end.join.HashJoinGlobalIndexIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.446 s - in org.apache.phoenix.end2end.salted.SaltedTableIT
[INFO] Running org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.277 s - in org.apache.phoenix.iterate.PhoenixQueryTimeoutIT
[INFO] Running org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Running org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
[INFO] Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 50.072 s - in org.apache.phoenix.end2end.salted.SaltedTableUpsertSelectIT
[INFO] Tests run: 21, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 388.324 s - in org.apache.phoenix.end2end.join.SubqueryIT
[INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.334 s - in org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
[INFO] Running org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Running org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.201 s - in org.apache.phoenix.replication.SystemCatalogWALEntryFilterIT
[INFO] Running org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.332 s - in org.apache.phoenix.trace.PhoenixTableMetricsWriterIT
[INFO] Running org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 18.744 s - in org.apache.phoenix.tx.FlappingTransactionIT
[INFO] Running org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Running org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 75.432 s - in org.apache.phoenix.iterate.RoundRobinResultIteratorIT
[INFO] Running org.apache.phoenix.tx.TransactionIT
[INFO] Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.247 s - in org.apache.phoenix.rpc.UpdateCacheIT
[INFO] Running org.apache.phoenix.tx.TxCheckpointIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 103.594 s - in org.apache.phoenix.trace.PhoenixTracingEndToEndIT
[INFO] Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 98.679 s - in org.apache.phoenix.tx.TransactionIT
[INFO] Running org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 641.843 s - in org.apache.phoenix.end2end.join.SortMergeJoinGlobalIndexIT
[INFO] Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 283.657 s - in org.apache.phoenix.end2end.join.SubqueryUsingSortMergeJoinIT
[INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.947 s - in org.apache.phoenix.util.IndexScrutinyIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 825.084 s - in org.apache.phoenix.end2end.join.HashJoinLocalIndexIT
[INFO] Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 827.735 s - in org.apache.phoenix.end2end.join.SortMergeJoinLocalIndexIT
[WARNING] Tests run: 52, Failures: 0, Errors: 0, Skipped: 4, Time elapsed: 372.775 s - in org.apache.phoenix.tx.ParameterizedTransactionIT
[INFO] Tests run: 40, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 436.01 s - in org.apache.phoenix.tx.TxCheckpointIT
[INFO] 
[INFO] Results:
[INFO] 
[ERROR] Failures: 
[ERROR]   ConcurrentMutationsIT.testConcurrentDeletesAndUpsertValues:214 Expected to find PK in data table: (0,0)
[ERROR]   DefaultColumnValueIT.testDefaultIndexed:978
[ERROR]   RowValueConstructorIT.testRVCLastPkIsTable1stPkIndex:1584
[ERROR]   IndexMetadataIT.testMutableTableOnlyHasPrimaryKe

Build failed in Jenkins: Phoenix-4.x-HBase-1.3 #25

2018-01-31 Thread Apache Jenkins Server
See 

--
Started by an SCM change
Started by an SCM change
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H27 (ubuntu xenial) in workspace 

java.io.IOException: Failed to mkdirs: 

at hudson.FilePath.mkdirs(FilePath.java:1170)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1200)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:421)
Retrying after 10 seconds
java.io.IOException: Failed to mkdirs: 

at hudson.FilePath.mkdirs(FilePath.java:1170)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1200)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:421)
Retrying after 10 seconds
java.io.IOException: Failed to mkdirs: 

at hudson.FilePath.mkdirs(FilePath.java:1170)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1200)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:421)
Archiving artifacts
ERROR: Build step failed with exception
Also:   hudson.remoting.Channel$CallSiteStackTrace: Remote call to H27
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1693)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:310)
at hudson.remoting.Channel.call(Channel.java:908)
at hudson.FilePath.act(FilePath.java:986)
at hudson.FilePath.act(FilePath.java:975)
at hudson.tasks.ArtifactArchiver.perform(ArtifactArchiver.java:243)
at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:690)
at hudson.model.Build$BuildExecution.post2(Build.java:186)
at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:635)
at hudson.model.Run.execute(Run.java:1749)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:421)
 does not exist.
at org.apache.tools.ant.types.AbstractFileSet.getDirectoryScanner(AbstractFileSet.java:483)
at org.apache.tools.ant.types.AbstractFileSet.getDirectoryScanner(AbstractFileSet.java:460)
at hudson.tasks.ArtifactArchiver$ListFiles.invoke(ArtifactArchiver.java:298)
at hudson.tasks.ArtifactArchiver$ListFiles.invoke(ArtifactArchiver.java:278)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2760)
at hudson.remoting.UserRequest.perform(UserRequest.java:207)
at hudson.remoting.UserRequest.perform(UserRequest.java:53)
at hudson.remoting.Request$2.run(Request.java:358)
at hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)

Build failed in Jenkins: Phoenix-4.x-HBase-1.3 #24

2018-01-31 Thread Apache Jenkins Server
See 

--
Started by an SCM change
Started by an SCM change
Started by an SCM change
Started by an SCM change
Started by an SCM change
Started by an SCM change
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H27 (ubuntu xenial) in workspace 

java.io.IOException: Failed to mkdirs: 

at hudson.FilePath.mkdirs(FilePath.java:1170)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1200)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:421)
Retrying after 10 seconds
java.io.IOException: Failed to mkdirs: 

at hudson.FilePath.mkdirs(FilePath.java:1170)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1200)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:421)
Retrying after 10 seconds
java.io.IOException: Failed to mkdirs: 

at hudson.FilePath.mkdirs(FilePath.java:1170)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1200)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:421)
Archiving artifacts
ERROR: Build step failed with exception
Also:   hudson.remoting.Channel$CallSiteStackTrace: Remote call to H27
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1693)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:310)
at hudson.remoting.Channel.call(Channel.java:908)
at hudson.FilePath.act(FilePath.java:986)
at hudson.FilePath.act(FilePath.java:975)
at hudson.tasks.ArtifactArchiver.perform(ArtifactArchiver.java:243)
at hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:81)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:744)
at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:690)
at hudson.model.Build$BuildExecution.post2(Build.java:186)
at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:635)
at hudson.model.Run.execute(Run.java:1749)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:421)
 does not exist.
at org.apache.tools.ant.types.AbstractFileSet.getDirectoryScanner(AbstractFileSet.java:483)
at org.apache.tools.ant.types.AbstractFileSet.getDirectoryScanner(AbstractFileSet.java:460)
at hudson.tasks.ArtifactArchiver$ListFiles.invoke(ArtifactArchiver.java:298)
at hudson.tasks.ArtifactArchiver$ListFiles.invoke(ArtifactArchiver.java:278)
at hudson.FilePath$FileCallableWrapper.call(FilePath.java:2760)
at hudson.remoting.UserRequest.perform(UserRequest.java:207)
at hudson.remoting.UserRequest.perform(UserRequest.java:53)
at hudson.remoting.Request$2.run(Request.java:358)
at hudson.remoting.InterceptingExecutorService$1.call(Intercepting

[3/3] phoenix git commit: PHOENIX-4130 Avoid server retries for mutable indexes (Addendum)

2018-01-31 Thread vincentpoon
PHOENIX-4130 Avoid server retries for mutable indexes (Addendum)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/9a2d4747
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/9a2d4747
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/9a2d4747

Branch: refs/heads/4.x-HBase-1.2
Commit: 9a2d4747d90704580241ae7177a4ff45a1e6a2d8
Parents: 0db17ed
Author: Vincent Poon 
Authored: Wed Jan 31 16:33:01 2018 -0800
Committer: Vincent Poon 
Committed: Wed Jan 31 16:35:12 2018 -0800

--
 .../end2end/index/PartialIndexRebuilderIT.java  |  3 +--
 .../index/exception/IndexWriteException.java| 21 +++-
 .../MultiIndexWriteFailureException.java| 14 +++--
 .../SingleIndexWriteFailureException.java   | 15 +++---
 .../index/PhoenixIndexFailurePolicy.java| 18 +
 5 files changed, 55 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/9a2d4747/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
index dd986aa..3961d32 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
@@ -1098,8 +1098,7 @@ public class PartialIndexRebuilderIT extends BaseUniqueNamesOwnClusterIT {
                 conn.createStatement().execute("DELETE FROM " + fullTableName);
                 fail();
             } catch (SQLException e) {
-                // Expected
-                assertEquals(SQLExceptionCode.INDEX_WRITE_FAILURE.getErrorCode(), e.getErrorCode());
+                // expected
             }
             assertTrue(TestUtil.checkIndexState(conn, fullIndexName, PIndexState.DISABLE, null));
         } finally {

http://git-wip-us.apache.org/repos/asf/phoenix/blob/9a2d4747/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/exception/IndexWriteException.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/exception/IndexWriteException.java b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/exception/IndexWriteException.java
index 531baa6..5dc6f60 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/exception/IndexWriteException.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/exception/IndexWriteException.java
@@ -22,6 +22,8 @@ import java.util.regex.Pattern;
 import org.apache.hadoop.hbase.HBaseIOException;
 import org.apache.phoenix.query.QueryServicesOptions;
 
+import com.google.common.base.Objects;
+
 /**
  * Generic {@link Exception} that an index write has failed
  */
@@ -33,7 +35,7 @@ public class IndexWriteException extends HBaseIOException {
      * server side.
      */
     private static final String DISABLE_INDEX_ON_FAILURE_MSG = "disableIndexOnFailure=";
-    private boolean disableIndexOnFailure;
+    private boolean disableIndexOnFailure = QueryServicesOptions.DEFAULT_INDEX_FAILURE_DISABLE_INDEX;
 
     public IndexWriteException() {
         super();
@@ -49,19 +51,15 @@ public class IndexWriteException extends HBaseIOException {
         super(message, cause);
     }
 
-    public IndexWriteException(String message, Throwable cause, boolean disableIndexOnFailure) {
-        super(prependDisableIndexMsg(message, disableIndexOnFailure), cause);
+    public IndexWriteException(Throwable cause, boolean disableIndexOnFailure) {
+        super(cause);
+        this.disableIndexOnFailure = disableIndexOnFailure;
     }
 
-    public IndexWriteException(String message, boolean disableIndexOnFailure) {
-        super(prependDisableIndexMsg(message, disableIndexOnFailure));
+    public IndexWriteException(boolean disableIndexOnFailure) {
         this.disableIndexOnFailure = disableIndexOnFailure;
     }
 
-    private static String prependDisableIndexMsg(String message, boolean disableIndexOnFailure) {
-        return DISABLE_INDEX_ON_FAILURE_MSG + disableIndexOnFailure + " " + message;
-    }
-
     public IndexWriteException(Throwable cause) {
         super(cause);
     }
@@ -81,4 +79,9 @@ public IndexWriteException(Throwable cause) {
     public boolean isDisableIndexOnFailure() {
         return disableIndexOnFailure;
     }
+
+    @Override
+    public String getMessage() {
+        return Objects.firstNonNull(super.getMessage(), "") + " " + DISABLE_INDEX_ON_FAILURE_MSG + disableIndexOnFailure + ",";
+    }
 }
\ No newline at end of file
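The addendum above changes how disableIndexOnFailure travels with the exception: instead of prepending it to the stored message at construction time, the flag is kept as a field and appended by getMessage(), so it still reaches the client as a parseable "disableIndexOnFailure=<bool>," suffix after server-side serialization. A minimal standalone sketch of the resulting message format (a simplified stand-in extending java.lang.Exception, not the actual Phoenix class, which extends HBaseIOException):

```java
// Hedged sketch: simplified stand-in for Phoenix's IndexWriteException showing
// the getMessage() behavior introduced by the PHOENIX-4130 addendum.
public class IndexWriteExceptionSketch extends Exception {
    private static final String DISABLE_INDEX_ON_FAILURE_MSG = "disableIndexOnFailure=";
    private final boolean disableIndexOnFailure;

    public IndexWriteExceptionSketch(Throwable cause, boolean disableIndexOnFailure) {
        super(cause);
        this.disableIndexOnFailure = disableIndexOnFailure;
    }

    @Override
    public String getMessage() {
        // Append the flag to whatever message the cause provided (may be null),
        // rather than baking it into the stored message at construction time.
        String base = super.getMessage() == null ? "" : super.getMessage();
        return base + " " + DISABLE_INDEX_ON_FAILURE_MSG + disableIndexOnFailure + ",";
    }

    public static void main(String[] args) {
        Exception e = new IndexWriteExceptionSketch(new RuntimeException("region down"), true);
        // prints: java.lang.RuntimeException: region down disableIndexOnFailure=true,
        System.out.println(e.getMessage());
    }
}
```

Because the flag is appended on every getMessage() call, the original message text is left untouched and the trailing marker stays at a fixed, parseable position.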


[1/3] phoenix git commit: PHOENIX-4130 Avoid server retries for mutable indexes

2018-01-31 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 afe21dc72 -> 9a2d4747d


http://git-wip-us.apache.org/repos/asf/phoenix/blob/0db17ed9/phoenix-core/src/main/java/org/apache/phoenix/util/ServerUtil.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/util/ServerUtil.java b/phoenix-core/src/main/java/org/apache/phoenix/util/ServerUtil.java
index cd23dc5..bc2b625 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/ServerUtil.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/ServerUtil.java
@@ -119,6 +119,25 @@ public class ServerUtil {
         }
         return new PhoenixIOException(t);
     }
+
+    /**
+     * Return the first SQLException in the exception chain, otherwise parse it.
+     * When we're receiving an exception locally, there's no need to string parse,
+     * as the SQLException will already be part of the chain.
+     * @param t
+     * @return the SQLException, or null if none found
+     */
+    public static SQLException parseLocalOrRemoteServerException(Throwable t) {
+        while (t.getCause() != null) {
+            if (t instanceof NotServingRegionException) {
+                return parseRemoteException(new StaleRegionBoundaryCacheException());
+            } else if (t instanceof SQLException) {
+                return (SQLException) t;
+            }
+            t = t.getCause();
+        }
+        return parseRemoteException(t);
+    }
 
     public static SQLException parseServerExceptionOrNull(Throwable t) {
         while (t.getCause() != null) {
@@ -196,7 +215,7 @@ public class ServerUtil {
         return parseTimestampFromRemoteException(t);
     }
 
-    private static long parseTimestampFromRemoteException(Throwable t) {
+    public static long parseTimestampFromRemoteException(Throwable t) {
         String message = t.getLocalizedMessage();
         if (message != null) {
             // If the message matches the standard pattern, recover the SQLException and throw it.
@@ -216,7 +235,7 @@ public class ServerUtil {
             msg = "";
         }
         if (t instanceof SQLException) {
-            msg = constructSQLErrorMessage((SQLException) t, msg);
+            msg = t.getMessage() + " " + msg;
         }
         msg += String.format(FORMAT_FOR_TIMESTAMP, timestamp);
         return new DoNotRetryIOException(msg, t);
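The new parseLocalOrRemoteServerException above walks the cause chain before falling back to string parsing, since a locally-received exception already carries the SQLException in its chain. The chain-walking idea in isolation (a hedged sketch using only the JDK; the NotServingRegionException special case and the remote-parse fallback of the real method are omitted here):

```java
import java.sql.SQLException;

// Hedged sketch of the exception-chain walk behind parseLocalOrRemoteServerException:
// return the first SQLException found in the cause chain, else null.
public class ExceptionChainSketch {
    public static SQLException firstSqlException(Throwable t) {
        while (t != null) {
            if (t instanceof SQLException) {
                return (SQLException) t;
            }
            // not a SQLException; descend one level in the cause chain
            t = t.getCause();
        }
        return null; // nothing in the chain; Phoenix would string-parse the remote message here
    }

    public static void main(String[] args) throws Exception {
        SQLException sqle = new SQLException("index write failed", "42000", 1121);
        Throwable wrapped = new RuntimeException(new java.io.IOException(sqle));
        System.out.println(firstSqlException(wrapped).getMessage());
    }
}
```

The point of the chain walk is that no remote-message pattern matching is needed on the local path, which is both cheaper and immune to message-format drift.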

http://git-wip-us.apache.org/repos/asf/phoenix/blob/0db17ed9/phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestIndexWriter.java
--
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestIndexWriter.java b/phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestIndexWriter.java
index b0e3780..918c411 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestIndexWriter.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestIndexWriter.java
@@ -105,6 +105,10 @@ public class TestIndexWriter {
         Configuration conf = new Configuration();
         Mockito.when(e.getConfiguration()).thenReturn(conf);
         Mockito.when(e.getSharedData()).thenReturn(new ConcurrentHashMap());
+        Region mockRegion = Mockito.mock(Region.class);
+        Mockito.when(e.getRegion()).thenReturn(mockRegion);
+        HTableDescriptor mockTableDesc = Mockito.mock(HTableDescriptor.class);
+        Mockito.when(mockRegion.getTableDesc()).thenReturn(mockTableDesc);
         ExecutorService exec = Executors.newFixedThreadPool(1);
         Map tables = new HashMap();
         FakeTableFactory factory = new FakeTableFactory(tables);
@@ -161,6 +165,10 @@ public class TestIndexWriter {
         Configuration conf = new Configuration();
         Mockito.when(e.getConfiguration()).thenReturn(conf);
         Mockito.when(e.getSharedData()).thenReturn(new ConcurrentHashMap());
+        Region mockRegion = Mockito.mock(Region.class);
+        Mockito.when(e.getRegion()).thenReturn(mockRegion);
+        HTableDescriptor mockTableDesc = Mockito.mock(HTableDescriptor.class);
+        Mockito.when(mockRegion.getTableDesc()).thenReturn(mockTableDesc);
         FakeTableFactory factory = new FakeTableFactory(tables);
 
         byte[] tableName = this.testName.getTableName();

http://git-wip-us.apache.org/repos/asf/phoenix/blob/0db17ed9/phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestParalleIndexWriter.java
--
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestParalleIndexWriter.java b/phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestParalleIndexWriter.java
index 3e2b47c..bfe1d0d 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestParalleIndexWriter.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestParalleIndexWriter.java
@@ -30,11 +30,13 @@ import org.ap

[2/3] phoenix git commit: PHOENIX-4130 Avoid server retries for mutable indexes

2018-01-31 Thread vincentpoon
PHOENIX-4130 Avoid server retries for mutable indexes


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/0db17ed9
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/0db17ed9
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/0db17ed9

Branch: refs/heads/4.x-HBase-1.2
Commit: 0db17ed9f7b27d69c296de7bdfe427bacfb5d9e0
Parents: afe21dc
Author: Vincent Poon 
Authored: Mon Jan 29 15:06:12 2018 -0800
Committer: Vincent Poon 
Committed: Wed Jan 31 16:34:57 2018 -0800

--
 .../end2end/index/MutableIndexFailureIT.java|  12 +-
 .../end2end/index/PartialIndexRebuilderIT.java  |  76 ++--
 .../coprocessor/MetaDataEndpointImpl.java   |  53 --
 .../phoenix/coprocessor/MetaDataProtocol.java   |   6 +-
 .../coprocessor/MetaDataRegionObserver.java |  19 +-
 .../UngroupedAggregateRegionObserver.java   |  82 ++--
 .../phoenix/exception/SQLExceptionCode.java |   1 +
 .../apache/phoenix/execute/MutationState.java   |  39 +++-
 .../org/apache/phoenix/hbase/index/Indexer.java |  10 -
 .../index/exception/IndexWriteException.java|  49 -
 .../MultiIndexWriteFailureException.java|  29 ++-
 .../SingleIndexWriteFailureException.java   |  23 ++-
 .../hbase/index/write/IndexWriterUtils.java |  14 +-
 .../write/ParallelWriterIndexCommitter.java |   5 +-
 .../TrackingParallelWriterIndexCommitter.java   |   5 +-
 .../index/PhoenixIndexFailurePolicy.java| 189 +--
 .../phoenix/jdbc/PhoenixDatabaseMetaData.java   |   1 +
 .../apache/phoenix/optimize/QueryOptimizer.java |  29 ++-
 .../org/apache/phoenix/query/QueryServices.java |   2 +
 .../phoenix/query/QueryServicesOptions.java |   1 +
 .../org/apache/phoenix/schema/PIndexState.java  |   7 +-
 .../org/apache/phoenix/util/KeyValueUtil.java   |  12 ++
 .../org/apache/phoenix/util/ServerUtil.java |  23 ++-
 .../hbase/index/write/TestIndexWriter.java  |   8 +
 .../index/write/TestParalleIndexWriter.java |   6 +
 .../write/TestParalleWriterIndexCommitter.java  |   6 +
 26 files changed, 591 insertions(+), 116 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/0db17ed9/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
index 0318925..c2e0cb6 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
@@ -29,7 +29,6 @@ import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.Arrays;
-import java.util.Collections;
 import java.util.List;
 import java.util.Map;
 import java.util.Properties;
@@ -130,7 +129,6 @@ public class MutableIndexFailureIT extends BaseTest {
     public static void doSetup() throws Exception {
         Map serverProps = Maps.newHashMapWithExpectedSize(10);
         serverProps.put("hbase.coprocessor.region.classes", FailingRegionObserver.class.getName());
-        serverProps.put(IndexWriterUtils.INDEX_WRITER_RPC_RETRIES_NUMBER, "2");
         serverProps.put(HConstants.HBASE_RPC_TIMEOUT_KEY, "1");
         serverProps.put(IndexWriterUtils.INDEX_WRITER_RPC_PAUSE, "5000");
         serverProps.put("data.tx.snapshot.dir", "/tmp");
@@ -144,7 +142,8 @@ public class MutableIndexFailureIT extends BaseTest {
          * because we want to control it's execution ourselves
          */
         serverProps.put(QueryServices.INDEX_REBUILD_TASK_INITIAL_DELAY, Long.toString(Long.MAX_VALUE));
-        Map clientProps = Collections.singletonMap(QueryServices.TRANSACTIONS_ENABLED, Boolean.TRUE.toString());
+        Map clientProps = Maps.newHashMapWithExpectedSize(2);
+        clientProps.put(HConstants.HBASE_CLIENT_RETRIES_NUMBER, "2");
         NUM_SLAVES_BASE = 4;
         setUpTestDriver(new ReadOnlyProps(serverProps.entrySet().iterator()), new ReadOnlyProps(clientProps.entrySet().iterator()));
         indexRebuildTaskRegionEnvironment =
@@ -161,7 +160,8 @@ public class MutableIndexFailureIT extends BaseTest {
     @Parameters(name = "MutableIndexFailureIT_transactional={0},localIndex={1},isNamespaceMapped={2},disableIndexOnWriteFailure={3},failRebuildTask={4},throwIndexWriteFailure={5}")
     // name is used by failsafe as file name in reports
     public static List data() {
         return Arrays.asList(new Object[][] {
-            { false, false, false, true, false, false},
+            // note - can't disableIndexOnWriteFailure without throwIndexWriteFailure, PH
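The doSetup change in the diff above moves retry control from the server-side index writer (INDEX_WRITER_RPC_RETRIES_NUMBER) to the HBase client, which is the mechanism behind "avoid server retries". A sketch of the client-side cap (assumption: plain java.util.Properties stands in for an HBase Configuration so the snippet is self-contained; "hbase.client.retries.number" is the literal key behind HConstants.HBASE_CLIENT_RETRIES_NUMBER):

```java
import java.util.Properties;

// Hedged sketch: client-side retry cap replacing the server-side index-writer
// retries, mirroring the clientProps change in MutableIndexFailureIT.doSetup().
public class RetryConfigSketch {
    public static Properties clientProps() {
        Properties clientProps = new Properties();
        // Cap HBase client retries so a failed index write surfaces to the
        // caller quickly instead of being retried on the region server.
        clientProps.setProperty("hbase.client.retries.number", "2");
        return clientProps;
    }
}
```

In the real test these properties are handed to setUpTestDriver via ReadOnlyProps; only the property key and value carry over from the diff.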

phoenix git commit: PHOENIX-4130 Avoid server retries for mutable indexes (Addendum)

2018-01-31 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.3 63582459d -> 88593be8d


PHOENIX-4130 Avoid server retries for mutable indexes (Addendum)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/88593be8
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/88593be8
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/88593be8

Branch: refs/heads/4.x-HBase-1.3
Commit: 88593be8dd8db69664d5087f04ec99c33a956c28
Parents: 6358245
Author: Vincent Poon 
Authored: Wed Jan 31 16:33:01 2018 -0800
Committer: Vincent Poon 
Committed: Wed Jan 31 16:34:09 2018 -0800

--
 .../end2end/index/PartialIndexRebuilderIT.java  |  3 +--
 .../index/exception/IndexWriteException.java| 21 +++-
 .../MultiIndexWriteFailureException.java| 14 +++--
 .../SingleIndexWriteFailureException.java   | 15 +++---
 .../index/PhoenixIndexFailurePolicy.java| 18 +
 5 files changed, 55 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/88593be8/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
index dd986aa..3961d32 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
@@ -1098,8 +1098,7 @@ public class PartialIndexRebuilderIT extends BaseUniqueNamesOwnClusterIT {
                 conn.createStatement().execute("DELETE FROM " + fullTableName);
                 fail();
             } catch (SQLException e) {
-                // Expected
-                assertEquals(SQLExceptionCode.INDEX_WRITE_FAILURE.getErrorCode(), e.getErrorCode());
+                // expected
             }
             assertTrue(TestUtil.checkIndexState(conn, fullIndexName, PIndexState.DISABLE, null));
         } finally {

http://git-wip-us.apache.org/repos/asf/phoenix/blob/88593be8/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/exception/IndexWriteException.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/exception/IndexWriteException.java b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/exception/IndexWriteException.java
index 531baa6..5dc6f60 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/exception/IndexWriteException.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/exception/IndexWriteException.java
@@ -22,6 +22,8 @@ import java.util.regex.Pattern;
 import org.apache.hadoop.hbase.HBaseIOException;
 import org.apache.phoenix.query.QueryServicesOptions;
 
+import com.google.common.base.Objects;
+
 /**
  * Generic {@link Exception} that an index write has failed
  */
@@ -33,7 +35,7 @@ public class IndexWriteException extends HBaseIOException {
      * server side.
      */
     private static final String DISABLE_INDEX_ON_FAILURE_MSG = "disableIndexOnFailure=";
-    private boolean disableIndexOnFailure;
+    private boolean disableIndexOnFailure = QueryServicesOptions.DEFAULT_INDEX_FAILURE_DISABLE_INDEX;
 
     public IndexWriteException() {
         super();
@@ -49,19 +51,15 @@ public class IndexWriteException extends HBaseIOException {
         super(message, cause);
     }
 
-    public IndexWriteException(String message, Throwable cause, boolean disableIndexOnFailure) {
-        super(prependDisableIndexMsg(message, disableIndexOnFailure), cause);
+    public IndexWriteException(Throwable cause, boolean disableIndexOnFailure) {
+        super(cause);
+        this.disableIndexOnFailure = disableIndexOnFailure;
     }
 
-    public IndexWriteException(String message, boolean disableIndexOnFailure) {
-        super(prependDisableIndexMsg(message, disableIndexOnFailure));
+    public IndexWriteException(boolean disableIndexOnFailure) {
         this.disableIndexOnFailure = disableIndexOnFailure;
     }
 
-    private static String prependDisableIndexMsg(String message, boolean disableIndexOnFailure) {
-        return DISABLE_INDEX_ON_FAILURE_MSG + disableIndexOnFailure + " " + message;
-    }
-
     public IndexWriteException(Throwable cause) {
         super(cause);
     }
@@ -81,4 +79,9 @@ public IndexWriteException(Throwable cause) {
     public boolean isDisableIndexOnFailure() {
         return disableIndexOnFailure;
     }
+
+    @Override
+    public String getMessage() {
+        return Objects.firstNonNull(super.getMessage(), "") + " " + DISABLE_IN

phoenix git commit: PHOENIX-4130 Avoid server retries for mutable indexes (Addendum)

2018-01-31 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/master 2cf96db6e -> 0ef77b18c


PHOENIX-4130 Avoid server retries for mutable indexes (Addendum)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/0ef77b18
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/0ef77b18
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/0ef77b18

Branch: refs/heads/master
Commit: 0ef77b18cc200afe097ab803338af33997e3935d
Parents: 2cf96db
Author: Vincent Poon 
Authored: Wed Jan 31 16:33:01 2018 -0800
Committer: Vincent Poon 
Committed: Wed Jan 31 16:33:12 2018 -0800

--
 .../end2end/index/PartialIndexRebuilderIT.java  |  3 +--
 .../index/exception/IndexWriteException.java| 21 +++-
 .../MultiIndexWriteFailureException.java| 14 +++--
 .../SingleIndexWriteFailureException.java   | 15 +++---
 .../index/PhoenixIndexFailurePolicy.java| 18 +
 5 files changed, 55 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/0ef77b18/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
index dd986aa..3961d32 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
@@ -1098,8 +1098,7 @@ public class PartialIndexRebuilderIT extends BaseUniqueNamesOwnClusterIT {
                 conn.createStatement().execute("DELETE FROM " + fullTableName);
                 fail();
             } catch (SQLException e) {
-                // Expected
-                assertEquals(SQLExceptionCode.INDEX_WRITE_FAILURE.getErrorCode(), e.getErrorCode());
+                // expected
             }
             assertTrue(TestUtil.checkIndexState(conn, fullIndexName, PIndexState.DISABLE, null));
         } finally {

http://git-wip-us.apache.org/repos/asf/phoenix/blob/0ef77b18/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/exception/IndexWriteException.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/exception/IndexWriteException.java b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/exception/IndexWriteException.java
index 531baa6..5dc6f60 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/exception/IndexWriteException.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/exception/IndexWriteException.java
@@ -22,6 +22,8 @@ import java.util.regex.Pattern;
 import org.apache.hadoop.hbase.HBaseIOException;
 import org.apache.phoenix.query.QueryServicesOptions;
 
+import com.google.common.base.Objects;
+
 /**
  * Generic {@link Exception} that an index write has failed
  */
@@ -33,7 +35,7 @@ public class IndexWriteException extends HBaseIOException {
      * server side.
      */
     private static final String DISABLE_INDEX_ON_FAILURE_MSG = "disableIndexOnFailure=";
-    private boolean disableIndexOnFailure;
+    private boolean disableIndexOnFailure = QueryServicesOptions.DEFAULT_INDEX_FAILURE_DISABLE_INDEX;
 
     public IndexWriteException() {
         super();
@@ -49,19 +51,15 @@ public class IndexWriteException extends HBaseIOException {
         super(message, cause);
     }
 
-    public IndexWriteException(String message, Throwable cause, boolean disableIndexOnFailure) {
-        super(prependDisableIndexMsg(message, disableIndexOnFailure), cause);
+    public IndexWriteException(Throwable cause, boolean disableIndexOnFailure) {
+        super(cause);
+        this.disableIndexOnFailure = disableIndexOnFailure;
     }
 
-    public IndexWriteException(String message, boolean disableIndexOnFailure) {
-        super(prependDisableIndexMsg(message, disableIndexOnFailure));
+    public IndexWriteException(boolean disableIndexOnFailure) {
         this.disableIndexOnFailure = disableIndexOnFailure;
     }
 
-    private static String prependDisableIndexMsg(String message, boolean disableIndexOnFailure) {
-        return DISABLE_INDEX_ON_FAILURE_MSG + disableIndexOnFailure + " " + message;
-    }
-
     public IndexWriteException(Throwable cause) {
         super(cause);
     }
@@ -81,4 +79,9 @@ public IndexWriteException(Throwable cause) {
     public boolean isDisableIndexOnFailure() {
         return disableIndexOnFailure;
     }
+
+    @Override
+    public String getMessage() {
+        return Objects.firstNonNull(super.getMessage(), "") + " " + DISABLE_INDEX_ON_FAILURE

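The addendum above stops prepending the flag to the constructor message and instead appends it in a getMessage() override, so the disableIndexOnFailure marker still travels inside the exception's message string (the field keeps a sane default locally, and the marker can be parsed back out of a remotely rethrown exception, where typically only the message survives). A minimal plain-Java sketch of this marker-in-message pattern; FlaggedWriteException, MARKER, and parseDisableIndexOnFailure are hypothetical stand-ins, not the Phoenix class (which uses Guava's Objects.firstNonNull):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical stand-in for IndexWriteException: the boolean flag is carried
// both as a field (for local use) and inside getMessage() (so it can be
// recovered from an exception whose only surviving payload is its message).
public class FlaggedWriteException extends Exception {
    static final String MARKER = "disableIndexOnFailure=";
    private static final Pattern MARKER_PATTERN = Pattern.compile(MARKER + "(true|false)");

    private final boolean disableIndexOnFailure;

    public FlaggedWriteException(Throwable cause, boolean disableIndexOnFailure) {
        super(cause);
        this.disableIndexOnFailure = disableIndexOnFailure;
    }

    public boolean isDisableIndexOnFailure() {
        return disableIndexOnFailure;
    }

    @Override
    public String getMessage() {
        // Append the marker to whatever message the cause produced.
        String base = super.getMessage() == null ? "" : super.getMessage();
        return base + " " + MARKER + disableIndexOnFailure;
    }

    // Receiving side: recover the flag from an exception's message string.
    public static boolean parseDisableIndexOnFailure(String message) {
        Matcher m = MARKER_PATTERN.matcher(message);
        return m.find() && Boolean.parseBoolean(m.group(1));
    }

    public static void main(String[] args) {
        FlaggedWriteException e = new FlaggedWriteException(new RuntimeException("boom"), true);
        System.out.println(e.getMessage());
        System.out.println(parseDisableIndexOnFailure(e.getMessage())); // true
    }
}
```

Carrying the flag via getMessage() rather than via constructor arguments is what lets the addendum delete the prependDisableIndexMsg helper entirely.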
[01/35] phoenix git commit: PHOENIX-4437 Make QueryPlan.getEstimatedBytesToScan() independent of getExplainPlan() and pull optimize() out of getExplainPlan()

2018-01-31 Thread pboado
Repository: phoenix
Updated Branches:
  refs/heads/4.x-cdh5.11.2 519cca954 -> 9994059a0


PHOENIX-4437 Make QueryPlan.getEstimatedBytesToScan() independent of 
getExplainPlan() and pull optimize() out of getExplainPlan()


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/1229b1eb
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/1229b1eb
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/1229b1eb

Branch: refs/heads/4.x-cdh5.11.2
Commit: 1229b1eb5a74a00d6edc8191bcb075156e8fd4ce
Parents: 4412856
Author: maryannxue 
Authored: Thu Dec 21 18:31:04 2017 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:48 2018 +

--
 .../end2end/ExplainPlanWithStatsEnabledIT.java  |  2 +-
 .../apache/phoenix/execute/BaseQueryPlan.java   | 45 ++
 .../apache/phoenix/execute/HashJoinPlan.java| 59 +-
 .../phoenix/execute/SortMergeJoinPlan.java  | 63 ++--
 .../org/apache/phoenix/execute/UnionPlan.java   | 53 
 .../apache/phoenix/jdbc/PhoenixStatement.java   |  9 ++-
 6 files changed, 119 insertions(+), 112 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/1229b1eb/phoenix-core/src/it/java/org/apache/phoenix/end2end/ExplainPlanWithStatsEnabledIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ExplainPlanWithStatsEnabledIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ExplainPlanWithStatsEnabledIT.java
index 49efa97..f13510b 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ExplainPlanWithStatsEnabledIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ExplainPlanWithStatsEnabledIT.java
@@ -298,7 +298,7 @@ public class ExplainPlanWithStatsEnabledIT extends ParallelStatsEnabledIT {
         try (Connection conn = DriverManager.getConnection(getUrl())) {
             conn.setAutoCommit(false);
             Estimate info = getByteRowEstimates(conn, sql, binds);
-            assertEquals((Long) 200l, info.estimatedBytes);
+            assertEquals((Long) 176l, info.estimatedBytes);
             assertEquals((Long) 2l, info.estimatedRows);
             assertTrue(info.estimateInfoTs > 0);
         }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/1229b1eb/phoenix-core/src/main/java/org/apache/phoenix/execute/BaseQueryPlan.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/execute/BaseQueryPlan.java b/phoenix-core/src/main/java/org/apache/phoenix/execute/BaseQueryPlan.java
index 31f67b7..380037f 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/execute/BaseQueryPlan.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/execute/BaseQueryPlan.java
@@ -117,7 +117,7 @@ public abstract class BaseQueryPlan implements QueryPlan {
     protected Long estimatedRows;
     protected Long estimatedSize;
     protected Long estimateInfoTimestamp;
-    private boolean explainPlanCalled;
+    private boolean getEstimatesCalled;
 
 
     protected BaseQueryPlan(
@@ -498,32 +498,17 @@ public abstract class BaseQueryPlan implements QueryPlan {
 
     @Override
     public ExplainPlan getExplainPlan() throws SQLException {
-        explainPlanCalled = true;
         if (context.getScanRanges() == ScanRanges.NOTHING) {
             return new ExplainPlan(Collections.singletonList("DEGENERATE SCAN OVER " + getTableRef().getTable().getName().getString()));
         }
 
-        // If cost-based optimizer is enabled, we need to initialize a dummy iterator to
-        // get the stats for computing costs.
-        boolean costBased =
-                context.getConnection().getQueryServices().getConfiguration().getBoolean(
-                        QueryServices.COST_BASED_OPTIMIZER_ENABLED, QueryServicesOptions.DEFAULT_COST_BASED_OPTIMIZER_ENABLED);
-        if (costBased) {
-            ResultIterator iterator = iterator();
-            iterator.close();
-        }
-        // Optimize here when getting explain plan, as queries don't get optimized until after compilation
-        QueryPlan plan = context.getConnection().getQueryServices().getOptimizer().optimize(context.getStatement(), this);
-        ExplainPlan exp = plan instanceof BaseQueryPlan ? new ExplainPlan(getPlanSteps(plan.iterator())) : plan.getExplainPlan();
-        if (!costBased) { // do not override estimates if they are used for cost calculation.
-            this.estimatedRows = plan.getEstimatedRowsToScan();
-            this.estimatedSize = plan.getEstimatedBytesToScan();
-            this.estimateInfoTimestamp = plan.getEstimateInfoTimestamp();
-        }
-        return exp;
+        ResultIterator iterator = iterator();

[17/35] phoenix git commit: PHOENIX-4288 Indexes not used when ordering by primary key

2018-01-31 Thread pboado
PHOENIX-4288 Indexes not used when ordering by primary key


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/d790c707
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/d790c707
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/d790c707

Branch: refs/heads/4.x-cdh5.11.2
Commit: d790c707550647728afd574e11787503fd0c231a
Parents: f94f4eb
Author: maryannxue 
Authored: Sun Nov 5 02:37:55 2017 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:48 2018 +

--
 .../phoenix/end2end/CostBasedDecisionIT.java| 466 +++
 .../apache/phoenix/end2end/MutationStateIT.java |  17 +
 .../phoenix/compile/ListJarsQueryPlan.java  |   6 +
 .../org/apache/phoenix/compile/QueryPlan.java   |   5 +-
 .../apache/phoenix/compile/TraceQueryPlan.java  |   6 +
 .../apache/phoenix/execute/AggregatePlan.java   |  30 +-
 .../apache/phoenix/execute/BaseQueryPlan.java   |  21 +-
 .../phoenix/execute/ClientAggregatePlan.java|  28 ++
 .../apache/phoenix/execute/ClientScanPlan.java  |  25 +
 .../apache/phoenix/execute/CorrelatePlan.java   |  25 +
 .../phoenix/execute/DelegateQueryPlan.java  |   6 +
 .../apache/phoenix/execute/HashJoinPlan.java|  29 ++
 .../execute/LiteralResultIterationPlan.java |   6 +
 .../org/apache/phoenix/execute/ScanPlan.java|  25 +
 .../phoenix/execute/SortMergeJoinPlan.java  |  18 +
 .../org/apache/phoenix/execute/UnionPlan.java   |  10 +
 .../apache/phoenix/jdbc/PhoenixStatement.java   |   6 +
 .../java/org/apache/phoenix/optimize/Cost.java  | 123 +
 .../apache/phoenix/optimize/QueryOptimizer.java |  30 +-
 .../org/apache/phoenix/query/QueryServices.java |   3 +
 .../phoenix/query/QueryServicesOptions.java |   4 +
 .../java/org/apache/phoenix/util/CostUtil.java  |  90 
 .../query/ParallelIteratorsSplitTest.java   |   6 +
 23 files changed, 971 insertions(+), 14 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/d790c707/phoenix-core/src/it/java/org/apache/phoenix/end2end/CostBasedDecisionIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CostBasedDecisionIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CostBasedDecisionIT.java
new file mode 100644
index 000..a3584ce
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CostBasedDecisionIT.java
@@ -0,0 +1,466 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.util.Map;
+import java.util.Properties;
+
+import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.apache.phoenix.util.QueryUtil;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import com.google.common.collect.Maps;
+
+public class CostBasedDecisionIT extends BaseUniqueNamesOwnClusterIT {
+
+    @BeforeClass
+    public static void doSetup() throws Exception {
+        Map props = Maps.newHashMapWithExpectedSize(1);
+        props.put(QueryServices.STATS_GUIDEPOST_WIDTH_BYTES_ATTRIB, Long.toString(20));
+        props.put(QueryServices.STATS_UPDATE_FREQ_MS_ATTRIB, Long.toString(5));
+        props.put(QueryServices.USE_STATS_FOR_PARALLELIZATION, Boolean.toString(true));
+        props.put(QueryServices.COST_BASED_OPTIMIZER_ENABLED, Boolean.toString(true));
+        setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
+    }
+
+    @Test
+    public void testCostOverridesStaticPlanOrdering1() throws Exception {
+        Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+        Connection conn = DriverManager.getConnection(getUrl(), props);
+        conn.setAutoCom

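CostBasedDecisionIT above enables COST_BASED_OPTIMIZER_ENABLED and then checks that estimated cost can override the optimizer's static plan ordering. The decision it exercises can be sketched generically; Plan and choose below are illustrative stand-ins, not Phoenix's QueryOptimizer/Cost API:

```java
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class CostChooser {
    // A candidate plan with a scalar cost estimate.
    record Plan(String name, double cost) {}

    // With cost-based choice off, keep the first (statically ordered)
    // candidate; with it on, the cheapest candidate wins.
    static Plan choose(List<Plan> candidates, boolean costBased) {
        if (!costBased) {
            return candidates.get(0);
        }
        return Collections.min(candidates, Comparator.comparingDouble(Plan::cost));
    }

    public static void main(String[] args) {
        List<Plan> plans = List.of(new Plan("full-scan", 100.0), new Plan("index-scan", 5.0));
        System.out.println(choose(plans, false).name()); // full-scan
        System.out.println(choose(plans, true).name());  // index-scan
    }
}
```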
[20/35] phoenix git commit: PHOENIX-4414 Exception while using database metadata commands on tenant specific connection

2018-01-31 Thread pboado
PHOENIX-4414 Exception while using database metadata commands on tenant 
specific connection


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/ffee8c0e
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/ffee8c0e
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/ffee8c0e

Branch: refs/heads/4.x-cdh5.11.2
Commit: ffee8c0e3359105da7cbfcd93e5e6291005a558b
Parents: 17d0329
Author: Mujtaba 
Authored: Tue Jan 9 22:50:21 2018 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:48 2018 +

--
 .../end2end/QueryDatabaseMetaDataIT.java| 27 
 .../phoenix/jdbc/PhoenixDatabaseMetaData.java   |  2 +-
 2 files changed, 28 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/ffee8c0e/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryDatabaseMetaDataIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryDatabaseMetaDataIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryDatabaseMetaDataIT.java
index bb54fd4..ea83b41 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryDatabaseMetaDataIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/QueryDatabaseMetaDataIT.java
@@ -66,6 +66,7 @@ import org.apache.phoenix.schema.types.PChar;
 import org.apache.phoenix.schema.types.PDecimal;
 import org.apache.phoenix.schema.types.PInteger;
 import org.apache.phoenix.schema.types.PLong;
+import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.StringUtil;
@@ -106,6 +107,32 @@ public class QueryDatabaseMetaDataIT extends ParallelStatsDisabledIT {
     }
 
     @Test
+    public void testMetadataTenantSpecific() throws SQLException {
+        // create a multi-tenant table
+        String tableName = generateUniqueName();
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            String baseTableDdl = "CREATE TABLE %s (K1 VARCHAR NOT NULL, K2 VARCHAR NOT NULL, V VARCHAR CONSTRAINT PK PRIMARY KEY(K1, K2)) MULTI_TENANT=true";
+            conn.createStatement().execute(String.format(baseTableDdl, tableName));
+        }
+
+        // create a tenant-specific view and execute the metadata call with a tenant-specific connection
+        String tenantId = generateUniqueName();
+        Properties tenantProps = new Properties();
+        tenantProps.setProperty(PhoenixRuntime.TENANT_ID_ATTRIB, tenantId);
+        try (Connection tenantConn = DriverManager.getConnection(getUrl(), tenantProps)) {
+            String viewName = generateUniqueName();
+            String viewDdl = "CREATE VIEW %s AS SELECT * FROM %s";
+            tenantConn.createStatement().execute(String.format(viewDdl, viewName, tableName));
+            DatabaseMetaData dbmd = tenantConn.getMetaData();
+            ResultSet rs = dbmd.getTables(tenantId, "", viewName, null);
+            assertTrue(rs.next());
+            assertEquals(rs.getString("TABLE_NAME"), viewName);
+            assertEquals(PTableType.VIEW.toString(), rs.getString("TABLE_TYPE"));
+            assertFalse(rs.next());
+        }
+    }
+
+    @Test
     public void testTableMetadataScan() throws SQLException {
         String tableAName = generateUniqueName() + "TABLE";
         String tableASchema = "";

http://git-wip-us.apache.org/repos/asf/phoenix/blob/ffee8c0e/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
index c34d20d..23330d8 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
@@ -445,7 +445,7 @@ public class PhoenixDatabaseMetaData implements DatabaseMetaData {
             appendConjunction(buf);
             buf.append(" TENANT_ID LIKE '" + StringUtil.escapeStringConstant(tenantIdPattern) + "' ");
             if (tenantId != null) {
-                buf.append(" and TENANT_ID + = '" + StringUtil.escapeStringConstant(tenantId.getString()) + "' ");
+                buf.append(" and TENANT_ID = '" + StringUtil.escapeStringConstant(tenantId.getString()) + "' ");
             }
         }
     }

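The one-character fix above removes a stray "+" that made the generated SQL read "TENANT_ID + = '...'", which is what broke metadata calls on tenant-specific connections. A hedged sketch of the corrected string building; TenantFilter and its escape helper are illustrative stand-ins for Phoenix's StringUtil.escapeStringConstant:

```java
// Illustrative sketch of building the tenant predicate that
// PhoenixDatabaseMetaData appends to its internal metadata query.
public class TenantFilter {
    // Stand-in for StringUtil.escapeStringConstant: double embedded quotes.
    static String escapeStringConstant(String s) {
        return s.replace("'", "''");
    }

    static String tenantPredicate(String tenantId) {
        StringBuilder buf = new StringBuilder();
        // Buggy version emitted: " and TENANT_ID + = 'x' " -- invalid SQL.
        buf.append(" and TENANT_ID = '" + escapeStringConstant(tenantId) + "' ");
        return buf.toString();
    }

    public static void main(String[] args) {
        System.out.println(tenantPredicate("acme"));    //  and TENANT_ID = 'acme' 
        System.out.println(tenantPredicate("o'brien")); //  and TENANT_ID = 'o''brien' 
    }
}
```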


[24/35] phoenix git commit: PHOENIX-4198 Remove the need for users to have access to the Phoenix SYSTEM tables to create tables

2018-01-31 Thread pboado
PHOENIX-4198 Remove the need for users to have access to the Phoenix SYSTEM 
tables to create tables


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/8468f802
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/8468f802
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/8468f802

Branch: refs/heads/4.x-cdh5.11.2
Commit: 8468f802a20b8a9082d7d1d9a9dd454cbbe2bc20
Parents: 7296e51
Author: Ankit Singhal 
Authored: Thu Nov 9 02:37:55 2017 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:48 2018 +

--
 .../phoenix/end2end/TableDDLPermissionsIT.java  | 692 +++
 .../org/apache/hadoop/hbase/ipc/RpcUtil.java|  32 +
 .../BaseMetaDataEndpointObserver.java   | 111 +++
 .../coprocessor/MetaDataEndpointImpl.java   | 339 +++--
 .../coprocessor/MetaDataEndpointObserver.java   |  68 ++
 .../coprocessor/MetaDataRegionObserver.java |  17 +-
 .../coprocessor/PhoenixAccessController.java| 628 +
 .../PhoenixMetaDataCoprocessorHost.java | 236 +++
 .../index/PhoenixIndexFailurePolicy.java| 109 +--
 .../query/ConnectionQueryServicesImpl.java  |  15 +-
 .../org/apache/phoenix/query/QueryServices.java |   4 +
 .../phoenix/query/QueryServicesOptions.java |  14 +-
 .../phoenix/schema/stats/StatisticsWriter.java  |  42 +-
 .../org/apache/phoenix/util/MetaDataUtil.java   |  18 +
 .../org/apache/phoenix/util/SchemaUtil.java |  12 +
 15 files changed, 2196 insertions(+), 141 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/8468f802/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableDDLPermissionsIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableDDLPermissionsIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableDDLPermissionsIT.java
new file mode 100644
index 000..971383b
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableDDLPermissionsIT.java
@@ -0,0 +1,692 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+import java.io.IOException;
+import java.lang.reflect.UndeclaredThrowableException;
+import java.security.PrivilegedExceptionAction;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.Set;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.AuthUtil;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.NamespaceDescriptor;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.security.AccessDeniedException;
+import org.apache.hadoop.hbase.security.access.AccessControlClient;
+import org.apache.hadoop.hbase.security.access.Permission.Action;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.phoenix.exception.PhoenixIOException;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.util.MetaDataUtil;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.After;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.junit.runners.Parameterized.Parameters;
+
+import com.google.common.collect.Maps;
+
+/**
+ * Test that verifies a user can read Phoeni

[14/35] phoenix git commit: PHOENIX-4510 Fix performance.py issue in not finding tests jar (Artem Ervits)

2018-01-31 Thread pboado
PHOENIX-4510 Fix performance.py issue in not finding tests jar (Artem Ervits)

Signed-off-by: Josh Elser 


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/3cc1ad19
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/3cc1ad19
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/3cc1ad19

Branch: refs/heads/4.x-cdh5.11.2
Commit: 3cc1ad19aae893465432a8cbfe26f8022e7e2c32
Parents: d8e5f95
Author: Josh Elser 
Authored: Wed Jan 3 17:22:28 2018 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:48 2018 +

--
 bin/phoenix_utils.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/3cc1ad19/bin/phoenix_utils.py
--
diff --git a/bin/phoenix_utils.py b/bin/phoenix_utils.py
index 580a78b..b521277 100755
--- a/bin/phoenix_utils.py
+++ b/bin/phoenix_utils.py
@@ -151,7 +151,7 @@ def setPath():
     global testjar
     testjar = find(PHOENIX_TESTS_JAR_PATTERN, phoenix_test_jar_path)
     if testjar == "":
-        testjar = findFileInPathWithoutRecursion(PHOENIX_TESTS_JAR_PATTERN, os.path.join(current_dir, ".."))
+        testjar = findFileInPathWithoutRecursion(PHOENIX_TESTS_JAR_PATTERN, os.path.join(current_dir, "..", 'lib'))
     if testjar == "":
         testjar = find(PHOENIX_TESTS_JAR_PATTERN, phoenix_class_path)



[02/35] phoenix git commit: PHOENIX-4489 HBase Connection leak in Phoenix MR Jobs

2018-01-31 Thread pboado
PHOENIX-4489 HBase Connection leak in Phoenix MR Jobs


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/44128569
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/44128569
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/44128569

Branch: refs/heads/4.x-cdh5.11.2
Commit: 4412856981684e3220f630911f9b8b0c6be8f8c7
Parents: bf65518
Author: Karan Mehta 
Authored: Wed Jan 24 00:07:24 2018 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:48 2018 +

--
 .../org/apache/phoenix/mapreduce/PhoenixInputFormat.java  | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/44128569/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
index 2871809..9f16cc1 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
@@ -30,7 +30,6 @@ import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HRegionLocation;
 import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
 import org.apache.hadoop.hbase.client.RegionLocator;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.util.Bytes;
@@ -48,6 +47,7 @@ import org.apache.phoenix.iterate.MapReduceParallelScanGrouper;
 import org.apache.phoenix.jdbc.PhoenixStatement;
 import org.apache.phoenix.mapreduce.util.ConnectionUtil;
 import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
+import org.apache.phoenix.query.HBaseFactoryProvider;
 import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.util.PhoenixRuntime;
@@ -95,13 +95,13 @@ public class PhoenixInputFormat 
extends InputFormat psplits = 
Lists.newArrayListWithExpectedSize(splits.size());
 for (List scans : qplan.getScans()) {
 // Get the region location
@@ -131,8 +131,7 @@ public class PhoenixInputFormat 
extends InputFormat 
extends InputFormat
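The remaining hunks of this diff were garbled by the mail archiver (the generic type parameters were eaten), but the import changes show the shape of the fix: the HBase Connection opened while generating input splits is now closed reliably instead of leaking. The general pattern is try-with-resources; FakeConnection and generateSplits below are illustrative, not Phoenix code:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class LeakDemo {
    static final AtomicInteger OPEN = new AtomicInteger();

    // Stand-in for an HBase Connection; the real one is AutoCloseable too.
    static class FakeConnection implements AutoCloseable {
        FakeConnection() { OPEN.incrementAndGet(); }
        @Override public void close() { OPEN.decrementAndGet(); }
    }

    // Stand-in for split generation: the connection is closed on every
    // path, including when region lookup throws mid-way.
    static void generateSplits(boolean fail) {
        try (FakeConnection conn = new FakeConnection()) {
            if (fail) {
                throw new RuntimeException("region lookup failed");
            }
        }
    }

    public static void main(String[] args) {
        try { generateSplits(true); } catch (RuntimeException ignored) { }
        generateSplits(false);
        System.out.println(OPEN.get()); // 0 -> no leaked connections
    }
}
```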

[15/35] phoenix git commit: PHOENIX-672 Add GRANT and REVOKE commands using HBase AccessController

2018-01-31 Thread pboado
http://git-wip-us.apache.org/repos/asf/phoenix/blob/f94f4eb1/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableDDLPermissionsIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableDDLPermissionsIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableDDLPermissionsIT.java
index 971383b..8666bb8 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableDDLPermissionsIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableDDLPermissionsIT.java
@@ -16,144 +16,53 @@
  */
 package org.apache.phoenix.end2end;
 
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertNotNull;
-import static org.junit.Assert.assertTrue;
-import static org.junit.Assert.fail;
-
-import java.io.IOException;
-import java.lang.reflect.UndeclaredThrowableException;
 import java.security.PrivilegedExceptionAction;
 import java.sql.Connection;
-import java.sql.DriverManager;
-import java.sql.PreparedStatement;
-import java.sql.ResultSet;
-import java.sql.SQLException;
-import java.sql.Statement;
-import java.util.Arrays;
-import java.util.Collection;
 import java.util.Collections;
-import java.util.HashSet;
-import java.util.List;
-import java.util.Map;
-import java.util.Properties;
-import java.util.Set;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.AuthUtil;
-import org.apache.hadoop.hbase.HBaseTestingUtility;
-import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.NamespaceDescriptor;
-import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.security.AccessDeniedException;
 import org.apache.hadoop.hbase.security.access.AccessControlClient;
 import org.apache.hadoop.hbase.security.access.Permission.Action;
-import org.apache.hadoop.hbase.util.Bytes;
-import org.apache.hadoop.security.UserGroupInformation;
-import org.apache.phoenix.exception.PhoenixIOException;
-import org.apache.phoenix.query.QueryServices;
-import org.apache.phoenix.util.MetaDataUtil;
 import org.apache.phoenix.util.SchemaUtil;
-import org.junit.After;
-import org.junit.BeforeClass;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
-import org.junit.runner.RunWith;
-import org.junit.runners.Parameterized;
-import org.junit.runners.Parameterized.Parameters;
-
-import com.google.common.collect.Maps;
 
 /**
  * Test that verifies a user can read Phoenix tables with a minimal set of permissions.
  */
 @Category(NeedsOwnMiniClusterTest.class)
-@RunWith(Parameterized.class)
-public class TableDDLPermissionsIT{
-    private static String SUPERUSER;
-
-    private static HBaseTestingUtility testUtil;
-
-    private static final Set PHOENIX_SYSTEM_TABLES = new HashSet<>(Arrays.asList(
-            "SYSTEM.CATALOG", "SYSTEM.SEQUENCE", "SYSTEM.STATS", "SYSTEM.FUNCTION",
-            "SYSTEM.MUTEX"));
-    // PHOENIX- SYSTEM.MUTEX isn't being created in the SYSTEM namespace as it should be.
-    private static final Set PHOENIX_NAMESPACE_MAPPED_SYSTEM_TABLES = new HashSet<>(
-            Arrays.asList("SYSTEM:CATALOG", "SYSTEM:SEQUENCE", "SYSTEM:STATS", "SYSTEM:FUNCTION",
-                    "SYSTEM.MUTEX"));
-    private static final String GROUP_SYSTEM_ACCESS = "group_system_access";
-    final UserGroupInformation superUser = UserGroupInformation.createUserForTesting(SUPERUSER, new String[0]);
-    final UserGroupInformation superUser2 = UserGroupInformation.createUserForTesting("superuser", new String[0]);
-    final UserGroupInformation regularUser = UserGroupInformation.createUserForTesting("user",  new String[0]);
-    final UserGroupInformation groupUser = UserGroupInformation.createUserForTesting("user2", new String[] { GROUP_SYSTEM_ACCESS });
-    final UserGroupInformation unprivilegedUser = UserGroupInformation.createUserForTesting("unprivilegedUser",
-            new String[0]);
-
+public class TableDDLPermissionsIT extends BasePermissionsIT {
 
-    private static final int NUM_RECORDS = 5;
-
-    private boolean isNamespaceMapped;
-
-    public TableDDLPermissionsIT(final boolean isNamespaceMapped) throws Exception {
-        this.isNamespaceMapped = isNamespaceMapped;
-        Map clientProps = Maps.newHashMapWithExpectedSize(1);
-        clientProps.put(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, "true");
+    public TableDDLPermissionsIT(boolean isNamespaceMapped) throws Exception {
+        super(isNamespaceMapped);
     }
 
-    private void startNewMiniCluster(Configuration overrideConf) throws Exception{
-        if (null != testUtil) {
-            testUtil.shutdownMiniCluster();
-            testUtil = null;
-        }
-        testUtil = new HBaseTestingUtility();
-
-        Configuration config = testUtil.getConfiguration();
-
-        config.set("hbase.coprocessor.master.classes",
-                "org.apache.hadoop.hb

[33/35] phoenix git commit: PHOENIX-4542 Use .sha256 and .sha512

2018-01-31 Thread pboado
PHOENIX-4542 Use .sha256 and .sha512


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/76df368f
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/76df368f
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/76df368f

Branch: refs/heads/4.x-cdh5.11.2
Commit: 76df368fa64be922faaa8731127e5332e0bdd527
Parents: 80f195f
Author: Josh Elser 
Authored: Fri Jan 19 16:36:43 2018 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:49 2018 +

--
 dev/make_rc.sh | 8 
 1 file changed, 4 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/76df368f/dev/make_rc.sh
--
diff --git a/dev/make_rc.sh b/dev/make_rc.sh
index 687b23d..8b6063f 100755
--- a/dev/make_rc.sh
+++ b/dev/make_rc.sh
@@ -119,14 +119,14 @@ function_sign() {
   if [[ "$OSTYPE" == "darwin"* ]]; then
 gpg2 --armor --output $file.asc --detach-sig $file;
 openssl md5 $file > $file.md5;
-openssl dgst -sha512 $file > $file.sha;
-openssl dgst -sha256 $file >> $file.sha;
+openssl dgst -sha512 $file > $file.sha512;
+openssl dgst -sha256 $file >> $file.sha256;
   # all other OS
   else
 gpg --armor --output $file.asc --detach-sig $file;
 md5sum -b $file > $file.md5;
-sha512sum -b $file > $file.sha;
-sha256sum -b $file >> $file.sha;
+sha512sum -b $file > $file.sha512;
+sha256sum -b $file >> $file.sha256;
   fi
 }
 



[29/35] phoenix git commit: PHOENIX-4541 Fix apache-rat-check failures

2018-01-31 Thread pboado
PHOENIX-4541 Fix apache-rat-check failures


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/c2d921cc
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/c2d921cc
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/c2d921cc

Branch: refs/heads/4.x-cdh5.11.2
Commit: c2d921ccd396b977204d295c30bdeadd25f1f69c
Parents: 6a85b11
Author: Josh Elser 
Authored: Fri Jan 19 16:01:21 2018 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:48 2018 +

--
 .../phoenix/schema/TablesNotInSyncException.java   | 17 +
 pom.xml|  3 +++
 2 files changed, 20 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/c2d921cc/phoenix-core/src/main/java/org/apache/phoenix/schema/TablesNotInSyncException.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/TablesNotInSyncException.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/TablesNotInSyncException.java
index e58df71..dac5b7f 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/TablesNotInSyncException.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/TablesNotInSyncException.java
@@ -1,3 +1,20 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.phoenix.schema;
 
 import org.apache.phoenix.exception.SQLExceptionCode;

http://git-wip-us.apache.org/repos/asf/phoenix/blob/c2d921cc/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 961f0e5..fd1c4cf 100644
--- a/pom.xml
+++ b/pom.xml
@@ -500,6 +500,9 @@
 examples/pig/testdata
 
 **/patchprocess/**
+
+bin/argparse-1.4.0/argparse.py
   
 
   



[31/35] phoenix git commit: PHOENIX-3837 Feature enabling to set property on an index with Alter statement

2018-01-31 Thread pboado
PHOENIX-3837 Feature enabling to set property on an index with Alter statement

Signed-off-by: aertoria 


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/cc445628
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/cc445628
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/cc445628

Branch: refs/heads/4.x-cdh5.11.2
Commit: cc44562890709defea6820e7b0cb4f24fc9df393
Parents: 310b38c
Author: aertoria 
Authored: Mon Nov 27 03:13:53 2017 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:48 2018 +

--
 .../apache/phoenix/end2end/AlterTableIT.java|   2 +-
 .../phoenix/end2end/index/IndexMetadataIT.java  |  55 ++
 phoenix-core/src/main/antlr3/PhoenixSQL.g   |   5 +-
 .../apache/phoenix/jdbc/PhoenixStatement.java   |  10 +-
 .../phoenix/parse/AddColumnStatement.java   |   2 +-
 .../phoenix/parse/AlterIndexStatement.java  |  14 +
 .../apache/phoenix/parse/ParseNodeFactory.java  |   6 +-
 .../phoenix/query/ConnectionQueryServices.java  |   2 +
 .../query/ConnectionQueryServicesImpl.java  |  20 +
 .../query/ConnectionlessQueryServicesImpl.java  |   7 +
 .../query/DelegateConnectionQueryServices.java  |   8 +-
 .../apache/phoenix/schema/MetaDataClient.java   | 566 +--
 12 files changed, 520 insertions(+), 177 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/cc445628/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
index 5265b09..17f08c4 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
@@ -1080,7 +1080,7 @@ public class AlterTableIT extends ParallelStatsDisabledIT {
} catch (SQLException e) {

assertEquals(SQLExceptionCode.CANNOT_CREATE_TXN_TABLE_IF_TXNS_DISABLED.getErrorCode(), e.getErrorCode());
}
-   // altering a table to be transactional  should fail if transactions are disabled
+   // altering a table to be transactional should fail if transactions are disabled
conn.createStatement().execute("CREATE TABLE " + dataTableFullName + "(k INTEGER PRIMARY KEY, v VARCHAR)");
try {
conn.createStatement().execute("ALTER TABLE " + dataTableFullName + " SET TRANSACTIONAL=true");

http://git-wip-us.apache.org/repos/asf/phoenix/blob/cc445628/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexMetadataIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexMetadataIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexMetadataIT.java
index 0ce36dd..986c317 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexMetadataIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/IndexMetadataIT.java
@@ -674,4 +674,59 @@ public class IndexMetadataIT extends ParallelStatsDisabledIT {
 conn.close();
 }
 }
+
+
+
+@Test
+public void testIndexAlterPhoenixProperty() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+Connection conn = DriverManager.getConnection(getUrl(), props);
+String testTable = generateUniqueName();
+
+
+String ddl = "create table " + testTable  + " (k varchar primary key, v1 varchar)";
+Statement stmt = conn.createStatement();
+stmt.execute(ddl);
+String indexName = "IDX_" + generateUniqueName();
+
+ddl = "CREATE INDEX " + indexName + " ON " + testTable  + " (v1) ";
+stmt.execute(ddl);
+conn.createStatement().execute("ALTER INDEX "+indexName+" ON " + testTable +" ACTIVE SET GUIDE_POSTS_WIDTH = 10");
+
+ResultSet rs = conn.createStatement().executeQuery(
+"select GUIDE_POSTS_WIDTH from SYSTEM.\"CATALOG\" where TABLE_NAME='" + indexName + "'");
+assertTrue(rs.next());
+assertEquals(10,rs.getInt(1));
+
+conn.createStatement().execute("ALTER INDEX "+indexName+" ON " + testTable +" ACTIVE SET GUIDE_POSTS_WIDTH = 20");
+rs = conn.createStatement().executeQuery(
+"select GUIDE_POSTS_WIDTH from SYSTEM.\"CATALOG\" where TABLE_NAME='" + indexName + "'");
+assertTrue(rs.next());
+assertEquals(20,rs.getInt(1));
+}
+
+
+@Test
+public void testIndexAlterHBaseProperty() throws Exception {
+Pro

[08/35] phoenix git commit: PHOENIX-4449 Bundle a copy of Argparse-1.4.0 for installations that need it

2018-01-31 Thread pboado
http://git-wip-us.apache.org/repos/asf/phoenix/blob/bee4fbcf/bin/argparse-1.4.0/argparse.py
--
diff --git a/bin/argparse-1.4.0/argparse.py b/bin/argparse-1.4.0/argparse.py
new file mode 100644
index 000..70a77cc
--- /dev/null
+++ b/bin/argparse-1.4.0/argparse.py
@@ -0,0 +1,2392 @@
+# Author: Steven J. Bethard .
+# Maintainer: Thomas Waldmann 
+
+"""Command-line parsing library
+
+This module is an optparse-inspired command-line parsing library that:
+
+- handles both optional and positional arguments
+- produces highly informative usage messages
+- supports parsers that dispatch to sub-parsers
+
+The following is a simple usage example that sums integers from the
+command-line and writes the result to a file::
+
+parser = argparse.ArgumentParser(
+description='sum the integers at the command line')
+parser.add_argument(
+'integers', metavar='int', nargs='+', type=int,
+help='an integer to be summed')
+parser.add_argument(
+'--log', default=sys.stdout, type=argparse.FileType('w'),
+help='the file where the sum should be written')
+args = parser.parse_args()
+args.log.write('%s' % sum(args.integers))
+args.log.close()
+
+The module contains the following public classes:
+
+- ArgumentParser -- The main entry point for command-line parsing. As the
+example above shows, the add_argument() method is used to populate
+the parser with actions for optional and positional arguments. Then
+the parse_args() method is invoked to convert the args at the
+command-line into an object with attributes.
+
+- ArgumentError -- The exception raised by ArgumentParser objects when
+there are errors with the parser's actions. Errors raised while
+parsing the command-line are caught by ArgumentParser and emitted
+as command-line messages.
+
+- FileType -- A factory for defining types of files to be created. As the
+example above shows, instances of FileType are typically passed as
+the type= argument of add_argument() calls.
+
+- Action -- The base class for parser actions. Typically actions are
+selected by passing strings like 'store_true' or 'append_const' to
+the action= argument of add_argument(). However, for greater
+customization of ArgumentParser actions, subclasses of Action may
+be defined and passed as the action= argument.
+
+- HelpFormatter, RawDescriptionHelpFormatter, RawTextHelpFormatter,
+ArgumentDefaultsHelpFormatter -- Formatter classes which
+may be passed as the formatter_class= argument to the
+ArgumentParser constructor. HelpFormatter is the default,
+RawDescriptionHelpFormatter and RawTextHelpFormatter tell the parser
+not to change the formatting for help text, and
+ArgumentDefaultsHelpFormatter adds information about argument defaults
+to the help.
+
+All other classes in this module are considered implementation details.
+(Also note that HelpFormatter and RawDescriptionHelpFormatter are only
+considered public as object names -- the API of the formatter objects is
+still considered an implementation detail.)
+"""
+
+__version__ = '1.4.0'  # we use our own version number independant of the
+   # one in stdlib and we release this on pypi.
+
+__external_lib__ = True  # to make sure the tests really test THIS lib,
+ # not the builtin one in Python stdlib
+
+__all__ = [
+'ArgumentParser',
+'ArgumentError',
+'ArgumentTypeError',
+'FileType',
+'HelpFormatter',
+'ArgumentDefaultsHelpFormatter',
+'RawDescriptionHelpFormatter',
+'RawTextHelpFormatter',
+'Namespace',
+'Action',
+'ONE_OR_MORE',
+'OPTIONAL',
+'PARSER',
+'REMAINDER',
+'SUPPRESS',
+'ZERO_OR_MORE',
+]
+
+
+import copy as _copy
+import os as _os
+import re as _re
+import sys as _sys
+import textwrap as _textwrap
+
+from gettext import gettext as _
+
+try:
+set
+except NameError:
+# for python < 2.4 compatibility (sets module is there since 2.3):
+from sets import Set as set
+
+try:
+basestring
+except NameError:
+basestring = str
+
+try:
+sorted
+except NameError:
+# for python < 2.4 compatibility:
+def sorted(iterable, reverse=False):
+result = list(iterable)
+result.sort()
+if reverse:
+result.reverse()
+return result
+
+
+def _callable(obj):
+return hasattr(obj, '__call__') or hasattr(obj, '__bases__')
+
+
+SUPPRESS = '==SUPPRESS=='
+
+OPTIONAL = '?'
+ZERO_OR_MORE = '*'
+ONE_OR_MORE = '+'
+PARSER = 'A...'
+REMAINDER = '...'
+_UNRECOGNIZED_ARGS_ATTR = '_unrecognized_args'
+
+# =
+# Utility functions and classes
+# =
+
+class _AttributeHolder(object):
+"""Abstract base class that provides

[23/35] phoenix git commit: PHOENIX-4198 Remove the need for users to have access to the Phoenix SYSTEM tables to create tables

2018-01-31 Thread pboado
http://git-wip-us.apache.org/repos/asf/phoenix/blob/8468f802/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/PhoenixAccessController.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/PhoenixAccessController.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/PhoenixAccessController.java
new file mode 100644
index 000..8437b37
--- /dev/null
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/PhoenixAccessController.java
@@ -0,0 +1,628 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.coprocessor;
+
+import java.io.IOException;
+import java.net.InetAddress;
+import java.security.PrivilegedExceptionAction;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.AuthUtil;
+import org.apache.hadoop.hbase.CoprocessorEnvironment;
+import org.apache.hadoop.hbase.DoNotRetryIOException;
+import org.apache.hadoop.hbase.HColumnDescriptor;
+import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.NamespaceDescriptor;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.ClusterConnection;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.coprocessor.BaseMasterAndRegionObserver;
+import org.apache.hadoop.hbase.coprocessor.MasterCoprocessorEnvironment;
+import org.apache.hadoop.hbase.coprocessor.ObserverContext;
+import org.apache.hadoop.hbase.ipc.PayloadCarryingRpcController;
+import org.apache.hadoop.hbase.ipc.RpcServer;
+import org.apache.hadoop.hbase.protobuf.ProtobufUtil;
+import org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos;
+import org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.AccessControlService;
+import org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost;
+import org.apache.hadoop.hbase.security.AccessDeniedException;
+import org.apache.hadoop.hbase.security.User;
+import org.apache.hadoop.hbase.security.UserProvider;
+import org.apache.hadoop.hbase.security.access.AccessControlClient;
+import org.apache.hadoop.hbase.security.access.AuthResult;
+import org.apache.hadoop.hbase.security.access.Permission;
+import org.apache.hadoop.hbase.security.access.Permission.Action;
+import org.apache.hadoop.hbase.security.access.UserPermission;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.PhoenixMetaDataControllerEnvironment;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.query.QueryServicesOptions;
+import org.apache.phoenix.schema.PIndexState;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableType;
+import org.apache.phoenix.util.MetaDataUtil;
+
+import com.google.common.collect.Lists;
+import com.google.protobuf.RpcCallback;
+
+public class PhoenixAccessController extends BaseMetaDataEndpointObserver {
+
+private PhoenixMetaDataControllerEnvironment env;
+private ArrayList accessControllers;
+private boolean accessCheckEnabled;
+private UserProvider userProvider;
+private boolean isAutomaticGrantEnabled;
+private boolean isStrictMode;
+public static final Log LOG = LogFactory.getLog(PhoenixAccessController.class);
+private static final Log AUDITLOG =
+LogFactory.getLog("SecurityLogger."+PhoenixAccessController.class.getName());
+
+private List getAccessControllers() throws IOException {
+if (accessControllers == null) {
+synchronized (this) {
+if (accessControllers == null) {
+accessControllers = new ArrayList();
+RegionCoprocessorHost cpHost = this.env.getCoprocessorHost();
+List coprocessors = cpHost
+.findCoprocessors(BaseMasterAndRegionObserver.c
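`getAccessControllers()` above lazily builds its delegate list with double-checked locking: an unlocked null check, then a `synchronized` block, then a re-check before constructing. A small Python sketch of the same shape (class and field names here are illustrative, not Phoenix API):

```python
import threading

class LazyControllers:
    """Build an expensive list at most once under concurrent access,
    using the check / lock / re-check shape of getAccessControllers()."""

    def __init__(self, factory):
        self._factory = factory
        self._lock = threading.Lock()
        self._controllers = None
        self.builds = 0  # counts how many times the factory actually ran

    def get(self):
        if self._controllers is None:           # first, unlocked check
            with self._lock:
                if self._controllers is None:   # re-check under the lock
                    self.builds += 1
                    self._controllers = self._factory()
        return self._controllers

holder = LazyControllers(lambda: ["access-controller"])
threads = [threading.Thread(target=holder.get) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(holder.builds)  # the factory ran exactly once
```

In Java the same idiom additionally needs the field to be safely published (e.g. declared `volatile`) for the unlocked read to be correct; the sketch only shows the control flow.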

[34/35] phoenix git commit: Update 4.x poms to 4.14 snapshot

2018-01-31 Thread pboado
Update 4.x poms to 4.14 snapshot


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/e5bfd0d2
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/e5bfd0d2
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/e5bfd0d2

Branch: refs/heads/4.x-cdh5.11.2
Commit: e5bfd0d27ef41d8ed614abb297dc0e90225879c7
Parents: 76df368
Author: Pedro Boado 
Authored: Thu Jan 25 01:37:29 2018 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:25:32 2018 +

--
 phoenix-assembly/pom.xml   |  2 +-
 phoenix-client/pom.xml |  2 +-
 phoenix-core/pom.xml   |  2 +-
 phoenix-flume/pom.xml  |  2 +-
 phoenix-hive/pom.xml   |  2 +-
 phoenix-kafka/pom.xml  |  2 +-
 phoenix-load-balancer/pom.xml  |  2 +-
 phoenix-parcel/pom.xml |  2 +-
 phoenix-pherf/pom.xml  |  2 +-
 phoenix-pig/pom.xml|  2 +-
 phoenix-queryserver-client/pom.xml |  2 +-
 phoenix-queryserver/pom.xml|  2 +-
 phoenix-server/pom.xml |  2 +-
 phoenix-spark/pom.xml  |  2 +-
 phoenix-tracing-webapp/pom.xml |  2 +-
 pom.xml| 19 ++-
 16 files changed, 33 insertions(+), 16 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/e5bfd0d2/phoenix-assembly/pom.xml
--
diff --git a/phoenix-assembly/pom.xml b/phoenix-assembly/pom.xml
index 27631c8..55a9a6e 100644
--- a/phoenix-assembly/pom.xml
+++ b/phoenix-assembly/pom.xml
@@ -27,7 +27,7 @@
   
 org.apache.phoenix
 phoenix
-4.13.0-cdh5.11.2
+4.14.0-cdh5.11.2-SNAPSHOT
   
   phoenix-assembly
   Phoenix Assembly

http://git-wip-us.apache.org/repos/asf/phoenix/blob/e5bfd0d2/phoenix-client/pom.xml
--
diff --git a/phoenix-client/pom.xml b/phoenix-client/pom.xml
index 1a738a2..2454de6 100644
--- a/phoenix-client/pom.xml
+++ b/phoenix-client/pom.xml
@@ -27,7 +27,7 @@
   
 org.apache.phoenix
 phoenix
-4.13.0-cdh5.11.2
+4.14.0-cdh5.11.2-SNAPSHOT
   
   phoenix-client
   Phoenix Client

http://git-wip-us.apache.org/repos/asf/phoenix/blob/e5bfd0d2/phoenix-core/pom.xml
--
diff --git a/phoenix-core/pom.xml b/phoenix-core/pom.xml
index 716fa6f..2cb4c81 100644
--- a/phoenix-core/pom.xml
+++ b/phoenix-core/pom.xml
@@ -4,7 +4,7 @@
   
 org.apache.phoenix
 phoenix
-4.13.0-cdh5.11.2
+4.14.0-cdh5.11.2-SNAPSHOT
   
   phoenix-core
   Phoenix Core

http://git-wip-us.apache.org/repos/asf/phoenix/blob/e5bfd0d2/phoenix-flume/pom.xml
--
diff --git a/phoenix-flume/pom.xml b/phoenix-flume/pom.xml
index 39a4ccd..0883e5e 100644
--- a/phoenix-flume/pom.xml
+++ b/phoenix-flume/pom.xml
@@ -26,7 +26,7 @@
   
 org.apache.phoenix
 phoenix
-4.13.0-cdh5.11.2
+4.14.0-cdh5.11.2-SNAPSHOT
   
   phoenix-flume
   Phoenix - Flume

http://git-wip-us.apache.org/repos/asf/phoenix/blob/e5bfd0d2/phoenix-hive/pom.xml
--
diff --git a/phoenix-hive/pom.xml b/phoenix-hive/pom.xml
index a57f2d6..809fbea 100644
--- a/phoenix-hive/pom.xml
+++ b/phoenix-hive/pom.xml
@@ -27,7 +27,7 @@
   
 org.apache.phoenix
 phoenix
-4.13.0-cdh5.11.2
+4.14.0-cdh5.11.2-SNAPSHOT
   
   phoenix-hive
   Phoenix - Hive

http://git-wip-us.apache.org/repos/asf/phoenix/blob/e5bfd0d2/phoenix-kafka/pom.xml
--
diff --git a/phoenix-kafka/pom.xml b/phoenix-kafka/pom.xml
index c904bfc..c2cb7db 100644
--- a/phoenix-kafka/pom.xml
+++ b/phoenix-kafka/pom.xml
@@ -26,7 +26,7 @@

org.apache.phoenix
phoenix
-   4.13.0-cdh5.11.2
+   4.14.0-cdh5.11.2-SNAPSHOT

phoenix-kafka
Phoenix - Kafka

http://git-wip-us.apache.org/repos/asf/phoenix/blob/e5bfd0d2/phoenix-load-balancer/pom.xml
--
diff --git a/phoenix-load-balancer/pom.xml b/phoenix-load-balancer/pom.xml
index 5bc5e7c..81e124a 100644
--- a/phoenix-load-balancer/pom.xml
+++ b/phoenix-load-balancer/pom.xml
@@ -27,7 +27,7 @@
   
 org.apache.phoenix
 phoenix
-4.13.0-cdh5.11.2
+4.14.0-cdh5.11.2-SNAPSHOT
   
   phoenix-load-balancer
   Phoenix Load Balancer

http://git-wip-us.apache.org/repos/asf/phoenix/blob/e5bfd0d2/phoenix-parcel/pom.xml
--
diff --git a/phoenix-parcel/pom.xml b/phoenix-parcel/pom.xml
index 5498c0e..31b502f 100644
--- a/phoenix-par

[19/35] phoenix git commit: PHOENIX-4386 Calculate the estimatedSize of MutationState using Map> mutations (addendum)

2018-01-31 Thread pboado
PHOENIX-4386 Calculate the estimatedSize of MutationState using Map> mutations (addendum)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/310b38c5
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/310b38c5
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/310b38c5

Branch: refs/heads/4.x-cdh5.11.2
Commit: 310b38c5ab7d67b5852e20c021c7bb2508803b96
Parents: d5bc5ce
Author: Thomas D'Silva 
Authored: Tue Nov 21 03:13:53 2017 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:48 2018 +

--
 .../apache/phoenix/execute/PartialCommitIT.java |   5 +-
 .../apache/phoenix/compile/DeleteCompiler.java  |  11 +-
 .../apache/phoenix/compile/UpsertCompiler.java  |   7 +-
 .../apache/phoenix/execute/MutationState.java   | 127 ---
 .../java/org/apache/phoenix/util/IndexUtil.java |   4 +-
 .../org/apache/phoenix/util/KeyValueUtil.java   |   5 +-
 6 files changed, 98 insertions(+), 61 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/310b38c5/phoenix-core/src/it/java/org/apache/phoenix/execute/PartialCommitIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/execute/PartialCommitIT.java b/phoenix-core/src/it/java/org/apache/phoenix/execute/PartialCommitIT.java
index 10fd7f8..e5b57e3 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/execute/PartialCommitIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/execute/PartialCommitIT.java
@@ -33,7 +33,6 @@ import java.sql.SQLException;
 import java.sql.Statement;
 import java.util.Arrays;
 import java.util.Collection;
-import java.util.Collections;
 import java.util.Comparator;
 import java.util.List;
 import java.util.Map;
@@ -52,8 +51,8 @@ import org.apache.hadoop.hbase.coprocessor.SimpleRegionObserver;
 import org.apache.hadoop.hbase.regionserver.wal.WALEdit;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.end2end.BaseOwnClusterIT;
+import org.apache.phoenix.execute.MutationState.MultiRowMutationState;
 import org.apache.phoenix.hbase.index.Indexer;
-import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.monitoring.GlobalMetric;
 import org.apache.phoenix.monitoring.MetricType;
@@ -285,7 +284,7 @@ public class PartialCommitIT extends BaseOwnClusterIT {
 private PhoenixConnection getConnectionWithTableOrderPreservingMutationState() throws SQLException {
 Connection con = driver.connect(url, new Properties());
 PhoenixConnection phxCon = new PhoenixConnection(con.unwrap(PhoenixConnection.class));
-final Map> mutations = Maps.newTreeMap(new TableRefComparator());
+final Map mutations = Maps.newTreeMap(new TableRefComparator());
 // passing a null mutation state forces the connection.newMutationState() to be used to create the MutationState
 return new PhoenixConnection(phxCon, null) {
 @Override

http://git-wip-us.apache.org/repos/asf/phoenix/blob/310b38c5/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
index f9ca300..a06e2ca 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
@@ -26,7 +26,6 @@ import java.util.ArrayList;
 import java.util.Collections;
 import java.util.Iterator;
 import java.util.List;
-import java.util.Map;
 import java.util.Set;
 
 import org.apache.hadoop.hbase.Cell;
@@ -43,6 +42,7 @@ import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.exception.SQLExceptionInfo;
 import org.apache.phoenix.execute.AggregatePlan;
 import org.apache.phoenix.execute.MutationState;
+import org.apache.phoenix.execute.MutationState.MultiRowMutationState;
 import org.apache.phoenix.execute.MutationState.RowMutationState;
 import org.apache.phoenix.filter.SkipScanFilter;
 import org.apache.phoenix.hbase.index.ValueGetter;
@@ -91,7 +91,6 @@ import org.apache.phoenix.util.ScanUtil;
 
 import com.google.common.base.Preconditions;
 import com.google.common.collect.Lists;
-import com.google.common.collect.Maps;
 import com.sun.istack.NotNull;
 
 public class DeleteCompiler {
@@ -121,14 +120,14 @@ public class DeleteCompiler {
 final int maxSize = services.getProps().getInt(QueryServices.MAX_MUTATION_SIZE_ATTRIB,QueryServicesOptions.DEFAULT_MAX_MUTATION_SIZE);
 final int maxSizeBytes = services.getProps().getInt(QueryServices.MAX_MUTAT

[21/35] phoenix git commit: PHOENIX-4466 Do not relocate hadoop code (addendum)

2018-01-31 Thread pboado
PHOENIX-4466 Do not relocate hadoop code (addendum)

Turns out relocating hadoop-common (most obviously) breaks
some security-related classes in hadoop-common around Kerberos logins.


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/17d03292
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/17d03292
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/17d03292

Branch: refs/heads/4.x-cdh5.11.2
Commit: 17d03292ef3cca66461868d22529734c9f936ee2
Parents: 3cc1ad1
Author: Josh Elser 
Authored: Fri Jan 5 17:32:47 2018 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:48 2018 +

--
 phoenix-queryserver-client/pom.xml | 4 
 1 file changed, 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/17d03292/phoenix-queryserver-client/pom.xml
--
diff --git a/phoenix-queryserver-client/pom.xml b/phoenix-queryserver-client/pom.xml
index 7b32bf0..0e72280 100644
--- a/phoenix-queryserver-client/pom.xml
+++ b/phoenix-queryserver-client/pom.xml
@@ -133,10 +133,6 @@
   
 
 
-  org.apache.hadoop
-  ${shaded.package}.org.apache.hadoop
-
-
   org.apache.commons
   ${shaded.package}.org.apache.commons
   



[10/35] phoenix git commit: PHOENIX-4342 - Surface QueryPlan in MutationPlan

2018-01-31 Thread pboado
PHOENIX-4342 - Surface QueryPlan in MutationPlan


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/00f1ef8f
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/00f1ef8f
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/00f1ef8f

Branch: refs/heads/4.x-cdh5.11.2
Commit: 00f1ef8f137acbf0f8c402af8cd621aa6910fcd4
Parents: bee4fbc
Author: Geoffrey Jacoby 
Authored: Thu Nov 2 20:41:02 2017 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:48 2018 +

--
 .../phoenix/compile/BaseMutationPlan.java   |   5 +
 .../phoenix/compile/DelegateMutationPlan.java   |   5 +
 .../apache/phoenix/compile/DeleteCompiler.java  | 545 ---
 .../apache/phoenix/compile/MutationPlan.java|   5 +-
 .../apache/phoenix/compile/UpsertCompiler.java  | 675 +++
 .../apache/phoenix/jdbc/PhoenixStatement.java   |   9 +-
 6 files changed, 733 insertions(+), 511 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/00f1ef8f/phoenix-core/src/main/java/org/apache/phoenix/compile/BaseMutationPlan.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/BaseMutationPlan.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/BaseMutationPlan.java
index 0e45682..60eb59a 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/BaseMutationPlan.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/BaseMutationPlan.java
@@ -79,4 +79,9 @@ public abstract class BaseMutationPlan implements MutationPlan {
 return 0l;
 }
 
+@Override
+public QueryPlan getQueryPlan() {
+return null;
+}
+
 }
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/phoenix/blob/00f1ef8f/phoenix-core/src/main/java/org/apache/phoenix/compile/DelegateMutationPlan.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/DelegateMutationPlan.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/DelegateMutationPlan.java
index 343ec32..90eef61 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/DelegateMutationPlan.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/DelegateMutationPlan.java
@@ -42,6 +42,11 @@ public class DelegateMutationPlan implements MutationPlan {
 }
 
 @Override
+public QueryPlan getQueryPlan() {
+return plan.getQueryPlan();
+}
+
+@Override
 public ParameterMetaData getParameterMetaData() {
 return plan.getParameterMetaData();
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/00f1ef8f/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
index f038cda..8d9a5b6 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
@@ -303,14 +303,16 @@ public class DeleteCompiler {
 return Collections.emptyList();
 }
 
-private class MultiDeleteMutationPlan implements MutationPlan {
+private class MultiRowDeleteMutationPlan implements MutationPlan {
 private final List plans;
 private final MutationPlan firstPlan;
-
-public MultiDeleteMutationPlan(@NotNull List plans) {
+private final QueryPlan dataPlan;
+
+public MultiRowDeleteMutationPlan(QueryPlan dataPlan, @NotNull List plans) {
 Preconditions.checkArgument(!plans.isEmpty());
 this.plans = plans;
 this.firstPlan = plans.get(0);
+this.dataPlan = dataPlan;
 }
 
 @Override
@@ -348,8 +350,8 @@ public class DeleteCompiler {
 return firstPlan.getSourceRefs();
 }
 
-   @Override
-   public Operation getOperation() {
+   @Override
+   public Operation getOperation() {
return operation;
}
 
@@ -401,6 +403,11 @@ public class DeleteCompiler {
 }
 return estInfoTimestamp;
 }
+
+@Override
+public QueryPlan getQueryPlan() {
+return dataPlan;
+}
 }
 
 public MutationPlan compile(DeleteStatement delete) throws SQLException {
@@ -548,69 +555,9 @@ public class DeleteCompiler {
        List<MutationPlan> mutationPlans = Lists.newArrayListWithExpectedSize(queryPlans.size());
 for (final QueryPlan plan : queryPlans) {
 fina

[25/35] phoenix git commit: PHOENIX-4531 Delete on a table with a global mutable index can issue client-side deletes against the index

2018-01-31 Thread pboado
PHOENIX-4531 Delete on a table with a global mutable index can issue 
client-side deletes against the index


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/26c284c5
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/26c284c5
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/26c284c5

Branch: refs/heads/4.x-cdh5.11.2
Commit: 26c284c5639bc69b2a5a4c551d41bc207737d0f9
Parents: c2d921c
Author: Vincent Poon 
Authored: Sat Jan 20 01:22:11 2018 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:48 2018 +

--
 .../phoenix/end2end/index/BaseIndexIT.java  | 20 ++
 .../end2end/index/PartialIndexRebuilderIT.java  | 48 -
 .../apache/phoenix/compile/DeleteCompiler.java  | 71 ++--
 .../apache/phoenix/optimize/QueryOptimizer.java | 13 ++--
 .../phoenix/compile/QueryOptimizerTest.java | 41 +++
 5 files changed, 168 insertions(+), 25 deletions(-)
--
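This commit keeps index maintenance for global mutable indexes server-side, so a client-side DELETE should queue mutations only for the data table. The invariant the new `assertNoClientSideIndexMutations` test checks can be sketched in a self-contained way; the `Map`-based model of uncommitted mutations below is hypothetical, not Phoenix's actual API:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ClientSideMutationCheck {
    // Model: table name -> pending mutations, in insertion order. The first
    // entry should be the data table; any further entry would be client-side
    // index maintenance, which this commit is meant to eliminate.
    static boolean onlyDataTableMutations(Map<String, List<String>> uncommitted, String dataTable) {
        Iterator<Map.Entry<String, List<String>>> it = uncommitted.entrySet().iterator();
        if (!it.hasNext()) return true;                           // nothing queued at all
        if (!it.next().getKey().equals(dataTable)) return false;  // first entry must be the data table
        return !it.hasNext();                                     // and nothing may follow it
    }

    public static void main(String[] args) {
        Map<String, List<String>> pending = new LinkedHashMap<>();
        pending.put("T", Arrays.asList("DELETE row1"));
        System.out.println(onlyDataTableMutations(pending, "T")); // true
        pending.put("IDX_T", Arrays.asList("DELETE idxRow1"));
        System.out.println(onlyDataTableMutations(pending, "T")); // false
    }
}
```

In the real test the same walk is done over `PhoenixRuntime.getUncommittedDataIterator(conn)` before `conn.commit()`.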


http://git-wip-us.apache.org/repos/asf/phoenix/blob/26c284c5/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/BaseIndexIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/BaseIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/BaseIndexIT.java
index 049416c..b92da4a 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/BaseIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/BaseIndexIT.java
@@ -37,6 +37,8 @@ import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.sql.Statement;
+import java.util.Iterator;
+import java.util.List;
 import java.util.Properties;
 import java.util.Random;
 
@@ -51,6 +53,8 @@ import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.ipc.PhoenixRpcSchedulerFactory;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
 import org.apache.phoenix.compile.ColumnResolver;
 import org.apache.phoenix.compile.FromCompiler;
 import org.apache.phoenix.end2end.ParallelStatsDisabledIT;
@@ -68,6 +72,7 @@ import org.apache.phoenix.schema.PTableKey;
 import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.util.DateUtil;
 import org.apache.phoenix.util.EnvironmentEdgeManager;
+import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.QueryUtil;
 import org.apache.phoenix.util.ReadOnlyProps;
@@ -202,6 +207,7 @@ public abstract class BaseIndexIT extends ParallelStatsDisabledIT {
 
         String dml = "DELETE from " + fullTableName + " WHERE long_col2 = 4";
         assertEquals(1,conn.createStatement().executeUpdate(dml));
+        assertNoClientSideIndexMutations(conn);
         conn.commit();
 
         String query = "SELECT /*+ NO_INDEX */ long_pk FROM " + fullTableName;
@@ -232,6 +238,19 @@ public abstract class BaseIndexIT extends ParallelStatsDisabledIT {
 }
 }
 
+    private void assertNoClientSideIndexMutations(Connection conn) throws SQLException {
+        if (mutable) {
+            Iterator<Pair<byte[], List<Mutation>>> iterator = PhoenixRuntime.getUncommittedDataIterator(conn);
+            if (iterator.hasNext()) {
+                byte[] tableName = iterator.next().getFirst(); // skip data table mutations
+                PTable table = PhoenixRuntime.getTable(conn, Bytes.toString(tableName));
+                assertTrue(table.getType() == PTableType.TABLE); // should be data table
+                boolean hasIndexData = iterator.hasNext();
+                assertFalse(hasIndexData); // should have no index data
+            }
+        }
+    }
+
 @Test
 public void testCreateIndexAfterUpsertStarted() throws Exception {
 testCreateIndexAfterUpsertStarted(false, 
@@ -367,6 +386,7 @@ public abstract class BaseIndexIT extends ParallelStatsDisabledIT {
 
         String dml = "DELETE from " + fullTableName + " WHERE long_col2 = 4";
         assertEquals(1,conn.createStatement().executeUpdate(dml));
+        assertNoClientSideIndexMutations(conn);
         conn.commit();
 
 // query the data table

http://git-wip-us.apache.org/repos/asf/phoenix/blob/26c284c5/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java
index a1da339..31649bd 100644
--- 
a/phoenix-core/src/it/java/or

[28/35] phoenix git commit: PHOENIX-3050 Handle DESC columns in child/parent join optimization

2018-01-31 Thread pboado
PHOENIX-3050 Handle DESC columns in child/parent join optimization


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/d5bc5ce2
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/d5bc5ce2
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/d5bc5ce2

Branch: refs/heads/4.x-cdh5.11.2
Commit: d5bc5ce2777486e00efa6237fa965843035ee324
Parents: 515f10d
Author: maryannxue 
Authored: Mon Nov 6 02:37:55 2017 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:48 2018 +

--
 .../phoenix/end2end/join/HashJoinMoreIT.java |  5 +
 .../org/apache/phoenix/compile/JoinCompiler.java | 19 +--
 .../apache/phoenix/compile/QueryCompiler.java|  6 +++---
 .../apache/phoenix/compile/WhereOptimizer.java   |  5 -
 4 files changed, 21 insertions(+), 14 deletions(-)
--
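The patch derives the coercion target's sort order from the join strategy instead of always forcing ASC. That selection rule can be sketched self-contained; the enums mirror the ones added in the diff, but everything else here is simplified from Phoenix's actual expression machinery:

```java
public class JoinSortOrderSketch {
    enum Strategy { HASH_BUILD_LEFT, HASH_BUILD_RIGHT, SORT_MERGE }
    enum SortOrder { ASC, DESC }

    // Sort-merge compares coerced keys directly, so both sides normalize to ASC;
    // a hash join only needs the probe side coerced to the build side's existing
    // order, which lets DESC-encoded PK columns join without re-encoding.
    static SortOrder targetSortOrder(Strategy strategy, SortOrder left, SortOrder right) {
        return strategy == Strategy.SORT_MERGE
                ? SortOrder.ASC
                : (strategy == Strategy.HASH_BUILD_LEFT ? right : left);
    }

    public static void main(String[] args) {
        System.out.println(targetSortOrder(Strategy.SORT_MERGE, SortOrder.DESC, SortOrder.DESC));      // ASC
        System.out.println(targetSortOrder(Strategy.HASH_BUILD_RIGHT, SortOrder.DESC, SortOrder.ASC)); // DESC
    }
}
```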


http://git-wip-us.apache.org/repos/asf/phoenix/blob/d5bc5ce2/phoenix-core/src/it/java/org/apache/phoenix/end2end/join/HashJoinMoreIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/join/HashJoinMoreIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/join/HashJoinMoreIT.java
index 37ffd02..f09f1d3 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/join/HashJoinMoreIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/join/HashJoinMoreIT.java
@@ -895,6 +895,11 @@ public class HashJoinMoreIT extends ParallelStatsDisabledIT {
                 + "FROM ( SELECT ACCOUNT_ID, BUCKET_ID, OBJECT_ID, MAX(OBJECT_VERSION) AS MAXVER "
                 + "   FROM test2961 GROUP BY ACCOUNT_ID, BUCKET_ID, OBJECT_ID) AS X "
                 + "   INNER JOIN test2961 AS OBJ ON X.ACCOUNT_ID = OBJ.ACCOUNT_ID AND X.BUCKET_ID = OBJ.BUCKET_ID AND X.OBJECT_ID = OBJ.OBJECT_ID AND  X.MAXVER = OBJ.OBJECT_VERSION";
+        rs = conn.createStatement().executeQuery("explain " + q);
+        String plan = QueryUtil.getExplainPlan(rs);
+        String dynamicFilter = "DYNAMIC SERVER FILTER BY (OBJ.ACCOUNT_ID, OBJ.BUCKET_ID, OBJ.OBJECT_ID, OBJ.OBJECT_VERSION) IN ((X.ACCOUNT_ID, X.BUCKET_ID, X.OBJECT_ID, X.MAXVER))";
+        assertTrue("Expected '" + dynamicFilter + "' to be used for the query, but got:\n" + plan,
+                plan.contains(dynamicFilter));
 rs = conn.createStatement().executeQuery(q);
 assertTrue(rs.next());
 assertEquals("", rs.getString(4));

http://git-wip-us.apache.org/repos/asf/phoenix/blob/d5bc5ce2/phoenix-core/src/main/java/org/apache/phoenix/compile/JoinCompiler.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/JoinCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/JoinCompiler.java
index 887e2d2..439a79b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/JoinCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/JoinCompiler.java
@@ -110,6 +110,12 @@ import com.google.common.collect.Sets;
 
 public class JoinCompiler {
 
+public enum Strategy {
+HASH_BUILD_LEFT,
+HASH_BUILD_RIGHT,
+SORT_MERGE,
+}
+
 public enum ColumnRefType {
 JOINLOCAL,
 GENERAL,
@@ -489,7 +495,7 @@ public class JoinCompiler {
 return dependencies;
 }
 
-    public Pair<List<Expression>, List<Expression>> compileJoinConditions(StatementContext lhsCtx, StatementContext rhsCtx, boolean sortExpressions) throws SQLException {
+    public Pair<List<Expression>, List<Expression>> compileJoinConditions(StatementContext lhsCtx, StatementContext rhsCtx, Strategy strategy) throws SQLException {
         if (onConditions.isEmpty()) {
             return new Pair<List<Expression>, List<Expression>>(
                     Collections.<Expression> singletonList(LiteralExpression.newConstant(1)),
@@ -505,15 +511,16 @@ public class JoinCompiler {
 rhsCompiler.reset();
 Expression right = condition.getRHS().accept(rhsCompiler);
                 PDataType toType = getCommonType(left.getDataType(), right.getDataType());
-                if (left.getDataType() != toType || left.getSortOrder() == SortOrder.DESC) {
-                    left = CoerceExpression.create(left, toType, SortOrder.ASC, left.getMaxLength());
+                SortOrder toSortOrder = strategy == Strategy.SORT_MERGE ? SortOrder.ASC : (strategy == Strategy.HASH_BUILD_LEFT ? right.getSortOrder() : left.getSortOrder());
+                if (left.getDataType() != toType || left.getSortOrder() != toSortOrder) {
+                    left = CoerceExpression.create(left, toType, toSortOrder, left.getMaxLength());
 }
-if (right.getDataType() != toType || right.getSortOrder

[22/35] phoenix git commit: PHOENIX-4415 Ignore CURRENT_SCN property if set in Pig Storer

2018-01-31 Thread pboado
PHOENIX-4415 Ignore CURRENT_SCN property if set in Pig Storer


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/2c4ca690
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/2c4ca690
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/2c4ca690

Branch: refs/heads/4.x-cdh5.11.2
Commit: 2c4ca6900ae1c4f43e293aa0096393356dd3bbfa
Parents: cc44562
Author: James Taylor 
Authored: Wed Nov 8 03:13:53 2017 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:48 2018 +

--
 .../phoenix/mapreduce/PhoenixOutputFormat.java  | 13 ++-
 .../phoenix/mapreduce/PhoenixRecordWriter.java  |  8 ++-
 .../phoenix/mapreduce/util/ConnectionUtil.java  | 23 
 .../org/apache/phoenix/util/PropertiesUtil.java |  9 +++-
 .../java/org/apache/phoenix/pig/BasePigIT.java  |  4 
 .../apache/phoenix/pig/PhoenixHBaseStorage.java | 12 ++
 6 files changed, 58 insertions(+), 11 deletions(-)
--
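The new `getOutputConnectionWithoutTheseProps` plumbing lets the Pig storer open an output connection while dropping caller-specified properties such as CurrentSCN. The filtering idea can be sketched with plain `java.util.Properties` (an assumption for illustration; the real method works against a Hadoop `Configuration`):

```java
import java.util.Collections;
import java.util.Properties;
import java.util.Set;

public class PropsFilterSketch {
    // Copy the supplied properties, dropping any keys the caller asked to
    // ignore (e.g. CurrentSCN, which PHOENIX-4415 strips so Pig writes use the
    // server timestamp instead of a stale client-supplied SCN).
    static Properties withoutProps(Properties src, Set<String> propsToIgnore) {
        Properties copy = new Properties();
        for (String name : src.stringPropertyNames()) {
            if (!propsToIgnore.contains(name)) {
                copy.setProperty(name, src.getProperty(name));
            }
        }
        return copy;
    }

    public static void main(String[] args) {
        Properties p = new Properties();
        p.setProperty("CurrentSCN", "12345");
        p.setProperty("TenantId", "t1");
        Properties filtered = withoutProps(p, Collections.singleton("CurrentSCN"));
        System.out.println(filtered.containsKey("CurrentSCN")); // false
        System.out.println(filtered.getProperty("TenantId"));   // t1
    }
}
```

Copying rather than mutating the source keeps the caller's configuration intact for any later reads.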


http://git-wip-us.apache.org/repos/asf/phoenix/blob/2c4ca690/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixOutputFormat.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixOutputFormat.java b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixOutputFormat.java
index e55b977..4217e40 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixOutputFormat.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixOutputFormat.java
@@ -19,6 +19,8 @@ package org.apache.phoenix.mapreduce;
 
 import java.io.IOException;
 import java.sql.SQLException;
+import java.util.Collections;
+import java.util.Set;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -36,6 +38,15 @@ import org.apache.hadoop.mapreduce.lib.db.DBWritable;
  */
 public class PhoenixOutputFormat<T extends DBWritable> extends OutputFormat<NullWritable, T> {
     private static final Log LOG = LogFactory.getLog(PhoenixOutputFormat.class);
+    private final Set<String> propsToIgnore;
+
+    public PhoenixOutputFormat() {
+        this(Collections.<String>emptySet());
+    }
+
+    public PhoenixOutputFormat(Set<String> propsToIgnore) {
+        this.propsToIgnore = propsToIgnore;
+    }
 
 @Override
     public void checkOutputSpecs(JobContext jobContext) throws IOException, InterruptedException {
@@ -52,7 +63,7 @@ public class PhoenixOutputFormat<T extends DBWritable> extends OutputFormat<NullWritable, T> {
     public RecordWriter<NullWritable, T> getRecordWriter(TaskAttemptContext context) throws IOException, InterruptedException {
         try {
-            return new PhoenixRecordWriter<T>(context.getConfiguration());
+            return new PhoenixRecordWriter<T>(context.getConfiguration(), propsToIgnore);
 } catch (SQLException e) {
 LOG.error("Error calling PhoenixRecordWriter "  + e.getMessage());
 throw new RuntimeException(e);

http://git-wip-us.apache.org/repos/asf/phoenix/blob/2c4ca690/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixRecordWriter.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixRecordWriter.java b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixRecordWriter.java
index 70ee3f5..52f2fe3 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixRecordWriter.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixRecordWriter.java
@@ -21,6 +21,8 @@ import java.io.IOException;
 import java.sql.Connection;
 import java.sql.PreparedStatement;
 import java.sql.SQLException;
+import java.util.Collections;
+import java.util.Set;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -46,7 +48,11 @@ public class PhoenixRecordWriter<T extends DBWritable> extends RecordWriter<NullWritable, T> {
+        this(configuration, Collections.<String>emptySet());
+    }
+
+    public PhoenixRecordWriter(final Configuration configuration, Set<String> propsToIgnore) throws SQLException {
+        this.conn = ConnectionUtil.getOutputConnectionWithoutTheseProps(configuration, propsToIgnore);
 this.batchSize = PhoenixConfigurationUtil.getBatchSize(configuration);
 final String upsertQuery = 
PhoenixConfigurationUtil.getUpsertStatement(configuration);
 this.statement = this.conn.prepareStatement(upsertQuery);

http://git-wip-us.apache.org/repos/asf/phoenix/blob/2c4ca690/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/ConnectionUtil.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/ConnectionUtil.java b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/ConnectionUtil.java
index ada3816..56a5ef5 100644
--- 
a/phoenix-core/src/main/java/org/apac

[18/35] phoenix git commit: PHOENIX-4322 DESC primary key column with variable length does not work in SkipScanFilter

2018-01-31 Thread pboado
PHOENIX-4322 DESC primary key column with variable length does not work in 
SkipScanFilter


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/515f10d1
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/515f10d1
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/515f10d1

Branch: refs/heads/4.x-cdh5.11.2
Commit: 515f10d12546dabd80fe60cd73d320041076e26c
Parents: d790c70
Author: maryannxue 
Authored: Sun Nov 5 02:37:55 2017 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:48 2018 +

--
 .../it/java/org/apache/phoenix/end2end/SortOrderIT.java  | 11 ++-
 .../expression/RowValueConstructorExpression.java|  4 ++--
 2 files changed, 12 insertions(+), 3 deletions(-)
--
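The fix replaces the hard-coded `QueryConstants.SEPARATOR_BYTE` with `SchemaUtil.getSeparatorByte(...)`, because a variable-length key part stored DESC is bit-inverted on disk and its trailing separator must be inverted too. A minimal sketch of that rule, under the assumption (standard for Phoenix row keys) that the ASC separator is 0x00 and the DESC separator is 0xFF; the real `getSeparatorByte` also accounts for null values and row-key-order optimization:

```java
public class SeparatorByteSketch {
    static final byte ASC_SEPARATOR = (byte) 0x00;
    static final byte DESC_SEPARATOR = (byte) 0xFF;

    // A variable-length key part stored DESC is bit-inverted, so the 0x00
    // separator that follows it must be inverted to 0xFF as well; otherwise
    // the SkipScanFilter trims against the wrong boundary byte.
    static byte separatorFor(boolean descending) {
        return descending ? DESC_SEPARATOR : ASC_SEPARATOR;
    }

    public static void main(String[] args) {
        System.out.println(separatorFor(false)); // 0
        System.out.println(separatorFor(true));  // -1 (0xFF as a signed Java byte)
    }
}
```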


http://git-wip-us.apache.org/repos/asf/phoenix/blob/515f10d1/phoenix-core/src/it/java/org/apache/phoenix/end2end/SortOrderIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SortOrderIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SortOrderIT.java
index 655dbb1..3f749c1 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SortOrderIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SortOrderIT.java
@@ -167,7 +167,16 @@ public class SortOrderIT extends ParallelStatsDisabledIT {
         runQueryTest(ddl, upsert("oid", "code"), insertedRows, new Object[][]{{"o2", 2}}, new WhereCondition("oid", "IN", "('o2')"),
                 table);
     }
-
+
+    @Test
+    public void inDescCompositePK3() throws Exception {
+        String table = generateUniqueName();
+        String ddl = "CREATE table " + table + " (oid VARCHAR NOT NULL, code VARCHAR NOT NULL constraint pk primary key (oid DESC, code DESC))";
+        Object[][] insertedRows = new Object[][]{{"o1", "1"}, {"o2", "2"}, {"o3", "3"}};
+        runQueryTest(ddl, upsert("oid", "code"), insertedRows, new Object[][]{{"o2", "2"}, {"o1", "1"}}, new WhereCondition("(oid, code)", "IN", "(('o2', '2'), ('o1', '1'))"),
+                table);
+    }
+
 @Test
 public void likeDescCompositePK1() throws Exception {
 String table = generateUniqueName();

http://git-wip-us.apache.org/repos/asf/phoenix/blob/515f10d1/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java b/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
index 15f6e3e..9bb7234 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
@@ -199,8 +199,8 @@ public class RowValueConstructorExpression extends BaseCompoundExpression {
             // as otherwise we need it to ensure sort order is correct
             for (int k = expressionCount -1 ;
                     k >=0 &&  getChildren().get(k).getDataType() != null
-                      && !getChildren().get(k).getDataType().isFixedWidth()
-                      && outputBytes[outputSize-1] == QueryConstants.SEPARATOR_BYTE ; k--) {
+                      && !getChildren().get(k).getDataType().isFixedWidth()
+                      && outputBytes[outputSize-1] == SchemaUtil.getSeparatorByte(true, false, getChildren().get(k)) ; k--) {
 outputSize--;
 }
 ptr.set(outputBytes, 0, outputSize);



[03/35] phoenix git commit: PHOENIX-4551 Possible ColumnAlreadyExistsException is thrown from delete when autocommit off(Rajeshbabu)

2018-01-31 Thread pboado
PHOENIX-4551 Possible ColumnAlreadyExistsException is thrown from delete when 
autocommit off(Rajeshbabu)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/bf655187
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/bf655187
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/bf655187

Branch: refs/heads/4.x-cdh5.11.2
Commit: bf655187db159b03c61db516d7b55e25e8648012
Parents: 26c284c
Author: Rajeshbabu Chintaguntla 
Authored: Tue Jan 23 18:45:01 2018 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:48 2018 +

--
 .../org/apache/phoenix/end2end/DeleteIT.java| 29 
 .../apache/phoenix/compile/DeleteCompiler.java  | 17 
 2 files changed, 40 insertions(+), 6 deletions(-)
--
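The fix swaps the `List` of projected delete columns for a `LinkedHashSet`, so a column referenced by more than one index is projected once while insertion order (PK columns first) is preserved. That dedup behavior can be sketched self-contained:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class ProjectionDedupSketch {
    // LinkedHashSet drops a column added twice (e.g. v1, referenced by both
    // indexes in the new test) while preserving the order columns were added,
    // which a plain List would not -- the List caused the duplicate projection
    // that surfaced as ColumnAlreadyExistsException.
    static List<String> dedupePreservingOrder(List<String> columns) {
        Set<String> seen = new LinkedHashSet<>(columns);
        return new ArrayList<>(seen);
    }

    public static void main(String[] args) {
        List<String> projected = Arrays.asList("PK1", "V1", "V1", "V2");
        System.out.println(dedupePreservingOrder(projected)); // [PK1, V1, V2]
    }
}
```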


http://git-wip-us.apache.org/repos/asf/phoenix/blob/bf655187/phoenix-core/src/it/java/org/apache/phoenix/end2end/DeleteIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DeleteIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DeleteIT.java
index 9eac0af..e111e7a 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DeleteIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DeleteIT.java
@@ -20,6 +20,7 @@ package org.apache.phoenix.end2end;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertNotEquals;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 import java.sql.Connection;
 import java.sql.Date;
@@ -734,6 +735,34 @@ public class DeleteIT extends ParallelStatsDisabledIT {
 }
 
 }
+
+    @Test
+    public void testClientSideDeleteShouldNotFailWhenSameColumnPresentInMultipleIndexes()
+            throws Exception {
+        String tableName = generateUniqueName();
+        String indexName1 = generateUniqueName();
+        String indexName2 = generateUniqueName();
+        String ddl =
+                "CREATE TABLE IF NOT EXISTS "
+                        + tableName
+                        + " (pk1 DECIMAL NOT NULL, v1 VARCHAR, v2 VARCHAR CONSTRAINT PK PRIMARY KEY (pk1))";
+        String idx1 = "CREATE INDEX " + indexName1 + " ON " + tableName + "(v1)";
+        String idx2 = "CREATE INDEX " + indexName2 + " ON " + tableName + "(v1, v2)";
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            conn.createStatement().execute(ddl);
+            conn.createStatement().execute(idx1);
+            conn.createStatement().execute(idx2);
+            Statement stmt = conn.createStatement();
+            stmt.executeUpdate("UPSERT INTO " + tableName + " VALUES (1,'value', 'value2')");
+            conn.commit();
+            conn.setAutoCommit(false);
+            try {
+                conn.createStatement().execute("DELETE FROM " + tableName + " WHERE pk1 > 0");
+            } catch (Exception e) {
+                fail("Should not throw any exception");
+            }
+        }
+    }
 }
 
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/bf655187/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
index 7a880e9..fd80238 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/DeleteCompiler.java
@@ -25,6 +25,7 @@ import java.sql.SQLException;
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.Iterator;
+import java.util.LinkedHashSet;
 import java.util.List;
 import java.util.Set;
 
@@ -466,7 +467,7 @@ public class DeleteCompiler {
 for (PTable index : immutableIndexes) {
 selectColumnCount += index.getPKColumns().size() - pkColumnCount;
 }
-        List<PColumn> projectedColumns = Lists.newArrayListWithExpectedSize(selectColumnCount + pkColumnOffset);
+        Set<PColumn> projectedColumns = new LinkedHashSet<PColumn>(selectColumnCount + pkColumnOffset);
         List<AliasedNode> aliasedNodes = Lists.newArrayListWithExpectedSize(selectColumnCount);
 for (int i = isSalted ? 1 : 0; i < pkColumnOffset; i++) {
 PColumn column = table.getPKColumns().get(i);
@@ -487,8 +488,10 @@ public class DeleteCompiler {
 String columnName = columnInfo.getSecond();
             boolean hasNoColumnFamilies = table.getColumnFamilies().isEmpty();
             PColumn column = hasNoColumnFamilies ? table.getColumnForColumnName(columnName) : table.getColumnFamily(familyName).getPColumnForColumnName(columnName);
-  

[11/35] phoenix git commit: PHOENIX-4361: Remove redundant argument in separateAndValidateProperties in CQSI

2018-01-31 Thread pboado
PHOENIX-4361: Remove redundant argument in separateAndValidateProperties in CQSI

Signed-off-by: aertoria 


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/8743e162
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/8743e162
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/8743e162

Branch: refs/heads/4.x-cdh5.11.2
Commit: 8743e16248b5654c5e9d65c416345709fa44b3d3
Parents: 00f1ef8
Author: Chinmay Kulkarni 
Authored: Thu Nov 16 02:31:20 2017 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:48 2018 +

--
 .../org/apache/phoenix/query/ConnectionQueryServicesImpl.java   | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/8743e162/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index 35b85e4..a3a6c3a 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -1721,7 +1721,7 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
 Set tableDescriptors = Collections.emptySet();
 Set origTableDescriptors = Collections.emptySet();
 boolean nonTxToTx = false;
-        Pair<HTableDescriptor,HTableDescriptor> tableDescriptorPair = separateAndValidateProperties(table, stmtProperties, colFamiliesForPColumnsToBeAdded, families, tableProps);
+        Pair<HTableDescriptor,HTableDescriptor> tableDescriptorPair = separateAndValidateProperties(table, stmtProperties, colFamiliesForPColumnsToBeAdded, tableProps);
 HTableDescriptor tableDescriptor = tableDescriptorPair.getSecond();
 HTableDescriptor origTableDescriptor = tableDescriptorPair.getFirst();
 if (tableDescriptor != null) {
@@ -1939,7 +1939,8 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
         this.addCoprocessors(tableDescriptor.getName(), tableDescriptor, tableType, tableProps);
     }
 
-    private Pair<HTableDescriptor,HTableDescriptor> separateAndValidateProperties(PTable table, Map<String, List<Pair<String, Object>>> properties, Set<String> colFamiliesForPColumnsToBeAdded, List<Pair<byte[], Map<String, Object>>> families, Map<String, Object> tableProps) throws SQLException {
+    private Pair<HTableDescriptor,HTableDescriptor> separateAndValidateProperties(PTable table, Map<String, List<Pair<String, Object>>> properties,
+            Set<String> colFamiliesForPColumnsToBeAdded, Map<String, Object> tableProps) throws SQLException {
         Map<String, Map<String, Object>> stmtFamiliesPropsMap = new HashMap<>(properties.size());
         Map<String, Object> commonFamilyProps = new HashMap<>();
         boolean addingColumns = colFamiliesForPColumnsToBeAdded != null && !colFamiliesForPColumnsToBeAdded.isEmpty();



[04/35] phoenix git commit: PHOENIX-4509 Fix performance.py usage text (Artem Ervits)

2018-01-31 Thread pboado
PHOENIX-4509 Fix performance.py usage text (Artem Ervits)

Signed-off-by: Josh Elser 


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/d8e5f959
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/d8e5f959
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/d8e5f959

Branch: refs/heads/4.x-cdh5.11.2
Commit: d8e5f959a90eb98b50e888a8eb9a2cebeab4a18b
Parents: 6e5a8f7
Author: Josh Elser 
Authored: Wed Jan 3 17:21:50 2018 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:48 2018 +

--
 bin/performance.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/d8e5f959/bin/performance.py
--
diff --git a/bin/performance.py b/bin/performance.py
index c16dd5a..f61ad20 100755
--- a/bin/performance.py
+++ b/bin/performance.py
@@ -35,9 +35,9 @@ def delfile(filename):
 os.remove(filename)
 
 def usage():
-print "Performance script arguments not specified. Usage: performance.sh \
+print "Performance script arguments not specified. Usage: performance.py \
  "
-print "Example: performance.sh localhost 10"
+print "Example: performance.py localhost 10"
 
 
 def createFileWithContent(filename, content):



[13/35] phoenix git commit: Revert "PHOENIX-4523 phoenix.schema.isNamespaceMappingEnabled problem (Karan Mehta)"

2018-01-31 Thread pboado
Revert "PHOENIX-4523 phoenix.schema.isNamespaceMappingEnabled problem (Karan 
Mehta)"

This reverts commit 4a3435a


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/7296e510
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/7296e510
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/7296e510

Branch: refs/heads/4.x-cdh5.11.2
Commit: 7296e5109ea8f7197f2047e070cd95bf36291b98
Parents: 760d459
Author: Pedro Boado 
Authored: Thu Jan 25 01:08:59 2018 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:48 2018 +

--
 .../query/ConnectionQueryServicesImpl.java  | 32 +---
 .../org/apache/phoenix/util/UpgradeUtil.java|  2 --
 .../query/ConnectionQueryServicesImplTest.java  |  6 ++--
 3 files changed, 18 insertions(+), 22 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/7296e510/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index a3a6c3a..6d06087 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -2524,15 +2524,15 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
 }
 }
 
-    void createSysMutexTableIfNotExists(HBaseAdmin admin, ReadOnlyProps props) throws IOException, SQLException {
+    void createSysMutexTable(HBaseAdmin admin, ReadOnlyProps props) throws IOException, SQLException {
         try {
-            if (admin.tableExists(PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME) || admin.tableExists(TableName.valueOf(
-                    PhoenixDatabaseMetaData.SYSTEM_SCHEMA_NAME, PhoenixDatabaseMetaData.SYSTEM_MUTEX_TABLE_NAME))) {
+            final TableName mutexTableName = SchemaUtil.getPhysicalTableName(
+                    PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME, props);
+            List<TableName> systemTables = getSystemTableNames(admin);
+            if (systemTables.contains(mutexTableName)) {
                 logger.debug("System mutex table already appears to exist, not creating it");
                 return;
             }
-final TableName mutexTableName = SchemaUtil.getPhysicalTableName(
-PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME, props);
 HTableDescriptor tableDesc = new HTableDescriptor(mutexTableName);
 HColumnDescriptor columnDesc = new HColumnDescriptor(
 PhoenixDatabaseMetaData.SYSTEM_MUTEX_FAMILY_NAME_BYTES);
@@ -2548,17 +2548,10 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
 }
 } catch (TableExistsException e) {
 // Ignore
-        } catch (IOException e) {
-            if (!Iterables.isEmpty(Iterables.filter(Throwables.getCausalChain(e), AccessDeniedException.class)) ||
-                    !Iterables.isEmpty(Iterables.filter(Throwables.getCausalChain(e), org.apache.hadoop.hbase.TableNotFoundException.class))) {
-                // Ignore
-            } else {
-                throw e;
-            }
         }
 }
 
-    List<TableName> getSystemTableNamesInDefaultNamespace(HBaseAdmin admin) throws IOException {
+    List<TableName> getSystemTableNames(HBaseAdmin admin) throws IOException {
         return Lists.newArrayList(admin.listTableNames(QueryConstants.SYSTEM_SCHEMA_NAME + "\\..*"));
 }
 
@@ -2577,7 +2570,7 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
 
         // Catch the IOException to log the error message and then bubble it up for the client to retry.
         try {
-            createSysMutexTableIfNotExists(hbaseAdmin, ConnectionQueryServicesImpl.this.getProps());
+            createSysMutexTable(hbaseAdmin, ConnectionQueryServicesImpl.this.getProps());
 } catch (IOException exception) {
             logger.error("Failed to created SYSMUTEX table. Upgrade or migration is not possible without it. Please retry.");
 throw exception;
@@ -2629,7 +2622,7 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
                     !SchemaUtil.isNamespaceMappingEnabled(PTableType.SYSTEM,
                             ConnectionQueryServicesImpl.this.getProps())) {
                 try (HBaseAdmin admin = getAdmin()) {
-                    createSysMutexTableIfNotExists(admin, this.getProps());
+                    createSysMutexTable(admin, this.getProps());

[16/35] phoenix git commit: PHOENIX-672 Add GRANT and REVOKE commands using HBase AccessController

2018-01-31 Thread pboado
PHOENIX-672 Add GRANT and REVOKE commands using HBase AccessController


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/f94f4eb1
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/f94f4eb1
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/f94f4eb1

Branch: refs/heads/4.x-cdh5.11.2
Commit: f94f4eb10dc3d59de49af689f442d8d53f19f76f
Parents: 8468f80
Author: Karan Mehta 
Authored: Wed Nov 29 02:37:55 2017 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:48 2018 +

--
 .../phoenix/end2end/BasePermissionsIT.java  | 754 +++
 .../phoenix/end2end/ChangePermissionsIT.java| 269 +++
 .../end2end/SystemTablePermissionsIT.java   | 233 +-
 .../phoenix/end2end/TableDDLPermissionsIT.java  | 583 ++
 phoenix-core/src/main/antlr3/PhoenixSQL.g   |  30 +-
 .../coprocessor/PhoenixAccessController.java|  29 +-
 .../phoenix/exception/SQLExceptionCode.java |   1 +
 .../apache/phoenix/jdbc/PhoenixStatement.java   |  40 +-
 .../phoenix/parse/ChangePermsStatement.java | 102 +++
 .../apache/phoenix/parse/ParseNodeFactory.java  |   7 +-
 .../query/ConnectionQueryServicesImpl.java  |  24 +-
 .../apache/phoenix/query/QueryConstants.java|   1 +
 .../org/apache/phoenix/query/QueryServices.java |   2 -
 .../phoenix/query/QueryServicesOptions.java |   8 +-
 .../apache/phoenix/schema/MetaDataClient.java   | 138 
 .../schema/TablesNotInSyncException.java|  22 +
 .../org/apache/phoenix/util/SchemaUtil.java |  25 +-
 .../apache/phoenix/parse/QueryParserTest.java   |  46 +-
 18 files changed, 1546 insertions(+), 768 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/f94f4eb1/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
new file mode 100644
index 000..9d7ef1b
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
@@ -0,0 +1,754 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import com.google.common.base.Joiner;
+import com.google.common.base.Throwables;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.AuthUtil;
+import org.apache.hadoop.hbase.HBaseTestingUtility;
+import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.security.AccessDeniedException;
+import org.apache.hadoop.hbase.security.User;
+import org.apache.hadoop.hbase.security.access.AccessControlClient;
+import org.apache.hadoop.hbase.security.access.Permission;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
+import org.apache.phoenix.jdbc.PhoenixStatement;
+import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.QueryUtil;
+import org.junit.After;
+import org.junit.BeforeClass;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+
+import java.io.IOException;
+import java.lang.reflect.UndeclaredThrowableException;
+import java.security.PrivilegedExceptionAction;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Properties;
+import java.util.Set;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+impo

[07/35] phoenix git commit: PHOENIX-4449 Bundle a copy of Argparse-1.4.0 for installations that need it

2018-01-31 Thread pboado
http://git-wip-us.apache.org/repos/asf/phoenix/blob/bee4fbcf/bin/sqlline-thin.py
--
diff --git a/bin/sqlline-thin.py b/bin/sqlline-thin.py
index 47384d8..fecc96c 100755
--- a/bin/sqlline-thin.py
+++ b/bin/sqlline-thin.py
@@ -25,7 +25,14 @@ import sys
 import phoenix_utils
 import atexit
 import urlparse
-import argparse
+
+# import argparse
+try:
+import argparse
+except ImportError:
+current_dir = os.path.dirname(os.path.abspath(__file__))
+sys.path.append(os.path.join(current_dir, 'argparse-1.4.0'))
+import argparse
 
 global childProc
 childProc = None

http://git-wip-us.apache.org/repos/asf/phoenix/blob/bee4fbcf/bin/sqlline.py
--
diff --git a/bin/sqlline.py b/bin/sqlline.py
index 7a724de..4a676ee 100755
--- a/bin/sqlline.py
+++ b/bin/sqlline.py
@@ -24,7 +24,14 @@ import subprocess
 import sys
 import phoenix_utils
 import atexit
-import argparse
+
+# import argparse
+try:
+import argparse
+except ImportError:
+current_dir = os.path.dirname(os.path.abspath(__file__))
+sys.path.append(os.path.join(current_dir, 'argparse-1.4.0'))
+import argparse
 
 global childProc
 childProc = None
@@ -39,8 +46,9 @@ atexit.register(kill_child)
 phoenix_utils.setPath()
 
parser = argparse.ArgumentParser(description='Launches the Apache Phoenix Client.')
-# Positional argument 'zookeepers' is optional
-parser.add_argument('zookeepers', nargs='?', help='The ZooKeeper quorum string', default='localhost:2181:/hbase')
+# Positional argument 'zookeepers' is optional. The PhoenixDriver will automatically populate
+# this if it's not provided by the user (so, we want to leave a default value of empty)
+parser.add_argument('zookeepers', nargs='?', help='The ZooKeeper quorum string', default='')
 # Positional argument 'sqlfile' is optional
 parser.add_argument('sqlfile', nargs='?', help='A file of SQL commands to execute', default='')
 # Common arguments across sqlline.py and sqlline-thin.py
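The patch vendors argparse 1.4.0 next to the launcher scripts and falls back to it when the system Python lacks argparse. The same vendored-fallback pattern as a standalone helper, sketched under the assumption of a bundled directory next to the script (the helper name and directory are illustrative, not part of the patch):

```python
import os
import sys

def import_with_fallback(module_name, vendored_dir):
    """Try a normal import first; on ImportError, add a bundled copy
    (shipped next to this script) to sys.path and retry."""
    try:
        return __import__(module_name)
    except ImportError:
        here = os.path.dirname(os.path.abspath(__file__))
        sys.path.append(os.path.join(here, vendored_dir))
        return __import__(module_name)

# e.g. argparse = import_with_fallback('argparse', 'argparse-1.4.0')
```

The try/except form keeps the common case (a modern Python with argparse) on the fast path and only touches sys.path when the stdlib module is missing.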



[27/35] phoenix git commit: PHOENIX-4523 phoenix.schema.isNamespaceMappingEnabled problem (Karan Mehta)

2018-01-31 Thread pboado
PHOENIX-4523 phoenix.schema.isNamespaceMappingEnabled problem (Karan Mehta)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/319ff011
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/319ff011
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/319ff011

Branch: refs/heads/4.x-cdh5.11.2
Commit: 319ff01175f3f65acf85314d5d137496c8f1a043
Parents: ffee8c0
Author: James Taylor 
Authored: Fri Jan 12 00:22:09 2018 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:48 2018 +

--
 .../query/ConnectionQueryServicesImpl.java  | 35 ++--
 .../org/apache/phoenix/util/UpgradeUtil.java|  2 ++
 .../query/ConnectionQueryServicesImplTest.java  |  6 ++--
 3 files changed, 22 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/319ff011/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index 38be6af..5b7735e 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -2543,16 +2543,15 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
 }
 }
 
-void createSysMutexTable(HBaseAdmin admin, ReadOnlyProps props) throws IOException, SQLException {
+void createSysMutexTableIfNotExists(HBaseAdmin admin, ReadOnlyProps props) throws IOException, SQLException {
 try {
-final TableName mutexTableName = SchemaUtil.getPhysicalTableName(
-PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME, props);
-List systemTables = getSystemTableNames(admin);
-if (systemTables.contains(mutexTableName) || admin.tableExists( TableName.valueOf(
-PhoenixDatabaseMetaData.SYSTEM_SCHEMA_NAME,PhoenixDatabaseMetaData.SYSTEM_MUTEX_TABLE_NAME))) {
+if(admin.tableExists(PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME) || admin.tableExists(TableName.valueOf(
+PhoenixDatabaseMetaData.SYSTEM_SCHEMA_NAME,PhoenixDatabaseMetaData.SYSTEM_MUTEX_TABLE_NAME))) {
 logger.debug("System mutex table already appears to exist, not creating it");
 return;
 }
+final TableName mutexTableName = SchemaUtil.getPhysicalTableName(
+PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME, props);
 HTableDescriptor tableDesc = new HTableDescriptor(mutexTableName);
 HColumnDescriptor columnDesc = new HColumnDescriptor(
 PhoenixDatabaseMetaData.SYSTEM_MUTEX_FAMILY_NAME_BYTES);
@@ -2566,8 +2565,13 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
 put.add(PhoenixDatabaseMetaData.SYSTEM_MUTEX_FAMILY_NAME_BYTES, UPGRADE_MUTEX, UPGRADE_MUTEX_UNLOCKED);
 sysMutexTable.put(put);
 }
-} catch (TableExistsException | AccessDeniedException e) {
-// Ignore
+} catch (IOException e) {
+if(!Iterables.isEmpty(Iterables.filter(Throwables.getCausalChain(e), AccessDeniedException.class)) ||
+!Iterables.isEmpty(Iterables.filter(Throwables.getCausalChain(e), org.apache.hadoop.hbase.TableNotFoundException.class))) {
+// Ignore
+} else {
+throw e;
+}
 }catch(PhoenixIOException e){
 if(e.getCause()!=null && e.getCause() instanceof AccessDeniedException)
 {
@@ -2578,7 +2582,7 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
 }
 }
 
-List getSystemTableNames(HBaseAdmin admin) throws IOException {
+List getSystemTableNamesInDefaultNamespace(HBaseAdmin admin) throws IOException {
 return Lists.newArrayList(admin.listTableNames(QueryConstants.SYSTEM_SCHEMA_NAME + "\\..*"));
 }
 
@@ -2597,7 +2601,7 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
 
 // Catch the IOException to log the error message and then bubble it up for the client to retry.
 try {
-createSysMutexTable(hbaseAdmin, ConnectionQueryServicesImpl.this.getProps());
+createSysMutexTableIfNotExists(hbaseAdmin, ConnectionQueryServicesImpl.this.getProps());
 } catch (IOException exception) {
 logger.error("Failed to created SYSMUTEX table. Upgrade or migration is not possible without it. Please retry.");
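The new catch block uses Guava's Throwables.getCausalChain to decide whether an IOException wraps an AccessDeniedException or TableNotFoundException anywhere in its cause chain. The same causal-chain walk, sketched in Python as an illustration (the exception types below are stand-ins, not the HBase ones):

```python
def causal_chain(exc):
    """Yield an exception and every chained cause, analogous to
    Guava's Throwables.getCausalChain used in the hunk above."""
    seen = set()
    while exc is not None and id(exc) not in seen:
        seen.add(id(exc))       # guard against cause cycles
        yield exc
        exc = exc.__cause__ or exc.__context__

def chain_contains(exc, exc_type):
    """True if any exception in the causal chain is an instance of
    exc_type -- the condition under which the patch swallows the error."""
    return any(isinstance(e, exc_type) for e in causal_chain(exc))
```

Checking the whole chain matters because client libraries often rewrap a low-level permission error in a generic I/O exception before it reaches the caller.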

[06/35] phoenix git commit: PHOENIX-4446 Sequence table region opening failing because of property setting attempt on read-only configuration(Rajeshbabu)

2018-01-31 Thread pboado
PHOENIX-4446 Sequence table region opening failing because of property setting attempt on read-only configuration (Rajeshbabu)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/4ff394d7
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/4ff394d7
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/4ff394d7

Branch: refs/heads/4.x-cdh5.11.2
Commit: 4ff394d70aa0dbc58cb5290c47b39398fde891c1
Parents: 519cca9
Author: Rajeshbabu Chintaguntla 
Authored: Sat Dec 9 04:54:12 2017 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:48 2018 +

--
 .../java/org/apache/phoenix/hbase/index/write/IndexWriter.java  | 3 ++-
 .../main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java | 5 -
 2 files changed, 2 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/4ff394d7/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/IndexWriter.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/IndexWriter.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/IndexWriter.java
index 6b57025..4e5e182 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/IndexWriter.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/IndexWriter.java
@@ -33,6 +33,7 @@ import org.apache.hadoop.hbase.util.Pair;
 import org.apache.phoenix.hbase.index.exception.IndexWriteException;
 import org.apache.phoenix.hbase.index.table.HTableInterfaceReference;
 import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
+import org.apache.phoenix.index.PhoenixIndexFailurePolicy;
 
 import com.google.common.collect.ArrayListMultimap;
 import com.google.common.collect.Multimap;
@@ -88,7 +89,7 @@ public class IndexWriter implements Stoppable {
 Configuration conf = env.getConfiguration();
 try {
   IndexFailurePolicy committer =
-  conf.getClass(INDEX_FAILURE_POLICY_CONF_KEY, KillServerOnFailurePolicy.class,
+  conf.getClass(INDEX_FAILURE_POLICY_CONF_KEY, PhoenixIndexFailurePolicy.class,
 IndexFailurePolicy.class).newInstance();
   return committer;
 } catch (InstantiationException e) {

http://git-wip-us.apache.org/repos/asf/phoenix/blob/4ff394d7/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java 
b/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java
index 679c5df..8b1e2f1 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilder.java
@@ -102,11 +102,6 @@ public class PhoenixIndexBuilder extends NonTxIndexBuilder {
 @Override
 public void setup(RegionCoprocessorEnvironment env) throws IOException {
 super.setup(env);
-Configuration conf = env.getConfiguration();
-// Install handler that will attempt to disable the index first before killing the region
-// server
-conf.setIfUnset(IndexWriter.INDEX_FAILURE_POLICY_CONF_KEY,
-PhoenixIndexFailurePolicy.class.getName());
 }
 
 @Override
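The fix replaces a runtime conf.setIfUnset(...) (which fails on a read-only Configuration) with a different default in the conf.getClass(...) lookup itself. A minimal Python sketch of that config-driven default-class lookup; the policy classes, registry, and key names below are illustrative, not Phoenix's actual API:

```python
class KillServerOnFailurePolicy:
    def handle_failure(self):
        return "abort region server"

class PhoenixIndexFailurePolicy:
    def handle_failure(self):
        return "disable index, keep serving"

# registry stands in for classloading by name
POLICIES = {
    "kill": KillServerOnFailurePolicy,
    "phoenix": PhoenixIndexFailurePolicy,
}

def load_failure_policy(conf, key="index.failure.policy", default="phoenix"):
    # conf is a dict standing in for Hadoop's Configuration; this mirrors
    # conf.getClass(key, PhoenixIndexFailurePolicy.class, ...).newInstance()
    return POLICIES[conf.get(key, default)]()
```

Pushing the default into the read path keeps the Configuration untouched, which is the point of the fix: region opening only reads configuration, it never mutates it.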



[26/35] phoenix git commit: PHOENIX-4528 PhoenixAccessController checks permissions only at table level when creating views

2018-01-31 Thread pboado
PHOENIX-4528 PhoenixAccessController checks permissions only at table level when creating views


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/6a85b11e
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/6a85b11e
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/6a85b11e

Branch: refs/heads/4.x-cdh5.11.2
Commit: 6a85b11edc90c37e0ffe053319fe6a86f8bb00d2
Parents: 319ff01
Author: Karan Mehta 
Authored: Sun Jan 14 01:19:22 2018 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:48 2018 +

--
 .../phoenix/end2end/BasePermissionsIT.java  |  4 +
 .../phoenix/end2end/ChangePermissionsIT.java| 26 +-
 .../coprocessor/PhoenixAccessController.java| 91 +---
 3 files changed, 88 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/6a85b11e/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
index 9d7ef1b..d33d538 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
@@ -746,6 +746,10 @@ public class BasePermissionsIT extends BaseTest {
 }
 }
 
+String surroundWithDoubleQuotes(String input) {
+return "\"" + input + "\"";
+}
+
 void validateAccessDeniedException(AccessDeniedException ade) {
 String msg = ade.getMessage();
 assertTrue("Exception contained unexpected message: '" + msg + "'",
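The surroundWithDoubleQuotes helper added above wraps an identifier in double quotes so that names which would otherwise be reserved (such as the default namespace) parse as identifiers. A hedged sketch of the same idea; the embedded-quote escaping here is an extra safety measure of this illustration, not part of the patch:

```python
def quote_identifier(name):
    """Wrap a SQL identifier in double quotes, doubling any embedded
    quotes so the result stays a single valid quoted identifier."""
    return '"' + name.replace('"', '""') + '"'
```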

http://git-wip-us.apache.org/repos/asf/phoenix/blob/6a85b11e/phoenix-core/src/it/java/org/apache/phoenix/end2end/ChangePermissionsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ChangePermissionsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ChangePermissionsIT.java
index 2bf7fe1..a30f01f 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ChangePermissionsIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ChangePermissionsIT.java
@@ -145,7 +145,7 @@ public class ChangePermissionsIT extends BasePermissionsIT {
 verifyAllowed(createSchema(SCHEMA_NAME), superUser1);
 verifyAllowed(grantPermissions("C", regularUser1, SCHEMA_NAME, true), superUser1);
 } else {
-verifyAllowed(grantPermissions("C", regularUser1, "\"" + SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE + "\"", true), superUser1);
+verifyAllowed(grantPermissions("C", regularUser1, surroundWithDoubleQuotes(SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE), true), superUser1);
 }
 
 // Create new table. Create indexes, views and view indexes on top of it. Verify the contents by querying it
@@ -236,7 +236,7 @@ public class ChangePermissionsIT extends BasePermissionsIT {
 verifyAllowed(createSchema(SCHEMA_NAME), superUser1);
 verifyAllowed(grantPermissions("C", regularUser1, SCHEMA_NAME, true), superUser1);
 } else {
-verifyAllowed(grantPermissions("C", regularUser1, "\"" + SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE + "\"", true), superUser1);
+verifyAllowed(grantPermissions("C", regularUser1, surroundWithDoubleQuotes(SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE), true), superUser1);
 }
 
 // Create MultiTenant Table (View Index Table should be automatically created)
@@ -267,4 +267,26 @@ public class ChangePermissionsIT extends BasePermissionsIT {
 verifyAllowed(readMultiTenantTableWithIndex(VIEW1_TABLE_NAME, "o1"), regularUser2);
 verifyAllowed(readMultiTenantTableWithoutIndex(VIEW2_TABLE_NAME, "o2"), regularUser2);
 }
+
+/**
+ * Grant RX permissions on the schema to regularUser1,
+ * Creating view on a table with that schema by regularUser1 should be allowed
+ */
+@Test
+public void testCreateViewOnTableWithRXPermsOnSchema() throws Exception {
+
+startNewMiniCluster();
+grantSystemTableAccess(superUser1, regularUser1, regularUser2, regularUser3);
+
+if(isNamespaceMapped) {
+verifyAllowed(createSchema(SCHEMA_NAME), superUser1);
+verifyAllowed(createTable(FULL_TABLE_NAME), superUser1);
+verifyAllowed(grantPermissions("RX", regularUser1, SCHEMA_NAME, true), superUser1);
+} else {
+verifyAllowed(createTable(FULL_TABLE_NAME), superUser1);
+verifyAllowed(grantPermissions("RX", regularUser1, surroundWithDoubleQuotes(SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE), true),

[09/35] phoenix git commit: PHOENIX-4449 Bundle a copy of Argparse-1.4.0 for installations that need it

2018-01-31 Thread pboado
PHOENIX-4449 Bundle a copy of Argparse-1.4.0 for installations that need it


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/bee4fbcf
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/bee4fbcf
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/bee4fbcf

Branch: refs/heads/4.x-cdh5.11.2
Commit: bee4fbcfd6250e1da33e63f2b37a7c1260c72c09
Parents: 6add797
Author: Josh Elser 
Authored: Tue Dec 12 00:18:25 2017 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:48 2018 +

--
 NOTICE |2 +
 bin/argparse-1.4.0/argparse.py | 2392 +++
 bin/sqlline-thin.py|9 +-
 bin/sqlline.py |   14 +-
 4 files changed, 2413 insertions(+), 4 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/bee4fbcf/NOTICE
--
diff --git a/NOTICE b/NOTICE
index eb2eef5..8b2b647 100644
--- a/NOTICE
+++ b/NOTICE
@@ -19,3 +19,5 @@ The file bin/daemon.py is based on the file of the same name in python-daemon 2.
 # Copyright © 2003 Clark Evans
 # Copyright © 2002 Noah Spurrier
 # Copyright © 2001 Jürgen Hermann
+
+The file bin/argparse-1.4.0/argparse.py is (c) 2006-2009 Steven J. Bethard.



[32/35] phoenix git commit: PHOENIX-4488 Cache config parameters for MetaDataEndPointImpl during initialization

2018-01-31 Thread pboado
PHOENIX-4488 Cache config parameters for MetaDataEndPointImpl during initialization


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/80f195f2
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/80f195f2
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/80f195f2

Branch: refs/heads/4.x-cdh5.11.2
Commit: 80f195f25d1d65913875bb7da8b1141e6f5fd6c2
Parents: 1229b1e
Author: James Taylor 
Authored: Fri Dec 22 19:36:44 2017 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:49 2018 +

--
 .../coprocessor/MetaDataEndpointImplTest.java   | 44 
 .../coprocessor/MetaDataEndpointImpl.java   | 30 ++---
 2 files changed, 16 insertions(+), 58 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/80f195f2/phoenix-core/src/it/java/org/apache/phoenix/coprocessor/MetaDataEndpointImplTest.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/coprocessor/MetaDataEndpointImplTest.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/coprocessor/MetaDataEndpointImplTest.java
deleted file mode 100644
index 2c558d8..000
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/coprocessor/MetaDataEndpointImplTest.java
+++ /dev/null
@@ -1,44 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- * http://www.apache.org/licenses/LICENSE-2.0
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.phoenix.coprocessor;
-
-import com.google.common.collect.Lists;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.phoenix.query.QueryServices;
-import org.apache.phoenix.schema.PTable;
-import org.apache.phoenix.schema.PTableType;
-import org.junit.Test;
-
-import java.util.List;
-
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertTrue;
-import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.when;
-
-public class MetaDataEndpointImplTest {
-
-@Test
-public void testExceededIndexQuota() throws Exception {
-PTable parentTable = mock(PTable.class);
-List indexes = Lists.newArrayList(mock(PTable.class), mock(PTable.class));
-when(parentTable.getIndexes()).thenReturn(indexes);
-Configuration configuration = new Configuration();
-assertFalse(MetaDataEndpointImpl.execeededIndexQuota(PTableType.INDEX, parentTable, configuration));
-configuration.setInt(QueryServices.MAX_INDEXES_PER_TABLE, 1);
-assertTrue(MetaDataEndpointImpl.execeededIndexQuota(PTableType.INDEX, parentTable, configuration));
-}
-}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/phoenix/blob/80f195f2/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index bf8ba39..47ad7cf 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -242,7 +242,6 @@ import org.apache.phoenix.util.UpgradeUtil;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.annotations.VisibleForTesting;
 import com.google.common.cache.Cache;
 import com.google.common.collect.Lists;
 import com.google.common.collect.Maps;
@@ -472,6 +471,10 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 
 private PhoenixMetaDataCoprocessorHost phoenixAccessCoprocessorHost;
 private boolean accessCheckEnabled;
+private boolean blockWriteRebuildIndex;
+private int maxIndexesPerTable;
+private boolean isTablesMappingEnabled;
+
 
 /**
  * Stores a reference to the coprocessor environment provided by the
@@ -492,8 +495,16 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 }
 
 phoenixAc
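This commit moves per-request Configuration reads into fields populated once when the coprocessor starts. A minimal sketch of that caching pattern; the keys, defaults, and method names below are illustrative stand-ins, not Phoenix's actual configuration:

```python
class MetaDataEndpoint:
    """Read configuration once at start-up and keep the values in
    fields, instead of consulting the Configuration on every request."""

    def __init__(self, conf):
        # conf is a plain dict standing in for Hadoop's Configuration
        self.access_check_enabled = conf.get("phoenix.acls.enabled", False)
        self.max_indexes_per_table = conf.get("phoenix.index.max.per.table", 10)

    def exceeded_index_quota(self, existing_indexes):
        # hot path reads the cached field, not the Configuration
        return len(existing_indexes) >= self.max_indexes_per_table
```

The trade-off is the usual one for cached configuration: lookups on the request path become cheap field reads, but changes to the configuration only take effect when the endpoint is reinitialized.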

[35/35] phoenix git commit: PHOENIX-4560 ORDER BY with GROUP BY doesn't work if there is WHERE on pk column

2018-01-31 Thread pboado
PHOENIX-4560 ORDER BY with GROUP BY doesn't work if there is WHERE on pk column


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/9994059a
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/9994059a
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/9994059a

Branch: refs/heads/4.x-cdh5.11.2
Commit: 9994059a049122415464aa329cdfa126ae493de3
Parents: e5bfd0d
Author: James Taylor 
Authored: Fri Jan 26 00:43:06 2018 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:26:11 2018 +

--
 .../org/apache/phoenix/end2end/OrderByIT.java   | 111 +++
 .../org/apache/phoenix/compile/ScanRanges.java  |   5 -
 .../phoenix/compile/QueryCompilerTest.java  |  15 +++
 3 files changed, 126 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/9994059a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
index ebbeeb4..3bce9c7 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
@@ -39,6 +39,7 @@ import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.Properties;
 
+import org.apache.phoenix.jdbc.PhoenixStatement;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.junit.Test;
 
@@ -544,6 +545,116 @@ public class OrderByIT extends ParallelStatsDisabledIT {
 }
 
 @Test
+public void testAggregateOrderBy() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+Connection conn = DriverManager.getConnection(getUrl(), props);
+String tableName = generateUniqueName();
+String ddl = "create table " + tableName + " (ID VARCHAR NOT NULL PRIMARY KEY, VAL1 VARCHAR, VAL2 INTEGER)";
+conn.createStatement().execute(ddl);
+
+conn.createStatement().execute("upsert into " + tableName + " values ('ABC','aa123', 11)");
+conn.createStatement().execute("upsert into " + tableName + " values ('ABD','ba124', 1)");
+conn.createStatement().execute("upsert into " + tableName + " values ('ABE','cf125', 13)");
+conn.createStatement().execute("upsert into " + tableName + " values ('ABF','dan126', 4)");
+conn.createStatement().execute("upsert into " + tableName + " values ('ABG','elf127', 15)");
+conn.createStatement().execute("upsert into " + tableName + " values ('ABH','fan128', 6)");
+conn.createStatement().execute("upsert into " + tableName + " values ('AAA','get211', 100)");
+conn.createStatement().execute("upsert into " + tableName + " values ('AAB','hat212', 7)");
+conn.createStatement().execute("upsert into " + tableName + " values ('AAC','aap12', 2)");
+conn.createStatement().execute("upsert into " + tableName + " values ('AAD','ball12', 3)");
+conn.createStatement().execute("upsert into " + tableName + " values ('AAE','inn2110', 13)");
+conn.createStatement().execute("upsert into " + tableName + " values ('AAF','key2112', 40)");
+conn.commit();
+
+ResultSet rs;
+PhoenixStatement stmt = conn.createStatement().unwrap(PhoenixStatement.class);
+rs = stmt.executeQuery("select distinct ID, VAL1, VAL2 from " + tableName + " where ID in ('ABC','ABD','ABE','ABF','ABG','ABH','AAA', 'AAB', 'AAC','AAD','AAE','AAF') order by VAL1");
+assertFalse(stmt.getQueryPlan().getOrderBy().getOrderByExpressions().isEmpty());
+assertTrue(rs.next());
+assertEquals("ABC", rs.getString(1));
+assertEquals("aa123", rs.getString(2));
+assertTrue(rs.next());
+assertEquals("aap12", rs.getString(2));
+assertTrue(rs.next());
+assertEquals("ba124", rs.getString(2));
+assertTrue(rs.next());
+assertEquals("ball12", rs.getString(2));
+assertTrue(rs.next());
+assertEquals("cf125", rs.getString(2));
+assertTrue(rs.next());
+assertEquals("dan126", rs.getString(2));
+assertTrue(rs.next());
+assertEquals("elf127", rs.getString(2));
+assertTrue(rs.next());
+assertEquals("fan128", rs.getString(2));
+assertTrue(rs.next());
+assertEquals("get211", rs.getString(2));
+assertTrue(rs.next());
+assertEquals("hat212", rs.getString(2));
+assertTrue(rs.next());
+assertEquals("inn2110", rs.getString(2));
+assertTrue(rs.next());
+assertEquals("AAF", rs.getString(1));
+assertEquals("key2112", rs.getString(2));
+assertFalse(rs.next())

[12/35] phoenix git commit: PHOENIX-4386 Calculate the estimatedSize of MutationState using Map> mutations

2018-01-31 Thread pboado
PHOENIX-4386 Calculate the estimatedSize of MutationState using Map> mutations


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/760d4590
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/760d4590
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/760d4590

Branch: refs/heads/4.x-cdh5.11.2
Commit: 760d4590f46edfb4c602a48ee1609f739c44e40b
Parents: 8743e16
Author: Thomas D'Silva 
Authored: Fri Nov 17 19:11:43 2017 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:48 2018 +

--
 .../apache/phoenix/end2end/MutationStateIT.java | 144 +++
 .../org/apache/phoenix/end2end/QueryMoreIT.java |  42 --
 .../apache/phoenix/compile/DeleteCompiler.java  |   6 +-
 .../apache/phoenix/compile/UpsertCompiler.java  |   4 +-
 .../apache/phoenix/execute/MutationState.java   |  50 +--
 .../org/apache/phoenix/util/KeyValueUtil.java   |  51 ++-
 6 files changed, 201 insertions(+), 96 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/760d4590/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutationStateIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutationStateIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutationStateIT.java
new file mode 100644
index 000..2d5f360
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MutationStateIT.java
@@ -0,0 +1,144 @@
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.SQLException;
+import java.sql.Statement;
+import java.util.Properties;
+
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.execute.MutationState;
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.query.QueryServices;
+import org.junit.Test;
+
+public class MutationStateIT extends ParallelStatsDisabledIT {
+
+private static final String DDL =
+" (ORGANIZATION_ID CHAR(15) NOT NULL, SCORE DOUBLE, "
++ "ENTITY_ID CHAR(15) NOT NULL, TAGS VARCHAR, CONSTRAINT PAGE_SNAPSHOT_PK "
++ "PRIMARY KEY (ORGANIZATION_ID, ENTITY_ID DESC)) MULTI_TENANT=TRUE";
+
+private void upsertRows(PhoenixConnection conn, String fullTableName) throws SQLException {
+PreparedStatement stmt =
+conn.prepareStatement("upsert into " + fullTableName
++ " (organization_id, entity_id, score) values (?,?,?)");
+for (int i = 0; i < 1; i++) {
+stmt.setString(1, "" + i);
+stmt.setString(2, "" + i);
+stmt.setInt(3, 1);
+stmt.execute();
+}
+}
+
+@Test
+public void testMaxMutationSize() throws Exception {
+Properties connectionProperties = new Properties();
+connectionProperties.setProperty(QueryServices.MAX_MUTATION_SIZE_ATTRIB, "3");
+connectionProperties.setProperty(QueryServices.MAX_MUTATION_SIZE_BYTES_ATTRIB, "100");
+PhoenixConnection connection =
+(PhoenixConnection) DriverManager.getConnection(getUrl(), connectionProperties);
+String fullTableName = generateUniqueName();
+try (Statement stmt = connection.createStatement()) {
+stmt.execute(
+"CREATE TABLE " + fullTableName + DDL);
+}
+try {
+upsertRows(connection, fullTableName);
+fail();
+} catch (SQLException e) {
+assertEquals(SQLExceptionCode.MAX_MUTATION_SIZE_EXCEEDED.getErrorCode(),
+e.getErrorCode());
+}
+
+// set the max mutation size (bytes) to a low value
+connectionProperties.setProperty(QueryServices.MAX_MUTATION_SIZE_ATTRIB, "1000");
+connectionProperties.setProperty(QueryServices.MAX_MUTATION_SIZE_BYTES_ATTRIB, "4");
+connection =
+(PhoenixConnection) DriverManager.getConnection(getUrl(), connectionProperties);
+try {
+upsertRows(connection, fullTableName);
+fail();
+} catch (SQLException e) {
+assertEquals(SQLExceptionCode.MAX_MUTATION_SIZE_BYTES_EXCEEDED.getErrorCode(),
+e.getErrorCode());
+}
+}
+
+@Test
+public void testMutationEstimatedSize() throws Exception {
+PhoenixConnection conn = (PhoenixConnection) 
DriverManager.getConnection(getUrl());
+conn.setAutoCommit(false);
+String fullTableName = generateUniqueName();
+try (Statement stmt = conn.createStatement()) {
+stmt.execute

[30/35] phoenix git commit: PHOENIX-4424 Allow users to create "DEFAULT" and "HBASE" Schema (Uppercase Schema Names)

2018-01-31 Thread pboado
PHOENIX-4424 Allow users to create "DEFAULT" and "HBASE" Schema (Uppercase Schema Names)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/6e5a8f76
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/6e5a8f76
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/6e5a8f76

Branch: refs/heads/4.x-cdh5.11.2
Commit: 6e5a8f76e0171cb0a2eecdaf84267c7c62a54bad
Parents: 2c4ca69
Author: Karan Mehta 
Authored: Sat Nov 4 03:13:53 2017 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:48 2018 +

--
 .../phoenix/end2end/ChangePermissionsIT.java|  5 +-
 .../apache/phoenix/end2end/CreateSchemaIT.java  | 64 ++--
 phoenix-core/src/main/antlr3/PhoenixSQL.g   |  2 +-
 .../phoenix/parse/CreateSchemaStatement.java|  2 +-
 .../apache/phoenix/query/QueryConstants.java|  1 -
 .../apache/phoenix/schema/MetaDataClient.java   |  8 ++-
 .../org/apache/phoenix/util/SchemaUtil.java |  5 +-
 .../apache/phoenix/parse/QueryParserTest.java   | 13 
 8 files changed, 73 insertions(+), 27 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/6e5a8f76/phoenix-core/src/it/java/org/apache/phoenix/end2end/ChangePermissionsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ChangePermissionsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ChangePermissionsIT.java
index c023440..2bf7fe1 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ChangePermissionsIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ChangePermissionsIT.java
@@ -23,6 +23,7 @@ import org.apache.hadoop.hbase.security.User;
 import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
 import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.schema.TableNotFoundException;
+import org.apache.phoenix.util.SchemaUtil;
 import org.junit.Test;
 import org.junit.experimental.categories.Category;
 
@@ -144,7 +145,7 @@ public class ChangePermissionsIT extends BasePermissionsIT {
             verifyAllowed(createSchema(SCHEMA_NAME), superUser1);
             verifyAllowed(grantPermissions("C", regularUser1, SCHEMA_NAME, true), superUser1);
         } else {
-            verifyAllowed(grantPermissions("C", regularUser1, "\"" + QueryConstants.HBASE_DEFAULT_SCHEMA_NAME + "\"", true), superUser1);
+            verifyAllowed(grantPermissions("C", regularUser1, "\"" + SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE + "\"", true), superUser1);
         }
 
 // Create new table. Create indexes, views and view indexes on top of 
it. Verify the contents by querying it
@@ -235,7 +236,7 @@ public class ChangePermissionsIT extends BasePermissionsIT {
             verifyAllowed(createSchema(SCHEMA_NAME), superUser1);
             verifyAllowed(grantPermissions("C", regularUser1, SCHEMA_NAME, true), superUser1);
         } else {
-            verifyAllowed(grantPermissions("C", regularUser1, "\"" + QueryConstants.HBASE_DEFAULT_SCHEMA_NAME + "\"", true), superUser1);
+            verifyAllowed(grantPermissions("C", regularUser1, "\"" + SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE + "\"", true), superUser1);
         }
 
 // Create MultiTenant Table (View Index Table should be automatically 
created)

http://git-wip-us.apache.org/repos/asf/phoenix/blob/6e5a8f76/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateSchemaIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateSchemaIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateSchemaIT.java
index fe09dcd..8002dc1 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateSchemaIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateSchemaIT.java
@@ -43,31 +43,61 @@ public class CreateSchemaIT extends ParallelStatsDisabledIT {
         Properties props = PropertiesUtil.deepCopy(TestUtil.TEST_PROPERTIES);
         props.setProperty(QueryServices.IS_NAMESPACE_MAPPING_ENABLED, Boolean.toString(true));
         String schemaName = generateUniqueName();
-        String ddl = "CREATE SCHEMA " + schemaName;
+        String schemaName1 = schemaName.toLowerCase();
+        String schemaName2 = schemaName.toLowerCase();
+        // Create unique name schema and verify that it exists
+        // ddl1 should create lowercase schemaName since it is passed in with double-quotes
+        // ddl2 should create uppercase schemaName since Phoenix upper-cases identifiers without quotes
+        // Both the statements should succeed
+        String ddl1 = "CREATE SCHEMA \"" + schemaName1 + "\"";
+        String ddl2 = "CREATE SCHEMA " + schemaName2;
         try (Connection conn = Driver
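The comments in the hunk above rely on Phoenix's SQL identifier rules: a double-quoted identifier keeps its case, while an unquoted one is normalized to upper case, which is why both CREATE SCHEMA statements can succeed. A self-contained sketch of that rule — not Phoenix's actual SchemaUtil implementation:

```java
// Illustrative SQL identifier case handling: quoted identifiers are
// case-sensitive, unquoted ones are normalized to upper case.
public class IdentifierCaseSketch {
    public static String normalize(String identifier) {
        if (identifier == null) {
            return null;
        }
        // "s1" stays s1: strip the quotes and keep the original case.
        if (identifier.length() >= 2
                && identifier.startsWith("\"") && identifier.endsWith("\"")) {
            return identifier.substring(1, identifier.length() - 1);
        }
        // s1 becomes S1: unquoted identifiers are upper-cased.
        return identifier.toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(normalize("\"s1\"")); // s1
        System.out.println(normalize("s1"));     // S1
    }
}
```

Under this rule, CREATE SCHEMA "s1" and CREATE SCHEMA s1 name two different schemas (s1 and S1), so the test's two DDL statements do not collide.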

[05/35] phoenix git commit: PHOENIX-4446 Sequence table region opening failing because of property setting attempt on read-only configuration-addendum(Rajeshbabu)

2018-01-31 Thread pboado
PHOENIX-4446 Sequence table region opening failing because of property setting 
attempt on read-only configuration-addendum(Rajeshbabu)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/6add7973
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/6add7973
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/6add7973

Branch: refs/heads/4.x-cdh5.11.2
Commit: 6add7973368107494725d6eb9f3bc43ea4674f58
Parents: 4ff394d
Author: Rajeshbabu Chintaguntla 
Authored: Tue Dec 12 10:13:29 2017 +
Committer: Pedro Boado 
Committed: Wed Jan 31 22:24:48 2018 +

--
 .../wal/WALReplayWithIndexWritesAndCompressedWALIT.java| 2 ++
 1 file changed, 2 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/6add7973/phoenix-core/src/it/java/org/apache/hadoop/hbase/regionserver/wal/WALReplayWithIndexWritesAndCompressedWALIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/hadoop/hbase/regionserver/wal/WALReplayWithIndexWritesAndCompressedWALIT.java
 
b/phoenix-core/src/it/java/org/apache/hadoop/hbase/regionserver/wal/WALReplayWithIndexWritesAndCompressedWALIT.java
index a7f17ec..542e640 100644
--- 
a/phoenix-core/src/it/java/org/apache/hadoop/hbase/regionserver/wal/WALReplayWithIndexWritesAndCompressedWALIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/hadoop/hbase/regionserver/wal/WALReplayWithIndexWritesAndCompressedWALIT.java
@@ -60,6 +60,7 @@ import org.apache.phoenix.hbase.index.covered.ColumnGroup;
 import org.apache.phoenix.hbase.index.covered.CoveredColumn;
 import 
org.apache.phoenix.hbase.index.covered.CoveredColumnIndexSpecifierBuilder;
 import org.apache.phoenix.hbase.index.util.TestIndexManagementUtil;
+import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.util.ConfigUtil;
 import org.junit.After;
 import org.junit.Before;
@@ -100,6 +101,7 @@ public class WALReplayWithIndexWritesAndCompressedWALIT {
     setupCluster();
     Path hbaseRootDir = UTIL.getDataTestDir();
     this.conf = HBaseConfiguration.create(UTIL.getConfiguration());
+    this.conf.setBoolean(QueryServices.INDEX_FAILURE_THROW_EXCEPTION_ATTRIB, false);
     this.fs = UTIL.getDFSCluster().getFileSystem();
     this.hbaseRootDir = new Path(this.conf.get(HConstants.HBASE_DIR));
     this.oldLogDir = new Path(this.hbaseRootDir, HConstants.HREGION_OLDLOGDIR_NAME);



[2/2] phoenix git commit: PHOENIX-4488 Cache config parameters for MetaDataEndPointImpl during initialization

2018-01-31 Thread pboado
PHOENIX-4488 Cache config parameters for MetaDataEndPointImpl during 
initialization


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/afe21dc7
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/afe21dc7
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/afe21dc7

Branch: refs/heads/4.x-HBase-1.2
Commit: afe21dc72475aebd81a97d347c966dfc69cd5a9a
Parents: eb9de14
Author: James Taylor 
Authored: Fri Dec 22 19:36:44 2017 +
Committer: Pedro Boado 
Committed: Wed Jan 31 20:56:21 2018 +

--
 .../coprocessor/MetaDataEndpointImplTest.java   | 44 
 .../coprocessor/MetaDataEndpointImpl.java   | 30 ++---
 2 files changed, 16 insertions(+), 58 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/afe21dc7/phoenix-core/src/it/java/org/apache/phoenix/coprocessor/MetaDataEndpointImplTest.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/coprocessor/MetaDataEndpointImplTest.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/coprocessor/MetaDataEndpointImplTest.java
deleted file mode 100644
index 2c558d8..0000000
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/coprocessor/MetaDataEndpointImplTest.java
+++ /dev/null
@@ -1,44 +0,0 @@
-/**
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- * http://www.apache.org/licenses/LICENSE-2.0
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.phoenix.coprocessor;
-
-import com.google.common.collect.Lists;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.phoenix.query.QueryServices;
-import org.apache.phoenix.schema.PTable;
-import org.apache.phoenix.schema.PTableType;
-import org.junit.Test;
-
-import java.util.List;
-
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertTrue;
-import static org.mockito.Mockito.mock;
-import static org.mockito.Mockito.when;
-
-public class MetaDataEndpointImplTest {
-
-@Test
-public void testExceededIndexQuota() throws Exception {
-PTable parentTable = mock(PTable.class);
-List indexes = Lists.newArrayList(mock(PTable.class), 
mock(PTable.class));
-when(parentTable.getIndexes()).thenReturn(indexes);
-Configuration configuration = new Configuration();
-assertFalse(MetaDataEndpointImpl.execeededIndexQuota(PTableType.INDEX, 
parentTable, configuration));
-configuration.setInt(QueryServices.MAX_INDEXES_PER_TABLE, 1);
-assertTrue(MetaDataEndpointImpl.execeededIndexQuota(PTableType.INDEX, 
parentTable, configuration));
-}
-}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/phoenix/blob/afe21dc7/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index bf8ba39..47ad7cf 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -242,7 +242,6 @@ import org.apache.phoenix.util.UpgradeUtil;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.google.common.annotations.VisibleForTesting;
 import com.google.common.cache.Cache;
 import com.google.common.collect.Lists;
 import com.google.common.collect.Maps;
@@ -472,6 +471,10 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
 
     private PhoenixMetaDataCoprocessorHost phoenixAccessCoprocessorHost;
     private boolean accessCheckEnabled;
+    private boolean blockWriteRebuildIndex;
+    private int maxIndexesPerTable;
+    private boolean isTablesMappingEnabled;
+
 
     /**
      * Stores a reference to the coprocessor environment provided by the
@@ -492,8 +495,16 @@ public class MetaDataEndpointImpl extends MetaDataProtocol implements Coprocesso
     }
 
     phoenixAc
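PHOENIX-4488's change reads the configuration once when the coprocessor starts and caches the values in the new fields shown above, instead of consulting the Configuration on every request. A minimal sketch of that pattern, under illustrative property names and a plain Properties object rather than Phoenix's actual keys or start() code:

```java
import java.util.Properties;

// Illustrative: parse configuration once at startup, then serve requests
// from plain fields instead of re-reading the config on the hot path.
public class CachedConfigEndpoint {
    private boolean accessCheckEnabled;
    private int maxIndexesPerTable;

    // Analogous to a coprocessor start(env): runs once per region open.
    public void start(Properties conf) {
        accessCheckEnabled = Boolean.parseBoolean(conf.getProperty("acls.enabled", "false"));
        maxIndexesPerTable = Integer.parseInt(conf.getProperty("max.indexes.per.table", "10"));
    }

    // Request-time check: reads a cached field, no config lookup per call.
    public boolean exceededIndexQuota(int existingIndexes) {
        return existingIndexes >= maxIndexesPerTable;
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty("max.indexes.per.table", "1");
        CachedConfigEndpoint endpoint = new CachedConfigEndpoint();
        endpoint.start(conf);
        System.out.println(endpoint.exceededIndexQuota(1)); // true
    }
}
```

The trade-off named in the commit title: per-request config reads are avoided, at the cost that changed settings only take effect when the coprocessor is reinitialized.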

[1/2] phoenix git commit: PHOENIX-4437 Make QueryPlan.getEstimatedBytesToScan() independent of getExplainPlan() and pull optimize() out of getExplainPlan()

2018-01-31 Thread pboado
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 878a264e5 -> afe21dc72


PHOENIX-4437 Make QueryPlan.getEstimatedBytesToScan() independent of 
getExplainPlan() and pull optimize() out of getExplainPlan()


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/eb9de14b
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/eb9de14b
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/eb9de14b

Branch: refs/heads/4.x-HBase-1.2
Commit: eb9de14b6b70a465c162b9928c4ae466deea3ee2
Parents: 878a264
Author: maryannxue 
Authored: Thu Dec 21 18:31:04 2017 +
Committer: Pedro Boado 
Committed: Wed Jan 31 20:55:59 2018 +

--
 .../end2end/ExplainPlanWithStatsEnabledIT.java  |  2 +-
 .../apache/phoenix/execute/BaseQueryPlan.java   | 45 ++
 .../apache/phoenix/execute/HashJoinPlan.java| 59 +-
 .../phoenix/execute/SortMergeJoinPlan.java  | 63 ++--
 .../org/apache/phoenix/execute/UnionPlan.java   | 53 
 .../apache/phoenix/jdbc/PhoenixStatement.java   |  9 ++-
 6 files changed, 119 insertions(+), 112 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/eb9de14b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ExplainPlanWithStatsEnabledIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ExplainPlanWithStatsEnabledIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ExplainPlanWithStatsEnabledIT.java
index 49efa97..f13510b 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ExplainPlanWithStatsEnabledIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ExplainPlanWithStatsEnabledIT.java
@@ -298,7 +298,7 @@ public class ExplainPlanWithStatsEnabledIT extends ParallelStatsEnabledIT {
     try (Connection conn = DriverManager.getConnection(getUrl())) {
         conn.setAutoCommit(false);
         Estimate info = getByteRowEstimates(conn, sql, binds);
-        assertEquals((Long) 200l, info.estimatedBytes);
+        assertEquals((Long) 176l, info.estimatedBytes);
         assertEquals((Long) 2l, info.estimatedRows);
         assertTrue(info.estimateInfoTs > 0);
     }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/eb9de14b/phoenix-core/src/main/java/org/apache/phoenix/execute/BaseQueryPlan.java
--
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/execute/BaseQueryPlan.java 
b/phoenix-core/src/main/java/org/apache/phoenix/execute/BaseQueryPlan.java
index 31f67b7..380037f 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/execute/BaseQueryPlan.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/execute/BaseQueryPlan.java
@@ -117,7 +117,7 @@ public abstract class BaseQueryPlan implements QueryPlan {
     protected Long estimatedRows;
     protected Long estimatedSize;
     protected Long estimateInfoTimestamp;
-    private boolean explainPlanCalled;
+    private boolean getEstimatesCalled;
 
 
     protected BaseQueryPlan(
@@ -498,32 +498,17 @@ public abstract class BaseQueryPlan implements QueryPlan {
 
     @Override
     public ExplainPlan getExplainPlan() throws SQLException {
-        explainPlanCalled = true;
         if (context.getScanRanges() == ScanRanges.NOTHING) {
             return new ExplainPlan(Collections.singletonList("DEGENERATE SCAN OVER " + getTableRef().getTable().getName().getString()));
         }
 
-        // If cost-based optimizer is enabled, we need to initialize a dummy iterator to
-        // get the stats for computing costs.
-        boolean costBased =
-            context.getConnection().getQueryServices().getConfiguration().getBoolean(
-                QueryServices.COST_BASED_OPTIMIZER_ENABLED, QueryServicesOptions.DEFAULT_COST_BASED_OPTIMIZER_ENABLED);
-        if (costBased) {
-            ResultIterator iterator = iterator();
-            iterator.close();
-        }
-        // Optimize here when getting explain plan, as queries don't get optimized until after compilation
-        QueryPlan plan = context.getConnection().getQueryServices().getOptimizer().optimize(context.getStatement(), this);
-        ExplainPlan exp = plan instanceof BaseQueryPlan ? new ExplainPlan(getPlanSteps(plan.iterator())) : plan.getExplainPlan();
-        if (!costBased) { // do not override estimates if they are used for cost calculation.
-            this.estimatedRows = plan.getEstimatedRowsToScan();
-            this.estimatedSize = plan.getEstimatedBytesToScan();
-            this.estimateInfoTimestamp = plan.getEstimateInfoTimestamp();
-        }
-        return exp;
+        ResultIterator iterator = iterator();

[1/2] phoenix git commit: Revert "PHOENIX-4130 Avoid server retries for mutable indexes"

2018-01-31 Thread vincentpoon
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 d1241a09c -> 878a264e5


http://git-wip-us.apache.org/repos/asf/phoenix/blob/878a264e/phoenix-core/src/main/java/org/apache/phoenix/util/ServerUtil.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/util/ServerUtil.java 
b/phoenix-core/src/main/java/org/apache/phoenix/util/ServerUtil.java
index bc2b625..cd23dc5 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/ServerUtil.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/ServerUtil.java
@@ -119,25 +119,6 @@ public class ServerUtil {
         }
         return new PhoenixIOException(t);
     }
-
-    /**
-     * Return the first SQLException in the exception chain, otherwise parse it.
-     * When we're receiving an exception locally, there's no need to string parse,
-     * as the SQLException will already be part of the chain.
-     * @param t
-     * @return the SQLException, or null if none found
-     */
-    public static SQLException parseLocalOrRemoteServerException(Throwable t) {
-        while (t.getCause() != null) {
-            if (t instanceof NotServingRegionException) {
-                return parseRemoteException(new StaleRegionBoundaryCacheException());
-            } else if (t instanceof SQLException) {
-                return (SQLException) t;
-            }
-            t = t.getCause();
-        }
-        return parseRemoteException(t);
-    }
 
     public static SQLException parseServerExceptionOrNull(Throwable t) {
         while (t.getCause() != null) {
@@ -215,7 +196,7 @@ public class ServerUtil {
         return parseTimestampFromRemoteException(t);
     }
 
-    public static long parseTimestampFromRemoteException(Throwable t) {
+    private static long parseTimestampFromRemoteException(Throwable t) {
         String message = t.getLocalizedMessage();
         if (message != null) {
             // If the message matches the standard pattern, recover the SQLException and throw it.
@@ -235,7 +216,7 @@ public class ServerUtil {
             msg = "";
         }
         if (t instanceof SQLException) {
-            msg = t.getMessage() + " " + msg;
+            msg = constructSQLErrorMessage((SQLException) t, msg);
         }
         msg += String.format(FORMAT_FOR_TIMESTAMP, timestamp);
         return new DoNotRetryIOException(msg, t);

http://git-wip-us.apache.org/repos/asf/phoenix/blob/878a264e/phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestIndexWriter.java
--
diff --git 
a/phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestIndexWriter.java
 
b/phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestIndexWriter.java
index 918c411..b0e3780 100644
--- 
a/phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestIndexWriter.java
+++ 
b/phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestIndexWriter.java
@@ -105,10 +105,6 @@ public class TestIndexWriter {
     Configuration conf = new Configuration();
     Mockito.when(e.getConfiguration()).thenReturn(conf);
     Mockito.when(e.getSharedData()).thenReturn(new ConcurrentHashMap());
-    Region mockRegion = Mockito.mock(Region.class);
-    Mockito.when(e.getRegion()).thenReturn(mockRegion);
-    HTableDescriptor mockTableDesc = Mockito.mock(HTableDescriptor.class);
-    Mockito.when(mockRegion.getTableDesc()).thenReturn(mockTableDesc);
     ExecutorService exec = Executors.newFixedThreadPool(1);
     Map tables = new HashMap();
     FakeTableFactory factory = new FakeTableFactory(tables);
@@ -165,10 +161,6 @@ public class TestIndexWriter {
     Configuration conf = new Configuration();
     Mockito.when(e.getConfiguration()).thenReturn(conf);
     Mockito.when(e.getSharedData()).thenReturn(new ConcurrentHashMap());
-    Region mockRegion = Mockito.mock(Region.class);
-    Mockito.when(e.getRegion()).thenReturn(mockRegion);
-    HTableDescriptor mockTableDesc = Mockito.mock(HTableDescriptor.class);
-    Mockito.when(mockRegion.getTableDesc()).thenReturn(mockTableDesc);
     FakeTableFactory factory = new FakeTableFactory(tables);
 
     byte[] tableName = this.testName.getTableName();

http://git-wip-us.apache.org/repos/asf/phoenix/blob/878a264e/phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestParalleIndexWriter.java
--
diff --git 
a/phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestParalleIndexWriter.java
 
b/phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestParalleIndexWriter.java
index bfe1d0d..3e2b47c 100644
--- 
a/phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestParalleIndexWriter.java
+++ 
b/phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestParalleIndexWriter.java
@@ -30,13 +30,11 @@ import org.ap
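The removed parseLocalOrRemoteServerException above walks the exception cause chain looking for a SQLException before falling back to string-parsing the remote message. The chain walk itself only needs the JDK; a self-contained sketch of that pattern, without the Phoenix-specific NotServingRegionException handling or the parseRemoteException fallback:

```java
import java.sql.SQLException;

// Illustrative cause-chain walk: when the exception was raised locally, the
// SQLException is already in the chain, so no message parsing is needed.
public class ExceptionChainSketch {
    public static SQLException firstSqlException(Throwable t) {
        while (t != null) {
            if (t instanceof SQLException) {
                return (SQLException) t;
            }
            t = t.getCause();
        }
        return null; // caller would fall back to parsing the remote message
    }

    public static void main(String[] args) {
        Throwable local = new RuntimeException(new SQLException("quota exceeded"));
        System.out.println(firstSqlException(local).getMessage()); // quota exceeded
        System.out.println(firstSqlException(new RuntimeException("remote"))); // null
    }
}
```

The point of the removed helper was exactly this distinction: a locally raised exception carries its SQLException in the chain, while a remote one arrives flattened into a message string and must be re-parsed.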

[2/2] phoenix git commit: Revert "PHOENIX-4130 Avoid server retries for mutable indexes"

2018-01-31 Thread vincentpoon
Revert "PHOENIX-4130 Avoid server retries for mutable indexes"

This reverts commit d1241a09c24925a46c6a0e64252d0bbbcd991c58.


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/878a264e
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/878a264e
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/878a264e

Branch: refs/heads/4.x-HBase-1.2
Commit: 878a264e5d1f6316d611be1f31a4bcde620c4c8e
Parents: d1241a0
Author: Vincent Poon 
Authored: Wed Jan 31 10:09:54 2018 -0800
Committer: Vincent Poon 
Committed: Wed Jan 31 10:09:54 2018 -0800

--
 .../end2end/index/MutableIndexFailureIT.java|  12 +-
 .../end2end/index/PartialIndexRebuilderIT.java  |  76 ++--
 .../coprocessor/MetaDataEndpointImpl.java   |  53 ++
 .../phoenix/coprocessor/MetaDataProtocol.java   |   6 +-
 .../coprocessor/MetaDataRegionObserver.java |  19 +-
 .../UngroupedAggregateRegionObserver.java   |  82 ++--
 .../phoenix/exception/SQLExceptionCode.java |   1 -
 .../apache/phoenix/execute/MutationState.java   |  39 +---
 .../org/apache/phoenix/hbase/index/Indexer.java |  10 +
 .../index/exception/IndexWriteException.java|  49 +
 .../MultiIndexWriteFailureException.java|  29 +--
 .../SingleIndexWriteFailureException.java   |  23 +--
 .../hbase/index/write/IndexWriterUtils.java |  14 +-
 .../write/ParallelWriterIndexCommitter.java |   5 +-
 .../TrackingParallelWriterIndexCommitter.java   |   5 +-
 .../index/PhoenixIndexFailurePolicy.java| 189 ++-
 .../phoenix/jdbc/PhoenixDatabaseMetaData.java   |   1 -
 .../apache/phoenix/optimize/QueryOptimizer.java |  29 +--
 .../org/apache/phoenix/query/QueryServices.java |   2 -
 .../phoenix/query/QueryServicesOptions.java |   1 -
 .../org/apache/phoenix/schema/PIndexState.java  |   7 +-
 .../org/apache/phoenix/util/KeyValueUtil.java   |  12 --
 .../org/apache/phoenix/util/ServerUtil.java |  23 +--
 .../hbase/index/write/TestIndexWriter.java  |   8 -
 .../index/write/TestParalleIndexWriter.java |   6 -
 .../write/TestParalleWriterIndexCommitter.java  |   6 -
 26 files changed, 116 insertions(+), 591 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/878a264e/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
index c2e0cb6..0318925 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
@@ -29,6 +29,7 @@ import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.Arrays;
+import java.util.Collections;
 import java.util.List;
 import java.util.Map;
 import java.util.Properties;
@@ -129,6 +130,7 @@ public class MutableIndexFailureIT extends BaseTest {
     public static void doSetup() throws Exception {
         Map serverProps = Maps.newHashMapWithExpectedSize(10);
         serverProps.put("hbase.coprocessor.region.classes", FailingRegionObserver.class.getName());
+        serverProps.put(IndexWriterUtils.INDEX_WRITER_RPC_RETRIES_NUMBER, "2");
         serverProps.put(HConstants.HBASE_RPC_TIMEOUT_KEY, "1");
         serverProps.put(IndexWriterUtils.INDEX_WRITER_RPC_PAUSE, "5000");
         serverProps.put("data.tx.snapshot.dir", "/tmp");
@@ -142,8 +144,7 @@ public class MutableIndexFailureIT extends BaseTest {
          * because we want to control its execution ourselves
          */
         serverProps.put(QueryServices.INDEX_REBUILD_TASK_INITIAL_DELAY, Long.toString(Long.MAX_VALUE));
-        Map clientProps = Maps.newHashMapWithExpectedSize(2);
-        clientProps.put(HConstants.HBASE_CLIENT_RETRIES_NUMBER, "2");
+        Map clientProps = Collections.singletonMap(QueryServices.TRANSACTIONS_ENABLED, Boolean.TRUE.toString());
         NUM_SLAVES_BASE = 4;
         setUpTestDriver(new ReadOnlyProps(serverProps.entrySet().iterator()), new ReadOnlyProps(clientProps.entrySet().iterator()));
         indexRebuildTaskRegionEnvironment =
@@ -160,8 +161,7 @@ public class MutableIndexFailureIT extends BaseTest {
     @Parameters(name = "MutableIndexFailureIT_transactional={0},localIndex={1},isNamespaceMapped={2},disableIndexOnWriteFailure={3},failRebuildTask={4},throwIndexWriteFailure={5}")
     // name is used by failsafe as file name in reports
     public static List data() {
         return Arrays.asList(new Object[][] {
-            // note - can't disableIndexOnWriteFailure without throwIndexWrite

Build failed in Jenkins: Phoenix Compile Compatibility with HBase #534

2018-01-31 Thread Apache Jenkins Server
See 


--
[...truncated 39.70 KB...]
[ERROR] 
:[364,5]
 method does not override or implement a method from a supertype
[ERROR] 
:[370,5]
 method does not override or implement a method from a supertype
[ERROR] 
:[376,5]
 method does not override or implement a method from a supertype
[ERROR] 
:[382,5]
 method does not override or implement a method from a supertype
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.0:compile (default-compile) on 
project phoenix-core: Compilation failure: Compilation failure: 
[ERROR] 
:[34,39]
 cannot find symbol
[ERROR]   symbol:   class MetricRegistry
[ERROR]   location: package org.apache.hadoop.hbase.metrics
[ERROR] 
:[144,16]
 cannot find symbol
[ERROR]   symbol:   class MetricRegistry
[ERROR]   location: class 
org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.PhoenixMetaDataControllerEnvironment
[ERROR] 
:[24,35]
 cannot find symbol
[ERROR]   symbol:   class DelegatingHBaseRpcController
[ERROR]   location: package org.apache.hadoop.hbase.ipc
[ERROR] 
:[25,35]
 cannot find symbol
[ERROR]   symbol:   class HBaseRpcController
[ERROR]   location: package org.apache.hadoop.hbase.ipc
[ERROR] 
:[37,37]
 cannot find symbol
[ERROR]   symbol: class DelegatingHBaseRpcController
[ERROR] 
:[56,38]
 cannot find symbol
[ERROR]   symbol:   class HBaseRpcController
[ERROR]   location: class 
org.apache.hadoop.hbase.ipc.controller.MetadataRpcController
[ERROR] 
:[26,35]
 cannot find symbol
[ERROR]   symbol:   class HBaseRpcController
[ERROR]   location: package org.apache.hadoop.hbase.ipc
[ERROR] 
:[40,12]
 cannot find symbol
[ERROR]   symbol:   class HBaseRpcController
[ERROR]   location: class 
org.apache.hadoop.hbase.ipc.controller.InterRegionServerMetadataRpcControllerFactory
[ERROR] 
:[46,12]
 cannot find symbol
[ERROR]   symbol:   class HBaseRpcController
[ERROR]   location: class 
org.apache.hadoop.hbase.ipc.controller.InterRegionServerMetadataRpcControllerFactory
[ERROR] 
:[52,12]
 cannot find symbol
[ERROR]   symbol:   class HBaseRpcController
[ERROR]   location: class 
org.apache.hadoop.hbase.ipc.controller.InterRegionServerMetadataRpcControllerFactory
[ERROR] 
:[57,46]
 cannot find