[phoenix-queryserver] 01/01: Initial Commit

2019-01-02 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix-queryserver.git

commit 87ad7074620b9321ebe36b0b2fd3176bceaa6000
Author: Karan Mehta 
AuthorDate: Wed Jan 2 16:37:11 2019 -0800

Initial Commit
---
 .gitignore | 29 +
 1 file changed, 29 insertions(+)

diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..e3c6527
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,29 @@
+#general java
+*.class
+*.war
+*.jar
+
+# python
+*.pyc
+.checkstyle
+
+# eclipse stuffs
+.settings/*
+*/.settings/
+.classpath
+.project
+*/.externalToolBuilders
+*/maven-eclipse.xml
+
+# intellij stuff
+.idea/
+*.iml
+*.ipr
+*.iws
+
+#maven stuffs
+target/
+release/
+RESULTS/
+CSV_EXPORT/
+.DS_Store
\ No newline at end of file
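
The ignore rules above mix extension patterns (`*.class`) with directory patterns (`target/`, `.idea/`). A scratch-repo sketch (file names are made up for illustration) shows `git check-ignore -v` reporting which line of such a .gitignore claims each path:

```shell
# Scratch repo using a few of the patterns from the .gitignore above;
# the ignored file names are illustrative.
set -e
cd "$(mktemp -d)"
git init -q
printf '%s\n' '*.class' 'target/' '.idea/' '.DS_Store' > .gitignore
mkdir -p target .idea
touch Foo.class target/app.jar .idea/workspace.xml
# -v prints "<source>:<line>:<pattern>  <path>" for each ignored path
git check-ignore -v Foo.class target/app.jar .idea/workspace.xml
```

`check-ignore` exits non-zero when a path is not ignored, which makes it convenient in scripts.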



[phoenix-queryserver] branch 4.x-HBase-1.4 created (now 87ad707)

2019-01-02 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a change to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix-queryserver.git.


  at 87ad707  Initial Commit

This branch includes the following new commits:

 new 87ad707  Initial Commit

The revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.




[phoenix-queryserver] 01/01: PHOENIX-5063 Specify phoenix version via property

2019-01-09 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix-queryserver.git

commit 0543cf143aff79b5353182e4cdc93086d5bc9bba
Author: Karan Mehta 
AuthorDate: Wed Jan 9 11:51:24 2019 -0800

PHOENIX-5063 Specify phoenix version via property
---
 pom.xml | 9 ++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/pom.xml b/pom.xml
index da10477..29f0983 100644
--- a/pom.xml
+++ b/pom.xml
@@ -6,7 +6,7 @@
 
   <groupId>org.apache.phoenix</groupId>
   <artifactId>phoenix-queryserver</artifactId>
-  <version>4.15.0-HBase-1.4-SNAPSHOT</version>
+  <version>1.0.0-SNAPSHOT</version>
   <packaging>pom</packaging>
   <name>Phoenix Query Server</name>
 
@@ -46,6 +46,9 @@
 
     ${project.basedir}
 
+
+    <phoenix.version>4.14.1-HBase-1.4</phoenix.version>
+
 
     1.4.0
     2.7.5
@@ -385,12 +388,12 @@
     <dependency>
       <groupId>org.apache.phoenix</groupId>
       <artifactId>phoenix-core</artifactId>
-      <version>${project.version}</version>
+      <version>${phoenix.version}</version>
     </dependency>
     <dependency>
       <groupId>org.apache.phoenix</groupId>
       <artifactId>phoenix-core</artifactId>
-      <version>${project.version}</version>
+      <version>${phoenix.version}</version>
       <classifier>tests</classifier>
       <scope>test</scope>
     </dependency>



[phoenix-queryserver] branch 4.x-HBase-1.4 updated (87ad707 -> 0543cf1)

2019-01-09 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a change to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix-queryserver.git.


from 87ad707  Initial Commit
 add eeb3849  PHOENIX-5063 Create a new repo for the phoenix query server
 add 2f7bfb6  PHOENIX-5063 Update scm tag values
 add fa8b846  PHOENIX-5063 Removed unused properties from pom.xml
 new 0543cf1  PHOENIX-5063 Specify phoenix version via property

The revision listed above as "new" is entirely new to this
repository and will be described in a separate email.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 pom.xml| 551 +++
 queryserver-client/pom.xml | 203 +++
 .../apache/phoenix/queryserver/client/Driver.java  |  49 ++
 .../phoenix/queryserver/client/SqllineWrapper.java |  97 
 .../phoenix/queryserver/client/ThinClientUtil.java |  42 ++
 .../resources/META-INF/services/java.sql.Driver|   1 +
 .../org-apache-phoenix-remote-jdbc.properties  |  25 +
 queryserver/pom.xml| 188 +++
 queryserver/src/build/query-server-runnable.xml|  52 ++
 queryserver/src/it/bin/test_phoenixdb.py   |  39 ++
 queryserver/src/it/bin/test_phoenixdb.sh   |  79 +++
 .../HttpParamImpersonationQueryServerIT.java   | 438 +++
 .../phoenix/end2end/QueryServerBasicsIT.java   | 346 
 .../phoenix/end2end/QueryServerTestUtil.java   | 187 +++
 .../apache/phoenix/end2end/QueryServerThread.java  |  45 ++
 .../phoenix/end2end/SecureQueryServerIT.java   | 323 +++
 .../end2end/SecureQueryServerPhoenixDBIT.java  | 424 ++
 .../phoenix/end2end/ServerCustomizersIT.java   | 149 +
 queryserver/src/it/resources/log4j.properties  |  68 +++
 .../service/LoadBalanceZookeeperConf.java  |  42 ++
 .../phoenix/queryserver/register/Registry.java |  48 ++
 .../server/AvaticaServerConfigurationFactory.java  |  37 ++
 .../queryserver/server/PhoenixMetaFactory.java |  28 +
 .../queryserver/server/PhoenixMetaFactoryImpl.java |  76 +++
 .../phoenix/queryserver/server/QueryServer.java| 606 +
 .../server/RemoteUserExtractorFactory.java |  36 ++
 .../server/ServerCustomizersFactory.java   |  52 ++
 .../org/apache/phoenix/DriverCohabitationTest.java |  65 +++
 .../CustomAvaticaServerConfigurationTest.java  |  37 ++
 .../server/PhoenixDoAsCallbackTest.java|  89 +++
 .../server/PhoenixRemoteUserExtractorTest.java | 108 
 .../server/QueryServerConfigurationTest.java   |  92 
 .../server/RemoteUserExtractorFactoryTest.java |  35 ++
 .../queryserver/server/ServerCustomizersTest.java  |  92 
 34 files changed, 4749 insertions(+)
 create mode 100644 pom.xml
 create mode 100644 queryserver-client/pom.xml
 create mode 100644 queryserver-client/src/main/java/org/apache/phoenix/queryserver/client/Driver.java
 create mode 100644 queryserver-client/src/main/java/org/apache/phoenix/queryserver/client/SqllineWrapper.java
 create mode 100644 queryserver-client/src/main/java/org/apache/phoenix/queryserver/client/ThinClientUtil.java
 create mode 100644 queryserver-client/src/main/resources/META-INF/services/java.sql.Driver
 create mode 100644 queryserver-client/src/main/resources/version/org-apache-phoenix-remote-jdbc.properties
 create mode 100644 queryserver/pom.xml
 create mode 100644 queryserver/src/build/query-server-runnable.xml
 create mode 100644 queryserver/src/it/bin/test_phoenixdb.py
 create mode 100755 queryserver/src/it/bin/test_phoenixdb.sh
 create mode 100644 queryserver/src/it/java/org/apache/phoenix/end2end/HttpParamImpersonationQueryServerIT.java
 create mode 100644 queryserver/src/it/java/org/apache/phoenix/end2end/QueryServerBasicsIT.java
 create mode 100644 queryserver/src/it/java/org/apache/phoenix/end2end/QueryServerTestUtil.java
 create mode 100644 queryserver/src/it/java/org/apache/phoenix/end2end/QueryServerThread.java
 create mode 100644 queryserver/src/it/java/org/apache/phoenix/end2end/SecureQueryServerIT.java
 create mode 100644 queryserver/src/it/java/org/apache/phoenix/end2end/SecureQueryServerPhoenixDBIT.java
 create mode 100644 queryserver/src/it/java/org/apache/phoenix/end2end/ServerCustomizersIT.java
 create mode 100644 queryserver/src/it/resources/log4j.properties
 create mode 100644 queryserver/src/main/java/org/apache/phoenix/loadbalancer/service/LoadBalanceZookeeperConf.java
 create mode 100644 queryserver/src/main/java/org/apache/phoenix/queryserver/register/Registry.java
 create mode 100644 queryserver/src/main/java/org/apache/phoenix/queryserver/server/AvaticaServerConfigurationFactory.java
 create mode 100644 queryserver/src/main/java/org/apache

[phoenix-queryserver] branch 4.x-HBase-1.4 updated (0543cf1 -> 87ad707)

2019-01-16 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a change to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix-queryserver.git.


 discard 0543cf1  PHOENIX-5063 Specify phoenix version via property
omit fa8b846  PHOENIX-5063 Removed unused properties from pom.xml
omit 2f7bfb6  PHOENIX-5063 Update scm tag values
omit eeb3849  PHOENIX-5063 Create a new repo for the phoenix query server

This update removed existing revisions from the reference, leaving the
reference pointing at a previous point in the repository history.

 * -- * -- N   refs/heads/4.x-HBase-1.4 (87ad707)
            \
             O -- O -- O   (0543cf1)

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.
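
The omit/discard notice above is produced by a non-fast-forward (force) push. A self-contained sketch (scratch repo; commit messages mirror the ones in this thread) of how a rewind like 0543cf1 back to 87ad707 behaves locally:

```shell
set -e
cd "$(mktemp -d)"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "Initial Commit"
old=$(git rev-parse HEAD)
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "PHOENIX-5063 Specify phoenix version via property"
# Rewind the branch to the first commit (pushing this state would be
# rejected without --force and would generate a "discard" notice).
git reset -q --hard "$old"
git log --oneline                 # the PHOENIX-5063 commit is gone from the branch...
git reflog | grep PHOENIX-5063    # ...but is still reachable via the reflog until pruned
```

This matches the notice's wording: discarded commits are gone from the ref, yet they survive as long as some other reference (including the reflog) still points at them.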

Summary of changes:
 pom.xml| 551 ---
 queryserver-client/pom.xml | 203 ---
 .../apache/phoenix/queryserver/client/Driver.java  |  49 --
 .../phoenix/queryserver/client/SqllineWrapper.java |  97 
 .../phoenix/queryserver/client/ThinClientUtil.java |  42 --
 .../resources/META-INF/services/java.sql.Driver|   1 -
 .../org-apache-phoenix-remote-jdbc.properties  |  25 -
 queryserver/pom.xml| 188 ---
 queryserver/src/build/query-server-runnable.xml|  52 --
 queryserver/src/it/bin/test_phoenixdb.py   |  39 --
 queryserver/src/it/bin/test_phoenixdb.sh   |  79 ---
 .../HttpParamImpersonationQueryServerIT.java   | 438 ---
 .../phoenix/end2end/QueryServerBasicsIT.java   | 346 
 .../phoenix/end2end/QueryServerTestUtil.java   | 187 ---
 .../apache/phoenix/end2end/QueryServerThread.java  |  45 --
 .../phoenix/end2end/SecureQueryServerIT.java   | 323 ---
 .../end2end/SecureQueryServerPhoenixDBIT.java  | 424 --
 .../phoenix/end2end/ServerCustomizersIT.java   | 149 -
 queryserver/src/it/resources/log4j.properties  |  68 ---
 .../service/LoadBalanceZookeeperConf.java  |  42 --
 .../phoenix/queryserver/register/Registry.java |  48 --
 .../server/AvaticaServerConfigurationFactory.java  |  37 --
 .../queryserver/server/PhoenixMetaFactory.java |  28 -
 .../queryserver/server/PhoenixMetaFactoryImpl.java |  76 ---
 .../phoenix/queryserver/server/QueryServer.java| 606 -
 .../server/RemoteUserExtractorFactory.java |  36 --
 .../server/ServerCustomizersFactory.java   |  52 --
 .../org/apache/phoenix/DriverCohabitationTest.java |  65 ---
 .../CustomAvaticaServerConfigurationTest.java  |  37 --
 .../server/PhoenixDoAsCallbackTest.java|  89 ---
 .../server/PhoenixRemoteUserExtractorTest.java | 108 
 .../server/QueryServerConfigurationTest.java   |  92 
 .../server/RemoteUserExtractorFactoryTest.java |  35 --
 .../queryserver/server/ServerCustomizersTest.java  |  92 
 34 files changed, 4749 deletions(-)
 delete mode 100644 pom.xml
 delete mode 100644 queryserver-client/pom.xml
 delete mode 100644 queryserver-client/src/main/java/org/apache/phoenix/queryserver/client/Driver.java
 delete mode 100644 queryserver-client/src/main/java/org/apache/phoenix/queryserver/client/SqllineWrapper.java
 delete mode 100644 queryserver-client/src/main/java/org/apache/phoenix/queryserver/client/ThinClientUtil.java
 delete mode 100644 queryserver-client/src/main/resources/META-INF/services/java.sql.Driver
 delete mode 100644 queryserver-client/src/main/resources/version/org-apache-phoenix-remote-jdbc.properties
 delete mode 100644 queryserver/pom.xml
 delete mode 100644 queryserver/src/build/query-server-runnable.xml
 delete mode 100644 queryserver/src/it/bin/test_phoenixdb.py
 delete mode 100755 queryserver/src/it/bin/test_phoenixdb.sh
 delete mode 100644 queryserver/src/it/java/org/apache/phoenix/end2end/HttpParamImpersonationQueryServerIT.java
 delete mode 100644 queryserver/src/it/java/org/apache/phoenix/end2end/QueryServerBasicsIT.java
 delete mode 100644 queryserver/src/it/java/org/apache/phoenix/end2end/QueryServerTestUtil.java
 delete mode 100644 queryserver/src/it/java/org/apache/phoenix/end2end/QueryServerThread.java
 delete mode 100644 queryserver/src/it/java/org/apache/phoenix/end2end/SecureQueryServerIT.java
 delete mode 100644 queryserver/src/it/java/org/apache/phoenix/end2end/SecureQueryServerPhoenixDBIT.java
 delete mode 100644 queryserver/src/it/java/org/apache/phoenix/end2end/ServerCustomizersIT.java
 delete mode 100644 queryserver/src/it/resources/log4j.properties
 delete mode 100644 queryserver/src/main/java/org/apache/phoenix/loadbalancer/service/LoadBalanceZookeeperConf.java
 delete mode 100644 queryserver/src/main/java/org/apache/phoenix/queryserver/register/Registry.java
 delete mode 100644 queryserv

[phoenix-queryserver] branch master created (now 87ad707)

2019-01-16 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix-queryserver.git.


  at 87ad707  Initial Commit

No new revisions were added by this update.
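
"No new revisions were added" here means the master branch was created pointing at a commit (87ad707) the repository already contained. A scratch-repo sketch of the same situation:

```shell
set -e
cd "$(mktemp -d)"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "Initial Commit"
# Creating a second branch at an existing commit adds no objects to the
# repository; both refs resolve to the same revision.
git branch mirror
git rev-parse mirror HEAD
```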



[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-4009 Run UPDATE STATISTICS command by using MR integration on snapshots

2019-01-17 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new e3280f6  PHOENIX-4009 Run UPDATE STATISTICS command by using MR integration on snapshots
e3280f6 is described below

commit e3280f6e8738b78fe865e4a717ef96c66a31b03f
Author: karanmehta93 
AuthorDate: Thu Jan 17 15:03:50 2019 -0800

PHOENIX-4009 Run UPDATE STATISTICS command by using MR integration on snapshots
---
 phoenix-core/pom.xml   |   8 +
 ...olumnEncodedImmutableNonTxStatsCollectorIT.java |  39 ---
 .../ColumnEncodedImmutableTxStatsCollectorIT.java  |  42 ---
 .../ColumnEncodedMutableTxStatsCollectorIT.java|  41 ---
 ...java => NamespaceDisabledStatsCollectorIT.java} |  27 +-
 ...java => NamespaceEnabledStatsCollectorIT.java} |  46 ++-
 ...onColumnEncodedImmutableTxStatsCollectorIT.java |  42 ---
 ...CollectorIT.java => NonTxStatsCollectorIT.java} |  25 +-
 .../apache/phoenix/end2end/TxStatsCollectorIT.java |  52 
 ...sCollectorIT.java => BaseStatsCollectorIT.java} | 228 +++---
 .../UngroupedAggregateRegionObserver.java  |  19 +-
 .../apache/phoenix/iterate/SnapshotScanner.java|  67 +++-
 .../iterate/TableSnapshotResultIterator.java   |  13 +
 .../phoenix/mapreduce/PhoenixInputFormat.java  |  32 +-
 .../phoenix/mapreduce/PhoenixRecordReader.java |   5 +-
 .../mapreduce/util/PhoenixConfigurationUtil.java   |  40 ++-
 .../mapreduce/util/PhoenixMapReduceUtil.java   |  13 +-
 .../org/apache/phoenix/schema/MetaDataClient.java  |  16 +-
 .../schema/stats/DefaultStatisticsCollector.java   | 344 ++---
 .../schema/stats/NoOpStatisticsCollector.java  |  21 +-
 .../phoenix/schema/stats/StatisticsCollector.java  |  21 +-
 .../schema/stats/StatisticsCollectorFactory.java   |  11 +-
 .../phoenix/schema/stats/StatisticsScanner.java|   3 +-
 .../phoenix/schema/stats/StatisticsUtil.java   |  23 +-
 .../phoenix/schema/stats/StatisticsWriter.java |  65 +++-
 .../phoenix/schema/stats/UpdateStatisticsTool.java | 223 +
 .../java/org/apache/phoenix/util/ServerUtil.java   |   6 +-
 .../util/PhoenixConfigurationUtilTest.java |  17 +
 .../phoenix/pig/util/PhoenixPigSchemaUtil.java |   4 +-
 pom.xml|  10 +
 30 files changed, 919 insertions(+), 584 deletions(-)

diff --git a/phoenix-core/pom.xml b/phoenix-core/pom.xml
index 018d054..3007d3b 100644
--- a/phoenix-core/pom.xml
+++ b/phoenix-core/pom.xml
@@ -399,6 +399,14 @@
     </dependency>
     <dependency>
       <groupId>org.apache.hbase</groupId>
+      <artifactId>hbase-metrics-api</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hbase</groupId>
+      <artifactId>hbase-metrics</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hbase</groupId>
       <artifactId>hbase-common</artifactId>
       <scope>test</scope>
       <type>test-jar</type>
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ColumnEncodedImmutableNonTxStatsCollectorIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ColumnEncodedImmutableNonTxStatsCollectorIT.java
deleted file mode 100644
index 0481ba5..0000000
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ColumnEncodedImmutableNonTxStatsCollectorIT.java
+++ /dev/null
@@ -1,39 +0,0 @@
-/*
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.phoenix.end2end;
-
-import java.util.Arrays;
-import java.util.Collection;
-
-import org.apache.phoenix.schema.stats.StatsCollectorIT;
-import org.junit.runners.Parameterized.Parameters;
-
-public class ColumnEncodedImmutableNonTxStatsCollectorIT extends StatsCollectorIT {
-
-    public ColumnEncodedImmutableNonTxStatsCollectorIT(boolean mutable, String transactionProvider,
-            boolean userTableNamespaceMapped, boolean columnEncoded) {
-        super(mutable, transactionProvider, userTableNamespaceMapped, columnEncoded);
-    }
-
-    @Parameters(name = "mutable={0},transactionProvider={1},isUserTableNamespaceMapped={2},columnEncoded={3}")
-    public static Collection<Object[]> data() {
-        return Arrays.asList(new Object[][] {
-            { false, null, false, true }, { false, null

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5091 Add new features to UpdateStatisticsTool (#430)

2019-01-28 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 03dd0d1  PHOENIX-5091 Add new features to UpdateStatisticsTool (#430)
03dd0d1 is described below

commit 03dd0d1434bbc9fff5c3bd77e94a54411fd07528
Author: karanmehta93 
AuthorDate: Mon Jan 28 11:45:04 2019 -0800

PHOENIX-5091 Add new features to UpdateStatisticsTool (#430)
---
 .../phoenix/schema/stats/BaseStatsCollectorIT.java |  65 +++---
 .../org/apache/phoenix/query/GuidePostsCache.java  |   5 +-
 .../phoenix/schema/stats/StatisticsUtil.java   |   3 +-
 .../phoenix/schema/stats/UpdateStatisticsTool.java | 142 -
 .../schema/stats/UpdateStatisticsToolTest.java |  76 +++
 5 files changed, 235 insertions(+), 56 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/schema/stats/BaseStatsCollectorIT.java b/phoenix-core/src/it/java/org/apache/phoenix/schema/stats/BaseStatsCollectorIT.java
index 78c4faf..2344bd0 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/schema/stats/BaseStatsCollectorIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/schema/stats/BaseStatsCollectorIT.java
@@ -27,6 +27,7 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNotEquals;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 import java.io.IOException;
 import java.sql.Array;
@@ -40,20 +41,21 @@ import java.util.Map;
 import java.util.Properties;
 import java.util.Random;
 
+import com.google.common.collect.Lists;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.Path;
 import org.apache.hadoop.hbase.HColumnDescriptor;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HRegionLocation;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
-import org.apache.hadoop.hbase.client.HTableInterface;
 import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.client.Table;
 import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
+import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos;
+import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos.SnapshotDescription;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.hadoop.mapreduce.Job;
 import org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver;
@@ -190,11 +192,8 @@ public abstract class BaseStatsCollectorIT extends BaseUniqueNamesOwnClusterIT {
     private void collectStatistics(Connection conn, String fullTableName,
                                    String guidePostWidth) throws Exception {
 
-        String localPhysicalTableName = SchemaUtil.getPhysicalTableName(fullTableName.getBytes(),
-                userTableNamespaceMapped).getNameAsString();
-
         if (collectStatsOnSnapshot) {
-            collectStatsOnSnapshot(conn, fullTableName, guidePostWidth, localPhysicalTableName);
+            collectStatsOnSnapshot(conn, fullTableName, guidePostWidth);
             invalidateStats(conn, fullTableName);
         } else {
             String updateStatisticsSql = "UPDATE STATISTICS " + fullTableName;
@@ -207,20 +206,44 @@ public abstract class BaseStatsCollectorIT extends BaseUniqueNamesOwnClusterIT {
     }
 
     private void collectStatsOnSnapshot(Connection conn, String fullTableName,
-                                        String guidePostWidth, String localPhysicalTableName) throws Exception {
-        UpdateStatisticsTool tool = new UpdateStatisticsTool();
-        Configuration conf = utility.getConfiguration();
-        HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();
-        String snapshotName = "UpdateStatisticsTool_" + generateUniqueName();
-        admin.snapshot(snapshotName, localPhysicalTableName);
-        LOG.info("Successfully created snapshot " + snapshotName + " for " + localPhysicalTableName);
-        Path randomDir = getUtility().getRandomDir();
+                                        String guidePostWidth) throws Exception {
         if (guidePostWidth != null) {
             conn.createStatement().execute("ALTER TABLE " + fullTableName + " SET GUIDE_POSTS_WIDTH = " + guidePostWidth);
         }
-        Job job = tool.configureJob(conf, fullTableName, snapshotName, randomDir);
-        assertEquals(job.getConfiguration().get(MAPREDUCE_JOB_TYPE), UPDATE_STATS.name());
-        tool.runJob(job, true);
+        runUpdateStatisticsTool(fullTableNa

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-4009 Run UPDATE STATISTICS command by using MR integration on snapshots (Addendum) (#433)

2019-01-30 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 522a363  PHOENIX-4009 Run UPDATE STATISTICS command by using MR integration on snapshots (Addendum) (#433)
522a363 is described below

commit 522a3630a4cb8836cb7afc84b96a74e5d8fe06cb
Author: karanmehta93 
AuthorDate: Wed Jan 30 12:37:48 2019 -0800

PHOENIX-4009 Run UPDATE STATISTICS command by using MR integration on snapshots (Addendum) (#433)
---
 .../phoenix/end2end/ParallelStatsDisabledIT.java   |  16 ++-
 .../apache/phoenix/end2end/SpillableGroupByIT.java |   2 +-
 .../phoenix/schema/stats/BaseStatsCollectorIT.java |  14 +-
 .../stats}/NamespaceDisabledStatsCollectorIT.java  |   3 +-
 .../stats}/NamespaceEnabledStatsCollectorIT.java   |   3 +-
 .../phoenix/schema/stats/NoOpStatsCollectorIT.java | 143 +
 .../stats}/NonTxStatsCollectorIT.java  |   3 +-
 .../stats}/TxStatsCollectorIT.java |   3 +-
 .../UngroupedAggregateRegionObserver.java  |  19 ++-
 .../apache/phoenix/exception/SQLExceptionCode.java |   2 +
 .../org/apache/phoenix/query/GuidePostsCache.java  |   4 +-
 .../org/apache/phoenix/query/QueryServices.java|   3 -
 .../schema/stats/DefaultStatisticsCollector.java   |  14 ++
 .../schema/stats/NoOpStatisticsCollector.java  |   9 ++
 .../phoenix/schema/stats/StatisticsCollector.java  |  10 ++
 .../schema/stats/StatisticsCollectorFactory.java   |   8 +-
 .../StatsCollectionDisabledOnServerException.java  |  33 +
 17 files changed, 244 insertions(+), 45 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsDisabledIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsDisabledIT.java
index 561aee5..8ea8dc8 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsDisabledIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsDisabledIT.java
@@ -18,8 +18,10 @@
 
 package org.apache.phoenix.end2end;
 
+import com.google.common.collect.Maps;
 import org.apache.commons.lang.StringUtils;
 import org.apache.phoenix.query.BaseTest;
+import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.util.QueryBuilder;
 import org.apache.phoenix.util.QueryUtil;
 import org.apache.phoenix.util.ReadOnlyProps;
@@ -31,6 +33,7 @@ import java.sql.Connection;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
 import java.sql.SQLException;
+import java.util.Map;
 
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
@@ -39,7 +42,13 @@ import static org.junit.Assert.fail;
 
 
 /**
- * Base class for tests whose methods run in parallel with statistics disabled.
+ * Base class for tests whose methods run in parallel with
+ * 1. Statistics enabled on server side (QueryServices#STATS_COLLECTION_ENABLED is true)
+ * 2. Guide Post Width for all relevant tables is 0. Stats are disabled at table level.
+ *
+ * See {@link org.apache.phoenix.schema.stats.NoOpStatsCollectorIT} for tests that disable
+ * stats collection from server side.
+ *
  * You must create unique names using {@link #generateUniqueName()} for each
  * table and sequence used to prevent collisions.
  */
@@ -47,8 +56,9 @@ import static org.junit.Assert.fail;
 public abstract class ParallelStatsDisabledIT extends BaseTest {
 
 @BeforeClass
-    public static final void doSetup() throws Exception {
-        setUpTestDriver(ReadOnlyProps.EMPTY_PROPS);
+    public static void doSetup() throws Exception {
+        Map<String, String> props = Maps.newHashMapWithExpectedSize(1);
+        setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
 }
 
 @AfterClass
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
index 21b2ac9..3ed09c6 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
@@ -76,7 +76,7 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 
 // Set guidepost width, but disable stats
     props.put(QueryServices.STATS_GUIDEPOST_WIDTH_BYTES_ATTRIB, Long.toString(20));
-    props.put(QueryServices.STATS_ENABLED_ATTRIB, Boolean.toString(false));
+    props.put(QueryServices.STATS_COLLECTION_ENABLED, Boolean.toString(false));
     props.put(QueryServices.EXPLAIN_CHUNK_COUNT_ATTRIB, Boolean.TRUE.toString());
     props.put(QueryServices.EXPLAIN_ROW_COUNT_ATTRIB, Boolean.TRUE.toString());
 // Must update config before starting server
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/schema/stats/BaseStatsCollectorIT.java b/phoenix-core/src/it/java/org

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5091 Add new features to UpdateStatisticsTool (Addendum - Add license)

2019-01-30 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 45ea696  PHOENIX-5091 Add new features to UpdateStatisticsTool (Addendum - Add license)
45ea696 is described below

commit 45ea69616950f5aafe048ed07508c0ff17263c78
Author: Karan Mehta 
AuthorDate: Wed Jan 30 14:24:07 2019 -0800

PHOENIX-5091 Add new features to UpdateStatisticsTool (Addendum - Add license)
---
 .../phoenix/schema/stats/UpdateStatisticsToolTest.java  | 17 +
 1 file changed, 17 insertions(+)

diff --git a/phoenix-core/src/test/java/org/apache/phoenix/schema/stats/UpdateStatisticsToolTest.java b/phoenix-core/src/test/java/org/apache/phoenix/schema/stats/UpdateStatisticsToolTest.java
index 86f97ff..2262b0e 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/schema/stats/UpdateStatisticsToolTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/schema/stats/UpdateStatisticsToolTest.java
@@ -1,3 +1,20 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
 package org.apache.phoenix.schema.stats;
 
 import org.apache.hadoop.conf.Configuration;



[phoenix] branch master updated: PHOENIX-5091 Add new features to UpdateStatisticsTool (#430)

2019-01-31 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 03eabeb  PHOENIX-5091 Add new features to UpdateStatisticsTool (#430)
03eabeb is described below

commit 03eabebb12a317be201cdd8a2e5984420d755b15
Author: karanmehta93 
AuthorDate: Mon Jan 28 11:45:04 2019 -0800

PHOENIX-5091 Add new features to UpdateStatisticsTool (#430)
---
 .../phoenix/schema/stats/BaseStatsCollectorIT.java |  55 +---
 .../org/apache/phoenix/query/GuidePostsCache.java  |   1 -
 .../phoenix/schema/stats/UpdateStatisticsTool.java | 140 -
 .../schema/stats/UpdateStatisticsToolTest.java |  93 ++
 4 files changed, 239 insertions(+), 50 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/schema/stats/BaseStatsCollectorIT.java b/phoenix-core/src/it/java/org/apache/phoenix/schema/stats/BaseStatsCollectorIT.java
index 40e5c9b..26cd581 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/schema/stats/BaseStatsCollectorIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/schema/stats/BaseStatsCollectorIT.java
@@ -27,6 +27,7 @@ import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNotEquals;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 import java.io.IOException;
 import java.sql.Array;
@@ -43,12 +44,10 @@ import java.util.Random;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HRegionLocation;
 import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.Admin;
 import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.conf.Configuration;
-import org.apache.hadoop.fs.Path;
+import com.google.common.collect.Lists;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.hadoop.hbase.client.Result;
 import org.apache.hadoop.hbase.client.ResultScanner;
@@ -191,11 +190,8 @@ public abstract class BaseStatsCollectorIT extends 
BaseUniqueNamesOwnClusterIT {
 private void collectStatistics(Connection conn, String fullTableName,
String guidePostWidth) throws Exception {
 
-String localPhysicalTableName = 
SchemaUtil.getPhysicalTableName(fullTableName.getBytes(),
-userTableNamespaceMapped).getNameAsString();
-
 if (collectStatsOnSnapshot) {
-collectStatsOnSnapshot(conn, fullTableName, guidePostWidth, 
localPhysicalTableName);
+collectStatsOnSnapshot(conn, fullTableName, guidePostWidth);
 invalidateStats(conn, fullTableName);
 } else {
 String updateStatisticsSql = "UPDATE STATISTICS " + fullTableName;
@@ -208,20 +204,43 @@ public abstract class BaseStatsCollectorIT extends BaseUniqueNamesOwnClusterIT {
 }
 
 private void collectStatsOnSnapshot(Connection conn, String fullTableName,
-String guidePostWidth, String localPhysicalTableName) throws Exception {
-UpdateStatisticsTool tool = new UpdateStatisticsTool();
-Configuration conf = utility.getConfiguration();
-Admin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();
-String snapshotName = "UpdateStatisticsTool_" + generateUniqueName();
-admin.snapshot(snapshotName, TableName.valueOf(localPhysicalTableName));
-LOG.info("Successfully created snapshot " + snapshotName + " for " + localPhysicalTableName);
-Path randomDir = getUtility().getRandomDir();
+String guidePostWidth) throws Exception {
 if (guidePostWidth != null) {
 conn.createStatement().execute("ALTER TABLE " + fullTableName + " SET GUIDE_POSTS_WIDTH = " + guidePostWidth);
 }
-Job job = tool.configureJob(conf, fullTableName, snapshotName, randomDir);
-assertEquals(job.getConfiguration().get(MAPREDUCE_JOB_TYPE), UPDATE_STATS.name());
-tool.runJob(job, true);
+runUpdateStatisticsTool(fullTableName);
+}
+
+// Run UpdateStatisticsTool in foreground with manage snapshot option
+private void runUpdateStatisticsTool(String fullTableName) {
+UpdateStatisticsTool tool = new UpdateStatisticsTool();
+tool.setConf(utility.getConfiguration());
+String randomDir = getUtility().getRandomDir().toString();
+final String[] cmdArgs = getArgValues(fullTableName, randomDir);
+try {
+int status = tool.run(cmdArgs);
+assertEquals("MR Job should complete successfully", 0, status);
+HBas
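The refactored test above drives the stats tool through its command-line interface (`-d`, `-runfg`, and later `-ms`) and asserts on the integer exit status of `run()`. That pattern can be sketched in isolation; `FakeStatsTool` below is a hypothetical, stdlib-only stand-in for illustration, not Phoenix's `UpdateStatisticsTool` or Hadoop's `Tool`:

```java
import java.util.Arrays;
import java.util.List;

// Minimal stand-in for the Hadoop Tool pattern used by the test: the caller
// builds a String[] of CLI flags, invokes run(), and treats exit status 0 as
// success. Flag names mirror the ones visible in the diff; the class and its
// behavior are illustrative assumptions.
public class FakeStatsTool {
    public int run(String[] args) {
        List<String> argList = Arrays.asList(args);
        boolean hasDir = argList.contains("-d");          // output directory
        boolean foreground = argList.contains("-runfg");  // run in foreground
        boolean manageSnapshot = argList.contains("-ms"); // manage snapshot
        // A real tool would launch an MR job here; we only validate flags.
        return (hasDir && foreground && manageSnapshot) ? 0 : 1;
    }

    public static void main(String[] unused) {
        FakeStatsTool tool = new FakeStatsTool();
        String[] cmdArgs = {"-d", "/tmp/stats", "-runfg", "-ms"};
        int status = tool.run(cmdArgs);
        if (status != 0) throw new AssertionError("tool should exit 0");
    }
}
```

The test in the diff follows the same shape: build the argument array, call `tool.run(cmdArgs)`, and assert the status is zero before verifying side effects.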

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5091 Add new features to UpdateStatisticsTool (Fix failing tests)

2019-01-31 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 200d398  PHOENIX-5091 Add new features to UpdateStatisticsTool (Fix 
failing tests)
200d398 is described below

commit 200d39807ebc650385b7b20684d0169c07e92c77
Author: Karan Mehta 
AuthorDate: Wed Jan 30 16:05:14 2019 -0800

PHOENIX-5091 Add new features to UpdateStatisticsTool (Fix failing tests)
---
 .../it/java/org/apache/phoenix/schema/stats/BaseStatsCollectorIT.java  | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/schema/stats/BaseStatsCollectorIT.java b/phoenix-core/src/it/java/org/apache/phoenix/schema/stats/BaseStatsCollectorIT.java
index f4540ee..fbd264b 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/schema/stats/BaseStatsCollectorIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/schema/stats/BaseStatsCollectorIT.java
@@ -241,8 +241,7 @@ public abstract class BaseStatsCollectorIT extends BaseUniqueNamesOwnClusterIT {
 args.add("-d");
 args.add(randomDir);
 args.add("-runfg");
-args.add("-cs");
-args.add("-ds");
+args.add("-ms");
 return args.toArray(new String[0]);
 }
 



[phoenix] branch master updated: PHOENIX-5121 Move unnecessary sorting and fetching out of loop

2019-02-06 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new db1a076  PHOENIX-5121 Move unnecessary sorting and fetching out of loop
db1a076 is described below

commit db1a07602732e0273d836ef940b03fa196ee7543
Author: Aman Poonia 
AuthorDate: Tue Feb 5 05:53:26 2019 +0530

PHOENIX-5121 Move unnecessary sorting and fetching out of loop
---
 .../phoenix/jdbc/PhoenixDatabaseMetaData.java  | 56 +++---
 1 file changed, 28 insertions(+), 28 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
index 805acc4..5427b5f 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
@@ -739,23 +739,23 @@ public class PhoenixDatabaseMetaData implements DatabaseMetaData {
 try {
 boolean isTenantSpecificConnection = connection.getTenantId() != null;
 List<Tuple> tuples = Lists.newArrayListWithExpectedSize(10);
+// Allow a "." in columnNamePattern for column family match
+String colPattern = null;
+String cfPattern = null;
+if (columnNamePattern != null && columnNamePattern.length() > 0) {
+int index = columnNamePattern.indexOf('.');
+if (index <= 0) {
+colPattern = columnNamePattern;
+} else {
+cfPattern = columnNamePattern.substring(0, index);
+if (columnNamePattern.length() > index+1) {
+colPattern = columnNamePattern.substring(index+1);
+}
+}
+}
 ResultSet rs = getTables(catalog, schemaPattern, tableNamePattern, null);
 while (rs.next()) {
 String schemaName = rs.getString(TABLE_SCHEM);
-// Allow a "." in columnNamePattern for column family match
-String colPattern = null;
-String cfPattern = null;
-if (columnNamePattern != null && columnNamePattern.length() > 0) {
-int index = columnNamePattern.indexOf('.');
-if (index <= 0) {
-colPattern = columnNamePattern;
-} else {
-cfPattern = columnNamePattern.substring(0, index);
-if (columnNamePattern.length() > index+1) {
-colPattern = columnNamePattern.substring(index+1);
-}
-}
-}
 String tableName = rs.getString(TABLE_NAME);
 String tenantId = rs.getString(TABLE_CAT);
 String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
@@ -1167,25 +1167,25 @@ public class PhoenixDatabaseMetaData implements DatabaseMetaData {
 if (tableName == null || tableName.length() == 0) {
 return emptyResultSet;
 }
+String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+PTable table = PhoenixRuntime.getTableNoCache(connection, fullTableName);
+boolean isSalted = table.getBucketNum() != null;
+boolean tenantColSkipped = false;
+List<PColumn> pkColumns = table.getPKColumns();
+List<PColumn> sorderPkColumns =
+Lists.newArrayList(pkColumns.subList(isSalted ? 1 : 0, pkColumns.size()));
+// sort the columns by name
+Collections.sort(sorderPkColumns, new Comparator<PColumn>(){
+@Override public int compare(PColumn c1, PColumn c2) {
+return c1.getName().getString().compareTo(c2.getName().getString());
+}
+});
+
 try {
 List<Tuple> tuples = Lists.newArrayListWithExpectedSize(10);
 ResultSet rs = getTables(catalog, schemaName, tableName, null);
 while (rs.next()) {
 String tenantId = rs.getString(TABLE_CAT);
-String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
-PTable table = PhoenixRuntime.getTableNoCache(connection, fullTableName);
-boolean isSalted = table.getBucketNum() != null;
-boolean tenantColSkipped = false;
-List<PColumn> pkColumns = table.getPKColumns();
-List<PColumn> sorderPkColumns =
-Lists.newArrayList(pkColumns.subList(isSalted ? 1 : 0, pkColumns.size()));
-// sort the columns by name
-Collections.sort(sorderPkColumns, new Comparator<PColumn>(){
-@Override public int compare(PColumn c1, PColumn c2) {
-return c1.getName().getString().compareTo(c2.getName().getString());
-}
-});
-
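The change above hoists work whose inputs never vary across iterations — parsing `columnNamePattern` and sorting the PK columns by name — out of the `ResultSet` loop, so it runs once instead of once per row. A minimal sketch of the same refactor, using illustrative names (`sortOnce`, plain `String` columns) rather than Phoenix's `PColumn` types:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

// Loop-invariant hoisting: the sort depends only on pkColumns, not on the
// current row, so it is computed once before the loop. Names are illustrative.
public class HoistInvariant {
    static List<String> sortOnce(List<String> pkColumns, List<String> rows) {
        List<String> sorted = new ArrayList<>(pkColumns);
        // Hoisted: sort once, not once per row as the old code did.
        Collections.sort(sorted, new Comparator<String>() {
            @Override public int compare(String c1, String c2) {
                return c1.compareTo(c2);
            }
        });
        List<String> out = new ArrayList<>();
        for (String row : rows) {
            out.add(row + ":" + sorted.get(0)); // reuse precomputed result
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> res = sortOnce(List.of("B", "A"), List.of("r1", "r2"));
        if (!res.equals(List.of("r1:A", "r2:A"))) throw new AssertionError();
    }
}
```

The behavior is unchanged; only the cost drops from O(rows × k log k) for the sort to O(k log k), which is the entire point of the commit.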
   

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5121 Move unnecessary sorting and fetching out of loop

2019-02-06 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 59a0ba6  PHOENIX-5121 Move unnecessary sorting and fetching out of loop
59a0ba6 is described below

commit 59a0ba622d3279459cfa718cef01a5294f35c5cb
Author: Aman Poonia 
AuthorDate: Tue Feb 5 05:53:26 2019 +0530

PHOENIX-5121 Move unnecessary sorting and fetching out of loop
---
 .../phoenix/jdbc/PhoenixDatabaseMetaData.java  | 56 +++---
 1 file changed, 28 insertions(+), 28 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
index 61ba0fc..f747b90 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
@@ -721,23 +721,23 @@ public class PhoenixDatabaseMetaData implements DatabaseMetaData {
 try {
 boolean isTenantSpecificConnection = connection.getTenantId() != null;
 List<Tuple> tuples = Lists.newArrayListWithExpectedSize(10);
+// Allow a "." in columnNamePattern for column family match
+String colPattern = null;
+String cfPattern = null;
+if (columnNamePattern != null && columnNamePattern.length() > 0) {
+int index = columnNamePattern.indexOf('.');
+if (index <= 0) {
+colPattern = columnNamePattern;
+} else {
+cfPattern = columnNamePattern.substring(0, index);
+if (columnNamePattern.length() > index+1) {
+colPattern = columnNamePattern.substring(index+1);
+}
+}
+}
 ResultSet rs = getTables(catalog, schemaPattern, tableNamePattern, null);
 while (rs.next()) {
 String schemaName = rs.getString(TABLE_SCHEM);
-// Allow a "." in columnNamePattern for column family match
-String colPattern = null;
-String cfPattern = null;
-if (columnNamePattern != null && columnNamePattern.length() > 0) {
-int index = columnNamePattern.indexOf('.');
-if (index <= 0) {
-colPattern = columnNamePattern;
-} else {
-cfPattern = columnNamePattern.substring(0, index);
-if (columnNamePattern.length() > index+1) {
-colPattern = columnNamePattern.substring(index+1);
-}
-}
-}
 String tableName = rs.getString(TABLE_NAME);
 String tenantId = rs.getString(TABLE_CAT);
 String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
@@ -1148,25 +1148,25 @@ public class PhoenixDatabaseMetaData implements DatabaseMetaData {
 if (tableName == null || tableName.length() == 0) {
 return emptyResultSet;
 }
+String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+PTable table = PhoenixRuntime.getTableNoCache(connection, fullTableName);
+boolean isSalted = table.getBucketNum() != null;
+boolean tenantColSkipped = false;
+List<PColumn> pkColumns = table.getPKColumns();
+List<PColumn> sorderPkColumns =
+Lists.newArrayList(pkColumns.subList(isSalted ? 1 : 0, pkColumns.size()));
+// sort the columns by name
+Collections.sort(sorderPkColumns, new Comparator<PColumn>(){
+@Override public int compare(PColumn c1, PColumn c2) {
+return c1.getName().getString().compareTo(c2.getName().getString());
+}
+});
+
 try {
 List<Tuple> tuples = Lists.newArrayListWithExpectedSize(10);
 ResultSet rs = getTables(catalog, schemaName, tableName, null);
 while (rs.next()) {
 String tenantId = rs.getString(TABLE_CAT);
-String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
-PTable table = PhoenixRuntime.getTableNoCache(connection, fullTableName);
-boolean isSalted = table.getBucketNum() != null;
-boolean tenantColSkipped = false;
-List<PColumn> pkColumns = table.getPKColumns();
-List<PColumn> sorderPkColumns =
-Lists.newArrayList(pkColumns.subList(isSalted ? 1 : 0, pkColumns.size()));
-// sort the columns by name
-Collections.sort(sorderPkColumns, new Comparator<PColumn>(){
-@Override public int compare(PColumn c1, PColumn c2) {
-return c1.getName().getString().compareTo(c2.getName().getString());
-}
-  

[phoenix] branch 4.x-HBase-1.2 updated: PHOENIX-5121 Move unnecessary sorting and fetching out of loop

2019-02-06 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.2 by this push:
 new ca9140c  PHOENIX-5121 Move unnecessary sorting and fetching out of loop
ca9140c is described below

commit ca9140c1b4f10f5162deada17ca654cc8a25c9fc
Author: Aman Poonia 
AuthorDate: Tue Feb 5 05:53:26 2019 +0530

PHOENIX-5121 Move unnecessary sorting and fetching out of loop
---
 .../phoenix/jdbc/PhoenixDatabaseMetaData.java  | 56 +++---
 1 file changed, 28 insertions(+), 28 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
index 61ba0fc..f747b90 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
@@ -721,23 +721,23 @@ public class PhoenixDatabaseMetaData implements DatabaseMetaData {
 try {
 boolean isTenantSpecificConnection = connection.getTenantId() != null;
 List<Tuple> tuples = Lists.newArrayListWithExpectedSize(10);
+// Allow a "." in columnNamePattern for column family match
+String colPattern = null;
+String cfPattern = null;
+if (columnNamePattern != null && columnNamePattern.length() > 0) {
+int index = columnNamePattern.indexOf('.');
+if (index <= 0) {
+colPattern = columnNamePattern;
+} else {
+cfPattern = columnNamePattern.substring(0, index);
+if (columnNamePattern.length() > index+1) {
+colPattern = columnNamePattern.substring(index+1);
+}
+}
+}
 ResultSet rs = getTables(catalog, schemaPattern, tableNamePattern, null);
 while (rs.next()) {
 String schemaName = rs.getString(TABLE_SCHEM);
-// Allow a "." in columnNamePattern for column family match
-String colPattern = null;
-String cfPattern = null;
-if (columnNamePattern != null && columnNamePattern.length() > 0) {
-int index = columnNamePattern.indexOf('.');
-if (index <= 0) {
-colPattern = columnNamePattern;
-} else {
-cfPattern = columnNamePattern.substring(0, index);
-if (columnNamePattern.length() > index+1) {
-colPattern = columnNamePattern.substring(index+1);
-}
-}
-}
 String tableName = rs.getString(TABLE_NAME);
 String tenantId = rs.getString(TABLE_CAT);
 String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
@@ -1148,25 +1148,25 @@ public class PhoenixDatabaseMetaData implements DatabaseMetaData {
 if (tableName == null || tableName.length() == 0) {
 return emptyResultSet;
 }
+String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+PTable table = PhoenixRuntime.getTableNoCache(connection, fullTableName);
+boolean isSalted = table.getBucketNum() != null;
+boolean tenantColSkipped = false;
+List<PColumn> pkColumns = table.getPKColumns();
+List<PColumn> sorderPkColumns =
+Lists.newArrayList(pkColumns.subList(isSalted ? 1 : 0, pkColumns.size()));
+// sort the columns by name
+Collections.sort(sorderPkColumns, new Comparator<PColumn>(){
+@Override public int compare(PColumn c1, PColumn c2) {
+return c1.getName().getString().compareTo(c2.getName().getString());
+}
+});
+
 try {
 List<Tuple> tuples = Lists.newArrayListWithExpectedSize(10);
 ResultSet rs = getTables(catalog, schemaName, tableName, null);
 while (rs.next()) {
 String tenantId = rs.getString(TABLE_CAT);
-String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
-PTable table = PhoenixRuntime.getTableNoCache(connection, fullTableName);
-boolean isSalted = table.getBucketNum() != null;
-boolean tenantColSkipped = false;
-List<PColumn> pkColumns = table.getPKColumns();
-List<PColumn> sorderPkColumns =
-Lists.newArrayList(pkColumns.subList(isSalted ? 1 : 0, pkColumns.size()));
-// sort the columns by name
-Collections.sort(sorderPkColumns, new Comparator<PColumn>(){
-@Override public int compare(PColumn c1, PColumn c2) {
-return c1.getName().getString().compareTo(c2.getName().getString());
-}
-  

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5121 Move unnecessary sorting and fetching out of loop

2019-02-06 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 6c4edee  PHOENIX-5121 Move unnecessary sorting and fetching out of loop
6c4edee is described below

commit 6c4edeec27365f7ff99af41cd809b0651ec54864
Author: Aman Poonia 
AuthorDate: Tue Feb 5 05:53:26 2019 +0530

PHOENIX-5121 Move unnecessary sorting and fetching out of loop
---
 .../phoenix/jdbc/PhoenixDatabaseMetaData.java  | 56 +++---
 1 file changed, 28 insertions(+), 28 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
index 61ba0fc..f747b90 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixDatabaseMetaData.java
@@ -721,23 +721,23 @@ public class PhoenixDatabaseMetaData implements DatabaseMetaData {
 try {
 boolean isTenantSpecificConnection = connection.getTenantId() != null;
 List<Tuple> tuples = Lists.newArrayListWithExpectedSize(10);
+// Allow a "." in columnNamePattern for column family match
+String colPattern = null;
+String cfPattern = null;
+if (columnNamePattern != null && columnNamePattern.length() > 0) {
+int index = columnNamePattern.indexOf('.');
+if (index <= 0) {
+colPattern = columnNamePattern;
+} else {
+cfPattern = columnNamePattern.substring(0, index);
+if (columnNamePattern.length() > index+1) {
+colPattern = columnNamePattern.substring(index+1);
+}
+}
+}
 ResultSet rs = getTables(catalog, schemaPattern, tableNamePattern, null);
 while (rs.next()) {
 String schemaName = rs.getString(TABLE_SCHEM);
-// Allow a "." in columnNamePattern for column family match
-String colPattern = null;
-String cfPattern = null;
-if (columnNamePattern != null && columnNamePattern.length() > 0) {
-int index = columnNamePattern.indexOf('.');
-if (index <= 0) {
-colPattern = columnNamePattern;
-} else {
-cfPattern = columnNamePattern.substring(0, index);
-if (columnNamePattern.length() > index+1) {
-colPattern = columnNamePattern.substring(index+1);
-}
-}
-}
 String tableName = rs.getString(TABLE_NAME);
 String tenantId = rs.getString(TABLE_CAT);
 String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
@@ -1148,25 +1148,25 @@ public class PhoenixDatabaseMetaData implements DatabaseMetaData {
 if (tableName == null || tableName.length() == 0) {
 return emptyResultSet;
 }
+String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
+PTable table = PhoenixRuntime.getTableNoCache(connection, fullTableName);
+boolean isSalted = table.getBucketNum() != null;
+boolean tenantColSkipped = false;
+List<PColumn> pkColumns = table.getPKColumns();
+List<PColumn> sorderPkColumns =
+Lists.newArrayList(pkColumns.subList(isSalted ? 1 : 0, pkColumns.size()));
+// sort the columns by name
+Collections.sort(sorderPkColumns, new Comparator<PColumn>(){
+@Override public int compare(PColumn c1, PColumn c2) {
+return c1.getName().getString().compareTo(c2.getName().getString());
+}
+});
+
 try {
 List<Tuple> tuples = Lists.newArrayListWithExpectedSize(10);
 ResultSet rs = getTables(catalog, schemaName, tableName, null);
 while (rs.next()) {
 String tenantId = rs.getString(TABLE_CAT);
-String fullTableName = SchemaUtil.getTableName(schemaName, tableName);
-PTable table = PhoenixRuntime.getTableNoCache(connection, fullTableName);
-boolean isSalted = table.getBucketNum() != null;
-boolean tenantColSkipped = false;
-List<PColumn> pkColumns = table.getPKColumns();
-List<PColumn> sorderPkColumns =
-Lists.newArrayList(pkColumns.subList(isSalted ? 1 : 0, pkColumns.size()));
-// sort the columns by name
-Collections.sort(sorderPkColumns, new Comparator<PColumn>(){
-@Override public int compare(PColumn c1, PColumn c2) {
-return c1.getName().getString().compareTo(c2.getName().getString());
-}
-  

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5125 Some tests fail after PHOENIX-4009

2019-02-07 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 7def85e  PHOENIX-5125 Some tests fail after PHOENIX-4009
7def85e is described below

commit 7def85e469d514e0d2d80d168e3ced486fbede69
Author: Karan Mehta 
AuthorDate: Wed Feb 6 05:28:53 2019 -0800

PHOENIX-5125 Some tests fail after PHOENIX-4009
---
 .../java/org/apache/phoenix/end2end/SpillableGroupByIT.java   | 11 ++-
 .../org/apache/phoenix/schema/stats/NoOpStatsCollectorIT.java |  4 
 2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
index 3ed09c6..340760b 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
@@ -38,6 +38,7 @@ import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.QueryUtil;
 import org.apache.phoenix.util.ReadOnlyProps;
+import org.junit.Assert;
 import org.junit.BeforeClass;
 import org.junit.Test;
 
@@ -174,7 +175,15 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 stmt.execute("UPSERT INTO " + tableName + " VALUES (2, 'NAME2')");
 stmt.execute("UPSERT INTO " + tableName + " VALUES (3, 'NAME3')");
 conn.commit();
-stmt.execute("UPDATE STATISTICS " + tableName);
+try {
+stmt.execute("UPDATE STATISTICS " + tableName);
+Assert.fail("Update Statistics SQL should have failed");
+} catch (SQLException e) {
+Assert.assertEquals("StatsCollectionDisabledOnServerException expected",
+1401, e.getErrorCode());
+Assert.assertEquals("StatsCollectionDisabledOnServerException expected",
+"STS01", e.getSQLState());
+}
 ResultSet rs = stmt.executeQuery("SELECT * FROM \"SYSTEM\".STATS");
 assertFalse(rs.next());
 rs.close();
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/schema/stats/NoOpStatsCollectorIT.java b/phoenix-core/src/it/java/org/apache/phoenix/schema/stats/NoOpStatsCollectorIT.java
index 04f4143..87f58d7 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/schema/stats/NoOpStatsCollectorIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/schema/stats/NoOpStatsCollectorIT.java
b/phoenix-core/src/it/java/org/apache/phoenix/schema/stats/NoOpStatsCollectorIT.java
@@ -20,6 +20,7 @@ package org.apache.phoenix.schema.stats;
 import com.google.common.collect.Maps;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
+import org.apache.phoenix.end2end.NeedsOwnMiniClusterTest;
 import org.apache.phoenix.end2end.ParallelStatsDisabledIT;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.util.PropertiesUtil;
@@ -30,6 +31,7 @@ import org.junit.Assert;
 import org.junit.Before;
 import org.junit.BeforeClass;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 
 import java.sql.Array;
 import java.sql.Connection;
@@ -47,6 +49,7 @@ import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
  * Tests the behavior of stats collection code when stats are disabled on server side
  * explicitly using QueryServices#STATS_COLLECTION_ENABLED property
  */
+@Category(NeedsOwnMiniClusterTest.class)
 public class NoOpStatsCollectorIT extends ParallelStatsDisabledIT {
 
 private static final Log LOG = LogFactory.getLog(NoOpStatsCollectorIT.class);
@@ -90,6 +93,7 @@ public class NoOpStatsCollectorIT extends ParallelStatsDisabledIT {
 Statement stmt = conn.createStatement();
 try {
 stmt.execute(updateStatisticsSql);
+Assert.fail("Update Statistics SQL should have failed");
 } catch (SQLException e) {
 Assert.assertEquals("StatsCollectionDisabledOnServerException expected",
 1401, e.getErrorCode());
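The fix in both tests is the standard try/fail/catch idiom: if the statement that must fail ever succeeds, `fail()` trips the test; otherwise the catch block asserts the vendor error code and SQLState. A JUnit-free sketch of the same pattern using only `java.sql.SQLException`; `updateStatistics` is a hypothetical stand-in that always throws the code and state the diff checks (1401 / "STS01"):

```java
import java.sql.SQLException;

// Expected-failure test pattern: the success path must be unreachable, so a
// bare fall-through after the call raises an error; the catch block then
// verifies the exception's details rather than merely its type.
public class ExpectedFailurePattern {
    // Stand-in for stmt.execute("UPDATE STATISTICS ..."); always throws.
    static void updateStatistics() throws SQLException {
        throw new SQLException("Stats collection disabled on server", "STS01", 1401);
    }

    public static void main(String[] args) {
        try {
            updateStatistics();
            // Reaching here means the statement did NOT fail; fail the test.
            throw new AssertionError("Update Statistics SQL should have failed");
        } catch (SQLException e) {
            if (e.getErrorCode() != 1401) throw new AssertionError("wrong error code");
            if (!"STS01".equals(e.getSQLState())) throw new AssertionError("wrong SQLState");
        }
    }
}
```

Without the `fail()` line that PHOENIX-5125 adds, a silently succeeding statement would make the test pass vacuously, which is exactly the bug the commit closes.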



[phoenix] branch master updated: PHOENIX-5125 Some tests fail after PHOENIX-4009

2019-02-07 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 08c4496  PHOENIX-5125 Some tests fail after PHOENIX-4009
08c4496 is described below

commit 08c4496addd6417f6d3c3965fd3114f791e9fd44
Author: Karan Mehta 
AuthorDate: Wed Feb 6 05:28:53 2019 -0800

PHOENIX-5125 Some tests fail after PHOENIX-4009
---
 .../java/org/apache/phoenix/end2end/SpillableGroupByIT.java   | 11 ++-
 .../org/apache/phoenix/schema/stats/NoOpStatsCollectorIT.java |  4 
 2 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
index 3ed09c6..340760b 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SpillableGroupByIT.java
@@ -38,6 +38,7 @@ import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.QueryUtil;
 import org.apache.phoenix.util.ReadOnlyProps;
+import org.junit.Assert;
 import org.junit.BeforeClass;
 import org.junit.Test;
 
@@ -174,7 +175,15 @@ public class SpillableGroupByIT extends BaseOwnClusterIT {
 stmt.execute("UPSERT INTO " + tableName + " VALUES (2, 'NAME2')");
 stmt.execute("UPSERT INTO " + tableName + " VALUES (3, 'NAME3')");
 conn.commit();
-stmt.execute("UPDATE STATISTICS " + tableName);
+try {
+stmt.execute("UPDATE STATISTICS " + tableName);
+Assert.fail("Update Statistics SQL should have failed");
+} catch (SQLException e) {
+Assert.assertEquals("StatsCollectionDisabledOnServerException expected",
+1401, e.getErrorCode());
+Assert.assertEquals("StatsCollectionDisabledOnServerException expected",
+"STS01", e.getSQLState());
+}
 ResultSet rs = stmt.executeQuery("SELECT * FROM \"SYSTEM\".STATS");
 assertFalse(rs.next());
 rs.close();
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/schema/stats/NoOpStatsCollectorIT.java b/phoenix-core/src/it/java/org/apache/phoenix/schema/stats/NoOpStatsCollectorIT.java
index 04f4143..87f58d7 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/schema/stats/NoOpStatsCollectorIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/schema/stats/NoOpStatsCollectorIT.java
b/phoenix-core/src/it/java/org/apache/phoenix/schema/stats/NoOpStatsCollectorIT.java
@@ -20,6 +20,7 @@ package org.apache.phoenix.schema.stats;
 import com.google.common.collect.Maps;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
+import org.apache.phoenix.end2end.NeedsOwnMiniClusterTest;
 import org.apache.phoenix.end2end.ParallelStatsDisabledIT;
 import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.util.PropertiesUtil;
@@ -30,6 +31,7 @@ import org.junit.Assert;
 import org.junit.Before;
 import org.junit.BeforeClass;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 
 import java.sql.Array;
 import java.sql.Connection;
@@ -47,6 +49,7 @@ import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
  * Tests the behavior of stats collection code when stats are disabled on server side
  * explicitly using QueryServices#STATS_COLLECTION_ENABLED property
  */
+@Category(NeedsOwnMiniClusterTest.class)
 public class NoOpStatsCollectorIT extends ParallelStatsDisabledIT {
 
 private static final Log LOG = LogFactory.getLog(NoOpStatsCollectorIT.class);
@@ -90,6 +93,7 @@ public class NoOpStatsCollectorIT extends ParallelStatsDisabledIT {
 Statement stmt = conn.createStatement();
 try {
 stmt.execute(updateStatisticsSql);
+Assert.fail("Update Statistics SQL should have failed");
 } catch (SQLException e) {
 Assert.assertEquals("StatsCollectionDisabledOnServerException expected",
 1401, e.getErrorCode());



[phoenix] branch master updated: PHOENIX-5069 Use asynchronous refresh to provide non-blocking Phoenix Stats Client Cache

2019-02-12 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 982b09a  PHOENIX-5069 Use asynchronous refresh to provide non-blocking 
Phoenix Stats Client Cache
982b09a is described below

commit 982b09adb43e8f837b5e6a2cd23921ec08e33065
Author: Bin 
AuthorDate: Sun Oct 28 12:21:27 2018 -0700

PHOENIX-5069 Use asynchronous refresh to provide non-blocking Phoenix Stats 
Client Cache

Signed-off-by: Bin 
---
 .../org/apache/phoenix/query/GuidePostsCache.java  |  86 
 .../phoenix/query/PhoenixStatsCacheLoader.java |  90 
 .../apache/phoenix/query/PhoenixStatsLoader.java   |  57 
 .../org/apache/phoenix/query/QueryServices.java|   4 +-
 .../apache/phoenix/query/QueryServicesOptions.java |   7 +
 .../phoenix/schema/stats/GuidePostsInfo.java   |   6 +-
 .../phoenix/query/PhoenixStatsCacheLoaderTest.java | 156 +
 7 files changed, 377 insertions(+), 29 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/query/GuidePostsCache.java b/phoenix-core/src/main/java/org/apache/phoenix/query/GuidePostsCache.java
index 26e40f6..b24a1e3 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/GuidePostsCache.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/GuidePostsCache.java
@@ -22,11 +22,11 @@ import static org.apache.phoenix.query.QueryServicesOptions.DEFAULT_STATS_COLLEC
 import java.io.IOException;
 import java.util.List;
 import java.util.Objects;
-import java.util.concurrent.ExecutionException;
-import java.util.concurrent.TimeUnit;
+import java.util.concurrent.*;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HConstants;
+import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.TableNotFoundException;
 import org.apache.hadoop.hbase.client.Table;
 import org.apache.hadoop.hbase.client.TableDescriptor;
@@ -42,7 +42,6 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import com.google.common.cache.CacheBuilder;
-import com.google.common.cache.CacheLoader;
 import com.google.common.cache.LoadingCache;
 import com.google.common.cache.RemovalCause;
 import com.google.common.cache.RemovalListener;
@@ -61,18 +60,25 @@ public class GuidePostsCache {
 
 public GuidePostsCache(ConnectionQueryServices queryServices, Configuration config) {
 this.queryServices = Objects.requireNonNull(queryServices);
+
 // Number of millis to expire cache values after write
 final long statsUpdateFrequency = config.getLong(
 QueryServices.STATS_UPDATE_FREQ_MS_ATTRIB,
 QueryServicesOptions.DEFAULT_STATS_UPDATE_FREQ_MS);
-// Maximum number of entries (tables) to store in the cache at one time
+
+// Maximum total weight (size in bytes) of stats entries
 final long maxTableStatsCacheSize = config.getLong(
 QueryServices.STATS_MAX_CACHE_SIZE,
 QueryServicesOptions.DEFAULT_STATS_MAX_CACHE_SIZE);
+
 final boolean isStatsEnabled = config.getBoolean(STATS_COLLECTION_ENABLED, DEFAULT_STATS_COLLECTION_ENABLED);
+
+PhoenixStatsCacheLoader cacheLoader = new PhoenixStatsCacheLoader(
+isStatsEnabled ? new StatsLoaderImpl() : new EmptyStatsLoader(), config);
+
 cache = CacheBuilder.newBuilder()
-// Expire entries a given amount of time after they were written
-.expireAfterWrite(statsUpdateFrequency, TimeUnit.MILLISECONDS)
+// Refresh entries a given amount of time after they were written
+.refreshAfterWrite(statsUpdateFrequency, TimeUnit.MILLISECONDS)
 // Maximum total weight (size in bytes) of stats entries
 .maximumWeight(maxTableStatsCacheSize)
 // Defer actual size to the PTableStats.getEstimatedSize()
@@ -83,19 +89,38 @@ public class GuidePostsCache {
 })
 // Log removals at TRACE for debugging
 .removalListener(new PhoenixStatsCacheRemovalListener())
-// Automatically load the cache when entries are missing
-.build(isStatsEnabled ? new StatsLoader() : new EmptyStatsLoader());
+// Automatically load the cache when entries need to be refreshed
+.build(cacheLoader);
 }
 
 /**
- * {@link CacheLoader} implementation for the Phoenix Table Stats cache.
+ * {@link PhoenixStatsLoader} implementation for the Stats Loader.
  */
-protected class StatsLoader extends CacheLoader<GuidePostsKey, GuidePostsInfo> {
+protected class StatsLoaderImpl implements PhoenixStatsLoader {
+@Override
+public boolean needsLoad() {
+// For now, whenever it's called, we try to load stats from 
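The diff above is the core of PHOENIX-5069: swapping Guava's `expireAfterWrite` for `refreshAfterWrite` means a lookup that finds a stale guidepost entry keeps serving the old value while the reload runs in the background, instead of blocking the query until fresh stats arrive. A stdlib-only sketch of that semantics (all class, field, and method names here are illustrative, not Phoenix code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.function.Function;

// Minimal refresh-after-write sketch: reads return immediately with the
// cached value; a stale entry only schedules a reload on the executor.
class RefreshingCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long writtenAt;
        Entry(V value, long writtenAt) { this.value = value; this.writtenAt = writtenAt; }
    }

    private final Map<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final Function<K, V> loader;
    private final long refreshAfterMillis;
    private final ExecutorService executor;

    RefreshingCache(Function<K, V> loader, long refreshAfterMillis, ExecutorService executor) {
        this.loader = loader;
        this.refreshAfterMillis = refreshAfterMillis;
        this.executor = executor;
    }

    V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) {
            // First access loads synchronously, like CacheLoader.load().
            V v = loader.apply(key);
            map.put(key, new Entry<>(v, System.currentTimeMillis()));
            return v;
        }
        if (System.currentTimeMillis() - e.writtenAt > refreshAfterMillis) {
            // Stale entry: serve it now and refresh off the caller's thread,
            // analogous to refreshAfterWrite + an asynchronous reload().
            executor.submit(() ->
                    map.put(key, new Entry<>(loader.apply(key), System.currentTimeMillis())));
        }
        return e.value; // never blocks on a refresh
    }
}
```

Guava's `LoadingCache` gives this behavior out of the box once `refreshAfterWrite` is paired with a `CacheLoader` whose `reload()` runs asynchronously, which is what the new `PhoenixStatsCacheLoader` in this commit supplies.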

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5069 Use asynchronous refresh to provide non-blocking Phoenix Stats Client Cache

2019-02-12 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new ffbfc80  PHOENIX-5069 Use asynchronous refresh to provide non-blocking 
Phoenix Stats Client Cache
ffbfc80 is described below

commit ffbfc8012027fa399bdba9ee4ab1074fc30229cb
Author: Bin 
AuthorDate: Tue Feb 5 15:37:21 2019 -0800

PHOENIX-5069 Use asynchronous refresh to provide non-blocking Phoenix Stats 
Client Cache
---
 .../org/apache/phoenix/query/GuidePostsCache.java  |  86 
 .../phoenix/query/PhoenixStatsCacheLoader.java |  90 
 .../apache/phoenix/query/PhoenixStatsLoader.java   |  57 
 .../org/apache/phoenix/query/QueryServices.java|   4 +-
 .../apache/phoenix/query/QueryServicesOptions.java |   7 +
 .../phoenix/schema/stats/GuidePostsInfo.java   |   6 +-
 .../phoenix/query/PhoenixStatsCacheLoaderTest.java | 156 +
 7 files changed, 378 insertions(+), 28 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/query/GuidePostsCache.java b/phoenix-core/src/main/java/org/apache/phoenix/query/GuidePostsCache.java
index 066dd5f..2c2697a 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/GuidePostsCache.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/GuidePostsCache.java
@@ -22,12 +22,12 @@ import static org.apache.phoenix.query.QueryServicesOptions.DEFAULT_STATS_COLLEC
 import java.io.IOException;
 import java.util.List;
 import java.util.Objects;
-import java.util.concurrent.ExecutionException;
-import java.util.concurrent.TimeUnit;
+import java.util.concurrent.*;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.TableNotFoundException;
 import org.apache.hadoop.hbase.client.Table;
 import org.apache.hadoop.hbase.util.Bytes;
@@ -42,13 +42,13 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import com.google.common.cache.CacheBuilder;
-import com.google.common.cache.CacheLoader;
 import com.google.common.cache.LoadingCache;
 import com.google.common.cache.RemovalCause;
 import com.google.common.cache.RemovalListener;
 import com.google.common.cache.RemovalNotification;
 import com.google.common.cache.Weigher;
 
+
 /**
  * "Client-side" cache for storing {@link GuidePostsInfo} for a column family. 
Intended to decouple
  * Phoenix from a specific version of Guava's cache.
@@ -58,21 +58,29 @@ public class GuidePostsCache {
 
 private final ConnectionQueryServices queryServices;
 private final LoadingCache<GuidePostsKey, GuidePostsInfo> cache;
+private ExecutorService executor = null;
 
public GuidePostsCache(ConnectionQueryServices queryServices, Configuration config) {
 this.queryServices = Objects.requireNonNull(queryServices);
+
 // Number of millis to expire cache values after write
 final long statsUpdateFrequency = config.getLong(
 QueryServices.STATS_UPDATE_FREQ_MS_ATTRIB,
 QueryServicesOptions.DEFAULT_STATS_UPDATE_FREQ_MS);
-// Maximum number of entries (tables) to store in the cache at one time
+
+// Maximum total weight (size in bytes) of stats entries
 final long maxTableStatsCacheSize = config.getLong(
 QueryServices.STATS_MAX_CACHE_SIZE,
 QueryServicesOptions.DEFAULT_STATS_MAX_CACHE_SIZE);
+
final boolean isStatsEnabled = config.getBoolean(STATS_COLLECTION_ENABLED, DEFAULT_STATS_COLLECTION_ENABLED);
+
+PhoenixStatsCacheLoader cacheLoader = new PhoenixStatsCacheLoader(
+isStatsEnabled ? new StatsLoaderImpl() : new EmptyStatsLoader(), config);
+
 cache = CacheBuilder.newBuilder()
-// Expire entries a given amount of time after they were written
-.expireAfterWrite(statsUpdateFrequency, TimeUnit.MILLISECONDS)
+// Refresh entries a given amount of time after they were written
+.refreshAfterWrite(statsUpdateFrequency, TimeUnit.MILLISECONDS)
 // Maximum total weight (size in bytes) of stats entries
 .maximumWeight(maxTableStatsCacheSize)
 // Defer actual size to the PTableStats.getEstimatedSize()
@@ -84,18 +92,37 @@ public class GuidePostsCache {
 // Log removals at TRACE for debugging
 .removalListener(new PhoenixStatsCacheRemovalListener())
 // Automatically load the cache when entries are missing
-.build(isStatsEnabled ? new StatsLoader() : new EmptyStatsLoader());
+.build(cacheLoader);
 }
 
 /**
- * {@link CacheLoader} implementation for the Phoenix Table Stats cache.
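Besides the refresh policy, the builder chain above bounds the cache with `maximumWeight` plus a `Weigher` that defers to `PTableStats.getEstimatedSize()`, so eviction is driven by estimated bytes rather than entry count. A small stdlib sketch of weight-bounded, LRU-ordered eviction (illustrative names; not the Guava implementation):

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.ToLongFunction;

// Cache bounded by the summed weight of its values, evicting
// least-recently-used entries once the cap is exceeded.
class WeightBoundedCache<K, V> {
    private final long maxWeight;
    private final ToLongFunction<V> weigher; // e.g. estimated size in bytes
    private long totalWeight = 0;
    // accessOrder=true makes iteration order least-recently-used first
    private final LinkedHashMap<K, V> map = new LinkedHashMap<>(16, 0.75f, true);

    WeightBoundedCache(long maxWeight, ToLongFunction<V> weigher) {
        this.maxWeight = maxWeight;
        this.weigher = weigher;
    }

    synchronized void put(K key, V value) {
        V old = map.remove(key);
        if (old != null) totalWeight -= weigher.applyAsLong(old);
        map.put(key, value);
        totalWeight += weigher.applyAsLong(value);
        // Evict LRU entries until back under the weight cap.
        Iterator<Map.Entry<K, V>> it = map.entrySet().iterator();
        while (totalWeight > maxWeight && it.hasNext()) {
            Map.Entry<K, V> eldest = it.next();
            if (eldest.getKey().equals(key)) continue; // keep the entry just added
            totalWeight -= weigher.applyAsLong(eldest.getValue());
            it.remove();
        }
    }

    synchronized V get(K key) { return map.get(key); }
    synchronized long weight() { return totalWeight; }
}
```

A byte cap rather than an entry cap makes sense here because guidepost payloads vary wildly in size from table to table; one large table should not cost the same cache budget as one tiny one.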

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5069 Use asynchronous refresh to provide non-blocking Phoenix Stats Client Cache

2019-02-13 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new ab4be43  PHOENIX-5069 Use asynchronous refresh to provide non-blocking 
Phoenix Stats Client Cache
ab4be43 is described below

commit ab4be43a19991b1054e218cefbf688804851dbc5
Author: Bin 
AuthorDate: Wed Feb 13 15:30:28 2019 -0800

PHOENIX-5069 Use asynchronous refresh to provide non-blocking Phoenix Stats 
Client Cache
---
 .../org/apache/phoenix/query/GuidePostsCache.java  |  87 
 .../phoenix/query/PhoenixStatsCacheLoader.java |  90 
 .../apache/phoenix/query/PhoenixStatsLoader.java   |  57 
 .../org/apache/phoenix/query/QueryServices.java|   4 +-
 .../apache/phoenix/query/QueryServicesOptions.java |   7 +
 .../phoenix/schema/stats/GuidePostsInfo.java   |   6 +-
 .../phoenix/query/PhoenixStatsCacheLoaderTest.java | 156 +
 7 files changed, 378 insertions(+), 29 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/query/GuidePostsCache.java b/phoenix-core/src/main/java/org/apache/phoenix/query/GuidePostsCache.java
index 1d9fa36..436634c 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/GuidePostsCache.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/GuidePostsCache.java
@@ -23,12 +23,12 @@ import static org.apache.phoenix.query.QueryServicesOptions.DEFAULT_STATS_COLLEC
 import java.io.IOException;
 import java.util.List;
 import java.util.Objects;
-import java.util.concurrent.ExecutionException;
-import java.util.concurrent.TimeUnit;
+import java.util.concurrent.*;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.TableNotFoundException;
 import org.apache.hadoop.hbase.client.HTableInterface;
 import org.apache.hadoop.hbase.util.Bytes;
@@ -43,13 +43,13 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import com.google.common.cache.CacheBuilder;
-import com.google.common.cache.CacheLoader;
 import com.google.common.cache.LoadingCache;
 import com.google.common.cache.RemovalCause;
 import com.google.common.cache.RemovalListener;
 import com.google.common.cache.RemovalNotification;
 import com.google.common.cache.Weigher;
 
+
 /**
  * "Client-side" cache for storing {@link GuidePostsInfo} for a column family. 
Intended to decouple
  * Phoenix from a specific version of Guava's cache.
@@ -62,19 +62,26 @@ public class GuidePostsCache {
 
public GuidePostsCache(ConnectionQueryServices queryServices, Configuration config) {
 this.queryServices = Objects.requireNonNull(queryServices);
+
 // Number of millis to expire cache values after write
 final long statsUpdateFrequency = config.getLong(
 QueryServices.STATS_UPDATE_FREQ_MS_ATTRIB,
 QueryServicesOptions.DEFAULT_STATS_UPDATE_FREQ_MS);
-// Maximum number of entries (tables) to store in the cache at one time
+
+// Maximum total weight (size in bytes) of stats entries
 final long maxTableStatsCacheSize = config.getLong(
 QueryServices.STATS_MAX_CACHE_SIZE,
 QueryServicesOptions.DEFAULT_STATS_MAX_CACHE_SIZE);
+
final boolean isStatsEnabled = config.getBoolean(STATS_COLLECTION_ENABLED, DEFAULT_STATS_COLLECTION_ENABLED)
&& config.getBoolean(STATS_ENABLED_ATTRIB, true);
+
+PhoenixStatsCacheLoader cacheLoader = new PhoenixStatsCacheLoader(
+isStatsEnabled ? new StatsLoaderImpl() : new EmptyStatsLoader(), config);
+
 cache = CacheBuilder.newBuilder()
-// Expire entries a given amount of time after they were written
-.expireAfterWrite(statsUpdateFrequency, TimeUnit.MILLISECONDS)
+// Refresh entries a given amount of time after they were written
+.refreshAfterWrite(statsUpdateFrequency, TimeUnit.MILLISECONDS)
 // Maximum total weight (size in bytes) of stats entries
 .maximumWeight(maxTableStatsCacheSize)
 // Defer actual size to the PTableStats.getEstimatedSize()
@@ -86,19 +93,38 @@ public class GuidePostsCache {
 // Log removals at TRACE for debugging
 .removalListener(new PhoenixStatsCacheRemovalListener())
 // Automatically load the cache when entries are missing
-.build(isStatsEnabled ? new StatsLoader() : new EmptyStatsLoader());
+.build(cacheLoader);
 }
 
 /**
- * {@link CacheLoader} implementation for the Phoenix Table Stats cache.
+ * {@link PhoenixStatsLoader} implementation for the Stats Loader.
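The `PhoenixStatsCacheLoader` added by this commit overrides `CacheLoader.reload()` to run the stats fetch on its own executor and hand back a future, so the cache can keep serving the previous `GuidePostsInfo` until the load completes. A stripped-down, stdlib-only analogue, with `CompletableFuture` standing in for Guava's `ListenableFuture` (the names, and the fall-back-to-old-value-on-failure detail, are assumptions of this sketch):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Function;

// Analogue of an asynchronous CacheLoader.reload(key, oldValue): the load
// runs on a dedicated executor and the caller gets a future immediately.
class AsyncReloader<K, V> {
    private final Function<K, V> loader;
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    AsyncReloader(Function<K, V> loader) { this.loader = loader; }

    CompletableFuture<V> reload(K key, V oldValue) {
        return CompletableFuture.supplyAsync(() -> {
            try {
                return loader.apply(key);
            } catch (RuntimeException e) {
                // On a failed load, keep the stale value rather than
                // surfacing the error to the reading thread.
                return oldValue;
            }
        }, executor);
    }

    void shutdown() { executor.shutdown(); }
}
```

The design point is that the reading thread never pays for the reload: it either gets the old value while the future is pending, or the new value once the background load has swapped it in.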

[phoenix] branch 4.x-HBase-1.2 updated: PHOENIX-5069 Use asynchronous refresh to provide non-blocking Phoenix Stats Client Cache

2019-02-13 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.2 by this push:
 new 829cfa2  PHOENIX-5069 Use asynchronous refresh to provide non-blocking 
Phoenix Stats Client Cache
829cfa2 is described below

commit 829cfa25fefeb3874d7573921344b21289588ab2
Author: Bin 
AuthorDate: Wed Feb 13 15:30:28 2019 -0800

PHOENIX-5069 Use asynchronous refresh to provide non-blocking Phoenix Stats 
Client Cache
---
 .../org/apache/phoenix/query/GuidePostsCache.java  |  87 
 .../phoenix/query/PhoenixStatsCacheLoader.java |  90 
 .../apache/phoenix/query/PhoenixStatsLoader.java   |  57 
 .../org/apache/phoenix/query/QueryServices.java|   4 +-
 .../apache/phoenix/query/QueryServicesOptions.java |   7 +
 .../phoenix/schema/stats/GuidePostsInfo.java   |   6 +-
 .../phoenix/query/PhoenixStatsCacheLoaderTest.java | 156 +
 7 files changed, 378 insertions(+), 29 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/query/GuidePostsCache.java b/phoenix-core/src/main/java/org/apache/phoenix/query/GuidePostsCache.java
index 1d9fa36..436634c 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/GuidePostsCache.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/GuidePostsCache.java
@@ -23,12 +23,12 @@ import static org.apache.phoenix.query.QueryServicesOptions.DEFAULT_STATS_COLLEC
 import java.io.IOException;
 import java.util.List;
 import java.util.Objects;
-import java.util.concurrent.ExecutionException;
-import java.util.concurrent.TimeUnit;
+import java.util.concurrent.*;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.TableNotFoundException;
 import org.apache.hadoop.hbase.client.HTableInterface;
 import org.apache.hadoop.hbase.util.Bytes;
@@ -43,13 +43,13 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import com.google.common.cache.CacheBuilder;
-import com.google.common.cache.CacheLoader;
 import com.google.common.cache.LoadingCache;
 import com.google.common.cache.RemovalCause;
 import com.google.common.cache.RemovalListener;
 import com.google.common.cache.RemovalNotification;
 import com.google.common.cache.Weigher;
 
+
 /**
  * "Client-side" cache for storing {@link GuidePostsInfo} for a column family. 
Intended to decouple
  * Phoenix from a specific version of Guava's cache.
@@ -62,19 +62,26 @@ public class GuidePostsCache {
 
public GuidePostsCache(ConnectionQueryServices queryServices, Configuration config) {
 this.queryServices = Objects.requireNonNull(queryServices);
+
 // Number of millis to expire cache values after write
 final long statsUpdateFrequency = config.getLong(
 QueryServices.STATS_UPDATE_FREQ_MS_ATTRIB,
 QueryServicesOptions.DEFAULT_STATS_UPDATE_FREQ_MS);
-// Maximum number of entries (tables) to store in the cache at one time
+
+// Maximum total weight (size in bytes) of stats entries
 final long maxTableStatsCacheSize = config.getLong(
 QueryServices.STATS_MAX_CACHE_SIZE,
 QueryServicesOptions.DEFAULT_STATS_MAX_CACHE_SIZE);
+
final boolean isStatsEnabled = config.getBoolean(STATS_COLLECTION_ENABLED, DEFAULT_STATS_COLLECTION_ENABLED)
&& config.getBoolean(STATS_ENABLED_ATTRIB, true);
+
+PhoenixStatsCacheLoader cacheLoader = new PhoenixStatsCacheLoader(
+isStatsEnabled ? new StatsLoaderImpl() : new EmptyStatsLoader(), config);
+
 cache = CacheBuilder.newBuilder()
-// Expire entries a given amount of time after they were written
-.expireAfterWrite(statsUpdateFrequency, TimeUnit.MILLISECONDS)
+// Refresh entries a given amount of time after they were written
+.refreshAfterWrite(statsUpdateFrequency, TimeUnit.MILLISECONDS)
 // Maximum total weight (size in bytes) of stats entries
 .maximumWeight(maxTableStatsCacheSize)
 // Defer actual size to the PTableStats.getEstimatedSize()
@@ -86,19 +93,38 @@ public class GuidePostsCache {
 // Log removals at TRACE for debugging
 .removalListener(new PhoenixStatsCacheRemovalListener())
 // Automatically load the cache when entries are missing
-.build(isStatsEnabled ? new StatsLoader() : new EmptyStatsLoader());
+.build(cacheLoader);
 }
 
 /**
- * {@link CacheLoader} implementation for the Phoenix Table Stats cache.
+ * {@link PhoenixStatsLoader} implementation for the Stats Loader.

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5063 Create a new repo for the phoenix query server (#454)

2019-03-05 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new e6b22c0  PHOENIX-5063 Create a new repo for the phoenix query server 
(#454)
e6b22c0 is described below

commit e6b22c040de6d6271b8e6d610d1470cccd9de555
Author: karanmehta93 
AuthorDate: Tue Mar 5 17:17:59 2019 -0800

PHOENIX-5063 Create a new repo for the phoenix query server (#454)

Removed phoenix-load-balancer module
---
 phoenix-assembly/pom.xml   |  12 --
 phoenix-load-balancer/pom.xml  |  85 --
 .../phoenix/end2end/LoadBalancerEnd2EndIT.java | 144 -
 .../service/LoadBalanceZookeeperConfImpl.java  | 103 
 .../phoenix/loadbalancer/service/LoadBalancer.java | 178 -
 .../queryserver/register/ZookeeperRegistry.java|  72 -
 ...x.loadbalancer.service.LoadBalanceZookeeperConf |   1 -
 ...rg.apache.phoenix.queryserver.register.Registry |   1 -
 pom.xml|   6 -
 9 files changed, 602 deletions(-)

diff --git a/phoenix-assembly/pom.xml b/phoenix-assembly/pom.xml
index b9d7e99..b8c09ed 100644
--- a/phoenix-assembly/pom.xml
+++ b/phoenix-assembly/pom.xml
@@ -119,18 +119,6 @@
       <artifactId>phoenix-spark</artifactId>
     </dependency>
     <dependency>
-      <groupId>org.apache.phoenix</groupId>
-      <artifactId>phoenix-queryserver</artifactId>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.phoenix</groupId>
-      <artifactId>phoenix-queryserver-client</artifactId>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.phoenix</groupId>
-      <artifactId>phoenix-load-balancer</artifactId>
-    </dependency>
-    <dependency>
       <groupId>org.apache.omid</groupId>
       <artifactId>omid-hbase-tools-hbase1.x</artifactId>
       <version>${omid.version}</version>
diff --git a/phoenix-load-balancer/pom.xml b/phoenix-load-balancer/pom.xml
deleted file mode 100644
index 51884e7..000
--- a/phoenix-load-balancer/pom.xml
+++ /dev/null
@@ -1,85 +0,0 @@
-
-
-
-<project xmlns="http://maven.apache.org/POM/4.0.0"
- xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
- xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.phoenix</groupId>
-    <artifactId>phoenix</artifactId>
-    <version>4.15.0-HBase-1.4-SNAPSHOT</version>
-  </parent>
-  <artifactId>phoenix-load-balancer</artifactId>
-  <name>Phoenix Load Balancer</name>
-  <description>A Load balancer which routes calls to Phoenix Query Server</description>
-
-  <dependencies>
-    <dependency>
-      <groupId>org.apache.hbase</groupId>
-      <artifactId>hbase-common</artifactId>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.curator</groupId>
-      <artifactId>curator-client</artifactId>
-    </dependency>
-    <dependency>
-      <groupId>org.apache.phoenix</groupId>
-      <artifactId>phoenix-queryserver</artifactId>
-    </dependency>
-
-    <dependency>
-      <groupId>org.apache.curator</groupId>
-      <artifactId>curator-test</artifactId>
-      <scope>test</scope>
-    </dependency>
-  </dependencies>
-
-  <build>
-    <plugins>
-      <plugin>
-        <artifactId>maven-source-plugin</artifactId>
-        <executions>
-          <execution>
-            <id>attach-sources</id>
-            <phase>verify</phase>
-            <goals>
-              <goal>jar-no-fork</goal>
-              <goal>test-jar-no-fork</goal>
-            </goals>
-          </execution>
-        </executions>
-      </plugin>
-      <plugin>
-        <groupId>org.apache.rat</groupId>
-        <artifactId>apache-rat-plugin</artifactId>
-        <configuration>
-          <excludes>
-            <exclude>src/main/resources/META-INF/services/org.apache.phoenix.loadbalancer.service.LoadBalanceZookeeperConf</exclude>
-            <exclude>src/main/resources/META-INF/services/org.apache.phoenix.queryserver.register.Registry</exclude>
-          </excludes>
-        </configuration>
-      </plugin>
-    </plugins>
-  </build>
-
-</project>
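The `META-INF/services` files excluded from the rat check above are Java `ServiceLoader` provider-configuration files: at runtime the query server discovers implementations such as `LoadBalanceZookeeperConfImpl` and `ZookeeperRegistry` by reading the file named after the interface. A generic sketch of the mechanism (the `Greeter` interface here is invented for illustration):

```java
import java.util.ServiceLoader;

// ServiceLoader.load(X.class) scans the classpath for files named
// META-INF/services/<fully.qualified.X> and instantiates each provider
// class listed inside, one per line.
class ServiceLoaderDemo {
    public interface Greeter {
        String greet();
    }

    // Counts the Greeter implementations registered on the classpath.
    public static int countProviders() {
        int n = 0;
        for (Greeter g : ServiceLoader.load(Greeter.class)) {
            n++;
        }
        return n;
    }
}
```

Because registration lives in a resource file rather than code, moving the modules to the new repository only required moving those files along with the classes they name, with no source changes in the consumers.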
diff --git a/phoenix-load-balancer/src/it/java/org/apache/phoenix/end2end/LoadBalancerEnd2EndIT.java b/phoenix-load-balancer/src/it/java/org/apache/phoenix/end2end/LoadBalancerEnd2EndIT.java
deleted file mode 100644
index a5e2c9b..000
--- a/phoenix-load-balancer/src/it/java/org/apache/phoenix/end2end/LoadBalancerEnd2EndIT.java
+++ /dev/null
@@ -1,144 +0,0 @@
-/**
- *
- * Licensed to the Apache Software Foundation (ASF) under one
- * or more contributor license agreements.  See the NOTICE file
- * distributed with this work for additional information
- * regarding copyright ownership.  The ASF licenses this file
- * to you under the Apache License, Version 2.0 (the
- * "License"); you may not use this file except in compliance
- * with the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-package org.apache.phoenix.end2end;
-
-import com.google.common.net.HostAndPort;
-import org.apache.commons.logging.Log;
-import org.apache.commons.logging.LogFactory;
-import org.apache.curator.CuratorZookeeperClient;
-import org.apache.curator.framework.CuratorFramework;
-import org.apache.curator.framework.CuratorFrameworkFactory;
-import org.apache.curator.retry.ExponentialBackoffRetry;
-import org.apache.curator.TestingServer;
-import org.apache.curator.utils.CloseableUtils;
-import org.apache.ph

[phoenix-queryserver] branch master updated: PHOENIX-5063 Create a new repo for the phoenix query server (#3)

2019-03-05 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix-queryserver.git


The following commit(s) were added to refs/heads/master by this push:
 new 8068436  PHOENIX-5063 Create a new repo for the phoenix query server 
(#3)
8068436 is described below

commit 8068436b9b9c89f2b15736492b582604ac1fb80d
Author: karanmehta93 
AuthorDate: Tue Mar 5 17:18:10 2019 -0800

PHOENIX-5063 Create a new repo for the phoenix query server (#3)

Added load-balancer module
---
 load-balancer/pom.xml  |  85 ++
 .../phoenix/end2end/LoadBalancerEnd2EndIT.java | 144 +
 .../service/LoadBalanceZookeeperConfImpl.java  | 103 
 .../phoenix/loadbalancer/service/LoadBalancer.java | 178 +
 .../queryserver/register/ZookeeperRegistry.java|  72 +
 ...x.loadbalancer.service.LoadBalanceZookeeperConf |   1 +
 ...rg.apache.phoenix.queryserver.register.Registry |   1 +
 pom.xml|  23 +++
 8 files changed, 607 insertions(+)

diff --git a/load-balancer/pom.xml b/load-balancer/pom.xml
new file mode 100644
index 000..cb893f7
--- /dev/null
+++ b/load-balancer/pom.xml
@@ -0,0 +1,85 @@
+
+
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <modelVersion>4.0.0</modelVersion>
+  <parent>
+    <groupId>org.apache.phoenix</groupId>
+    <artifactId>phoenix-queryserver</artifactId>
+    <version>1.0.0-SNAPSHOT</version>
+  </parent>
+  <artifactId>load-balancer</artifactId>
+  <name>Phoenix Load Balancer</name>
+  <description>A Load balancer which routes calls to Phoenix Query Server</description>
+
+  <dependencies>
+    <dependency>
+      <groupId>org.apache.hbase</groupId>
+      <artifactId>hbase-common</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.curator</groupId>
+      <artifactId>curator-client</artifactId>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.phoenix</groupId>
+      <artifactId>queryserver</artifactId>
+    </dependency>
+
+    <dependency>
+      <groupId>org.apache.curator</groupId>
+      <artifactId>curator-test</artifactId>
+      <scope>test</scope>
+    </dependency>
+  </dependencies>
+
+  <build>
+    <plugins>
+      <plugin>
+        <artifactId>maven-source-plugin</artifactId>
+        <executions>
+          <execution>
+            <id>attach-sources</id>
+            <phase>verify</phase>
+            <goals>
+              <goal>jar-no-fork</goal>
+              <goal>test-jar-no-fork</goal>
+            </goals>
+          </execution>
+        </executions>
+      </plugin>
+      <plugin>
+        <groupId>org.apache.rat</groupId>
+        <artifactId>apache-rat-plugin</artifactId>
+        <configuration>
+          <excludes>
+            <exclude>src/main/resources/META-INF/services/org.apache.phoenix.loadbalancer.service.LoadBalanceZookeeperConf</exclude>
+            <exclude>src/main/resources/META-INF/services/org.apache.phoenix.queryserver.register.Registry</exclude>
+          </excludes>
+        </configuration>
+      </plugin>
+    </plugins>
+  </build>
+
+</project>
diff --git a/load-balancer/src/it/java/org/apache/phoenix/end2end/LoadBalancerEnd2EndIT.java b/load-balancer/src/it/java/org/apache/phoenix/end2end/LoadBalancerEnd2EndIT.java
new file mode 100644
index 000..a5e2c9b
--- /dev/null
+++ b/load-balancer/src/it/java/org/apache/phoenix/end2end/LoadBalancerEnd2EndIT.java
@@ -0,0 +1,144 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import com.google.common.net.HostAndPort;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.curator.CuratorZookeeperClient;
+import org.apache.curator.framework.CuratorFramework;
+import org.apache.curator.framework.CuratorFrameworkFactory;
+import org.apache.curator.retry.ExponentialBackoffRetry;
+import org.apache.curator.TestingServer;
+import org.apache.curator.utils.CloseableUtils;
+import org.apache.phoenix.loadbalancer.service.LoadBalancer;
+import org.apache.phoenix.loadbalancer.service.LoadBalanceZookeeperConf;
+import org.apache.phoenix.loadbalancer.service.LoadBalanceZookeeperConfImpl;
+import org.apache.phoenix.queryserver.register.Registry;
+import org.apache.phoenix.queryserver.register.ZookeeperRegistry;
+import org.apache.zookeeper.KeeperException;
+import org.junit.*;
+
+import java.util.Arrays;
+import java.util.List;
+
+public class LoadBalancerEnd2EndIT {
+private static TestingServer testingServer;
+private static CuratorFramework curatorFramework;
+private static final Log LOG = LogFactory.getLog(LoadBalancerEnd2EndIT.class);
+ 
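The integration test above stands up a Curator `TestingServer` and exercises the module's `ZookeeperRegistry`/`LoadBalancer` pair: query servers register themselves as ZNodes, and a thin client asks the load balancer for one live `host:port` to connect to. The routing decision itself can be sketched without ZooKeeper; here the registry is faked with a plain list, and the method name and random-pick policy are assumptions for illustration:

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

// Client-side choice a registry-backed load balancer makes: pick one live
// server from the currently registered set.
class QueryServerPicker {
    private final List<String> liveServers; // e.g. ZNode children, "host:port"

    QueryServerPicker(List<String> liveServers) {
        if (liveServers.isEmpty()) {
            throw new IllegalStateException("no Phoenix Query Server registered");
        }
        this.liveServers = liveServers;
    }

    String pick() {
        return liveServers.get(ThreadLocalRandom.current().nextInt(liveServers.size()));
    }
}
```

In the real module the list is kept fresh by ZooKeeper watches, so a crashed query server's ephemeral node disappears and it simply stops being a candidate.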

[phoenix] branch 4.x-HBase-1.3 updated (285118a -> 881669b)

2019-03-05 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a change to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git.


from 285118a  PHOENIX-4345 Error message for incorrect index is not accurate
 new 9ccaa80  PHOENIX-5063 Create a new repo for the phoenix query server 
(#422)
 new 881669b  PHOENIX-5063 Create a new repo for the phoenix query server 
(#454)

The 2682 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 phoenix-assembly/pom.xml   |  12 -
 phoenix-load-balancer/pom.xml  |  85 ---
 .../phoenix/end2end/LoadBalancerEnd2EndIT.java | 144 -
 .../service/LoadBalanceZookeeperConfImpl.java  | 103 
 .../phoenix/loadbalancer/service/LoadBalancer.java | 178 --
 .../queryserver/register/ZookeeperRegistry.java|  72 ---
 ...x.loadbalancer.service.LoadBalanceZookeeperConf |   1 -
 ...rg.apache.phoenix.queryserver.register.Registry |   1 -
 phoenix-queryserver-client/pom.xml | 203 ---
 .../apache/phoenix/queryserver/client/Driver.java  |  49 --
 .../phoenix/queryserver/client/SqllineWrapper.java |  97 
 .../phoenix/queryserver/client/ThinClientUtil.java |  42 --
 .../resources/META-INF/services/java.sql.Driver|   1 -
 .../org-apache-phoenix-remote-jdbc.properties  |  25 -
 phoenix-queryserver/pom.xml| 194 ---
 .../src/build/query-server-runnable.xml|  52 --
 phoenix-queryserver/src/it/bin/test_phoenixdb.py   |  39 --
 phoenix-queryserver/src/it/bin/test_phoenixdb.sh   |  79 ---
 .../HttpParamImpersonationQueryServerIT.java   | 438 ---
 .../phoenix/end2end/QueryServerBasicsIT.java   | 346 
 .../phoenix/end2end/QueryServerTestUtil.java   | 187 ---
 .../apache/phoenix/end2end/QueryServerThread.java  |  45 --
 .../phoenix/end2end/SecureQueryServerIT.java   | 323 ---
 .../end2end/SecureQueryServerPhoenixDBIT.java  | 424 --
 .../phoenix/end2end/ServerCustomizersIT.java   | 149 -
 .../src/it/resources/log4j.properties  |  68 ---
 .../service/LoadBalanceZookeeperConf.java  |  42 --
 .../phoenix/queryserver/register/Registry.java |  48 --
 .../server/AvaticaServerConfigurationFactory.java  |  37 --
 .../queryserver/server/PhoenixMetaFactory.java |  28 -
 .../queryserver/server/PhoenixMetaFactoryImpl.java |  76 ---
 .../phoenix/queryserver/server/QueryServer.java| 606 -
 .../server/RemoteUserExtractorFactory.java |  36 --
 .../server/ServerCustomizersFactory.java   |  52 --
 .../org/apache/phoenix/DriverCohabitationTest.java |  65 ---
 .../CustomAvaticaServerConfigurationTest.java  |  37 --
 .../server/PhoenixDoAsCallbackTest.java|  89 ---
 .../server/PhoenixRemoteUserExtractorTest.java | 108 
 .../server/QueryServerConfigurationTest.java   |  92 
 .../server/RemoteUserExtractorFactoryTest.java |  35 --
 .../queryserver/server/ServerCustomizersTest.java  |  92 
 pom.xml|   8 -
 42 files changed, 4808 deletions(-)
 delete mode 100644 phoenix-load-balancer/pom.xml
 delete mode 100644 phoenix-load-balancer/src/it/java/org/apache/phoenix/end2end/LoadBalancerEnd2EndIT.java
 delete mode 100644 phoenix-load-balancer/src/main/java/org/apache/phoenix/loadbalancer/service/LoadBalanceZookeeperConfImpl.java
 delete mode 100644 phoenix-load-balancer/src/main/java/org/apache/phoenix/loadbalancer/service/LoadBalancer.java
 delete mode 100644 phoenix-load-balancer/src/main/java/org/apache/phoenix/queryserver/register/ZookeeperRegistry.java
 delete mode 100644 phoenix-load-balancer/src/main/resources/META-INF/services/org.apache.phoenix.loadbalancer.service.LoadBalanceZookeeperConf
 delete mode 100644 phoenix-load-balancer/src/main/resources/META-INF/services/org.apache.phoenix.queryserver.register.Registry
 delete mode 100644 phoenix-queryserver-client/pom.xml
 delete mode 100644 phoenix-queryserver-client/src/main/java/org/apache/phoenix/queryserver/client/Driver.java
 delete mode 100644 phoenix-queryserver-client/src/main/java/org/apache/phoenix/queryserver/client/SqllineWrapper.java
 delete mode 100644 phoenix-queryserver-client/src/main/java/org/apache/phoenix/queryserver/client/ThinClientUtil.java
 delete mode 100644 phoenix-queryserver-client/src/main/resources/META-INF/services/java.sql.Driver
 delete mode 100644 phoenix-queryserver-client/src/main/resources/version/org-apache-phoenix-remote-jdbc.properties
 delete mode 100644 phoenix-queryserver/pom.xml
 delete mode 100644 phoenix-queryserver/src/build/query-server-runnable.xml
 delete mode 100644 phoenix-qu

[phoenix] branch 4.x-HBase-1.2 updated (a979346 -> c5f769e)

2019-03-05 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a change to branch 4.x-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git.


from a979346  PHOENIX-4345 Error message for incorrect index is not accurate
 new d93c42d  PHOENIX-5063 Create a new repo for the phoenix query server 
(#422)
 new c5f769e  PHOENIX-5063 Create a new repo for the phoenix query server 
(#454)

The 2674 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 phoenix-assembly/pom.xml   |  12 -
 phoenix-load-balancer/pom.xml  |  85 ---
 .../phoenix/end2end/LoadBalancerEnd2EndIT.java | 144 -
 .../service/LoadBalanceZookeeperConfImpl.java  | 103 
 .../phoenix/loadbalancer/service/LoadBalancer.java | 178 --
 .../queryserver/register/ZookeeperRegistry.java|  72 ---
 ...x.loadbalancer.service.LoadBalanceZookeeperConf |   1 -
 ...rg.apache.phoenix.queryserver.register.Registry |   1 -
 phoenix-queryserver-client/pom.xml | 203 ---
 .../apache/phoenix/queryserver/client/Driver.java  |  49 --
 .../phoenix/queryserver/client/SqllineWrapper.java |  97 
 .../phoenix/queryserver/client/ThinClientUtil.java |  42 --
 .../resources/META-INF/services/java.sql.Driver|   1 -
 .../org-apache-phoenix-remote-jdbc.properties  |  25 -
 phoenix-queryserver/pom.xml| 194 ---
 .../src/build/query-server-runnable.xml|  52 --
 phoenix-queryserver/src/it/bin/test_phoenixdb.py   |  39 --
 phoenix-queryserver/src/it/bin/test_phoenixdb.sh   |  79 ---
 .../HttpParamImpersonationQueryServerIT.java   | 438 ---
 .../phoenix/end2end/QueryServerBasicsIT.java   | 346 
 .../phoenix/end2end/QueryServerTestUtil.java   | 187 ---
 .../apache/phoenix/end2end/QueryServerThread.java  |  45 --
 .../phoenix/end2end/SecureQueryServerIT.java   | 323 ---
 .../end2end/SecureQueryServerPhoenixDBIT.java  | 424 --
 .../phoenix/end2end/ServerCustomizersIT.java   | 149 -
 .../src/it/resources/log4j.properties  |  68 ---
 .../service/LoadBalanceZookeeperConf.java  |  42 --
 .../phoenix/queryserver/register/Registry.java |  48 --
 .../server/AvaticaServerConfigurationFactory.java  |  37 --
 .../queryserver/server/PhoenixMetaFactory.java |  28 -
 .../queryserver/server/PhoenixMetaFactoryImpl.java |  76 ---
 .../phoenix/queryserver/server/QueryServer.java| 606 -
 .../server/RemoteUserExtractorFactory.java |  36 --
 .../server/ServerCustomizersFactory.java   |  52 --
 .../org/apache/phoenix/DriverCohabitationTest.java |  65 ---
 .../CustomAvaticaServerConfigurationTest.java  |  37 --
 .../server/PhoenixDoAsCallbackTest.java|  89 ---
 .../server/PhoenixRemoteUserExtractorTest.java | 108 
 .../server/QueryServerConfigurationTest.java   |  92 
 .../server/RemoteUserExtractorFactoryTest.java |  35 --
 .../queryserver/server/ServerCustomizersTest.java  |  92 
 pom.xml|   8 -
 42 files changed, 4808 deletions(-)
 delete mode 100644 phoenix-load-balancer/pom.xml
 delete mode 100644 phoenix-load-balancer/src/it/java/org/apache/phoenix/end2end/LoadBalancerEnd2EndIT.java
 delete mode 100644 phoenix-load-balancer/src/main/java/org/apache/phoenix/loadbalancer/service/LoadBalanceZookeeperConfImpl.java
 delete mode 100644 phoenix-load-balancer/src/main/java/org/apache/phoenix/loadbalancer/service/LoadBalancer.java
 delete mode 100644 phoenix-load-balancer/src/main/java/org/apache/phoenix/queryserver/register/ZookeeperRegistry.java
 delete mode 100644 phoenix-load-balancer/src/main/resources/META-INF/services/org.apache.phoenix.loadbalancer.service.LoadBalanceZookeeperConf
 delete mode 100644 phoenix-load-balancer/src/main/resources/META-INF/services/org.apache.phoenix.queryserver.register.Registry
 delete mode 100644 phoenix-queryserver-client/pom.xml
 delete mode 100644 phoenix-queryserver-client/src/main/java/org/apache/phoenix/queryserver/client/Driver.java
 delete mode 100644 phoenix-queryserver-client/src/main/java/org/apache/phoenix/queryserver/client/SqllineWrapper.java
 delete mode 100644 phoenix-queryserver-client/src/main/java/org/apache/phoenix/queryserver/client/ThinClientUtil.java
 delete mode 100644 phoenix-queryserver-client/src/main/resources/META-INF/services/java.sql.Driver
 delete mode 100644 phoenix-queryserver-client/src/main/resources/version/org-apache-phoenix-remote-jdbc.properties
 delete mode 100644 phoenix-queryserver/pom.xml
 delete mode 100644 phoenix-queryserver/src/build/query-server-runnable.xml
 delete mode 100644 phoenix-qu

[phoenix] branch master updated (6d13f5b -> 222ac02)

2019-03-05 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git.


from 6d13f5b  PHOENIX-4345 Error message for incorrect index is not accurate
 new d93babd  PHOENIX-5063 Create a new repo for the phoenix query server 
(#422)
 new 222ac02  PHOENIX-5063 Create a new repo for the phoenix query server 
(#454)

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 phoenix-assembly/pom.xml   |  12 -
 phoenix-load-balancer/pom.xml  |  86 ---
 .../phoenix/end2end/LoadBalancerEnd2EndIT.java | 144 -
 .../service/LoadBalanceZookeeperConfImpl.java  | 103 
 .../phoenix/loadbalancer/service/LoadBalancer.java | 178 --
 .../queryserver/register/ZookeeperRegistry.java|  72 ---
 ...x.loadbalancer.service.LoadBalanceZookeeperConf |   1 -
 ...rg.apache.phoenix.queryserver.register.Registry |   1 -
 phoenix-queryserver-client/pom.xml | 203 ---
 .../apache/phoenix/queryserver/client/Driver.java  |  49 --
 .../phoenix/queryserver/client/SqllineWrapper.java |  97 
 .../phoenix/queryserver/client/ThinClientUtil.java |  42 --
 .../resources/META-INF/services/java.sql.Driver|   1 -
 .../org-apache-phoenix-remote-jdbc.properties  |  25 -
 phoenix-queryserver/pom.xml| 225 
 .../src/build/query-server-runnable.xml|  52 --
 phoenix-queryserver/src/it/bin/test_phoenixdb.py   |  39 --
 phoenix-queryserver/src/it/bin/test_phoenixdb.sh   |  79 ---
 .../HttpParamImpersonationQueryServerIT.java   | 438 ---
 .../phoenix/end2end/QueryServerBasicsIT.java   | 346 
 .../phoenix/end2end/QueryServerTestUtil.java   | 187 ---
 .../apache/phoenix/end2end/QueryServerThread.java  |  45 --
 .../phoenix/end2end/SecureQueryServerIT.java   | 323 ---
 .../end2end/SecureQueryServerPhoenixDBIT.java  | 424 --
 .../phoenix/end2end/ServerCustomizersIT.java   | 149 -
 .../src/it/resources/log4j.properties  |  68 ---
 .../service/LoadBalanceZookeeperConf.java  |  42 --
 .../phoenix/queryserver/register/Registry.java |  48 --
 .../server/AvaticaServerConfigurationFactory.java  |  37 --
 .../queryserver/server/PhoenixMetaFactory.java |  28 -
 .../queryserver/server/PhoenixMetaFactoryImpl.java |  76 ---
 .../phoenix/queryserver/server/QueryServer.java| 606 -
 .../server/RemoteUserExtractorFactory.java |  36 --
 .../server/ServerCustomizersFactory.java   |  52 --
 .../org/apache/phoenix/DriverCohabitationTest.java |  65 ---
 .../CustomAvaticaServerConfigurationTest.java  |  37 --
 .../server/PhoenixDoAsCallbackTest.java|  89 ---
 .../server/PhoenixRemoteUserExtractorTest.java | 108 
 .../server/QueryServerConfigurationTest.java   |  92 
 .../server/RemoteUserExtractorFactoryTest.java |  35 --
 .../queryserver/server/ServerCustomizersTest.java  |  92 
 pom.xml|   8 -
 42 files changed, 4840 deletions(-)
 delete mode 100644 phoenix-load-balancer/pom.xml
 delete mode 100644 
phoenix-load-balancer/src/it/java/org/apache/phoenix/end2end/LoadBalancerEnd2EndIT.java
 delete mode 100644 
phoenix-load-balancer/src/main/java/org/apache/phoenix/loadbalancer/service/LoadBalanceZookeeperConfImpl.java
 delete mode 100644 
phoenix-load-balancer/src/main/java/org/apache/phoenix/loadbalancer/service/LoadBalancer.java
 delete mode 100644 
phoenix-load-balancer/src/main/java/org/apache/phoenix/queryserver/register/ZookeeperRegistry.java
 delete mode 100644 
phoenix-load-balancer/src/main/resources/META-INF/services/org.apache.phoenix.loadbalancer.service.LoadBalanceZookeeperConf
 delete mode 100644 
phoenix-load-balancer/src/main/resources/META-INF/services/org.apache.phoenix.queryserver.register.Registry
 delete mode 100644 phoenix-queryserver-client/pom.xml
 delete mode 100644 
phoenix-queryserver-client/src/main/java/org/apache/phoenix/queryserver/client/Driver.java
 delete mode 100644 
phoenix-queryserver-client/src/main/java/org/apache/phoenix/queryserver/client/SqllineWrapper.java
 delete mode 100644 
phoenix-queryserver-client/src/main/java/org/apache/phoenix/queryserver/client/ThinClientUtil.java
 delete mode 100644 
phoenix-queryserver-client/src/main/resources/META-INF/services/java.sql.Driver
 delete mode 100644 
phoenix-queryserver-client/src/main/resources/version/org-apache-phoenix-remote-jdbc.properties
 delete mode 100644 phoenix-queryserver/pom.xml
 delete mode 100644 phoenix-queryserver/src/build/query-server-runnable.xml
 delete mode 100644 phoenix-qu

[phoenix] branch phoenix-stats created (now a9e46be)

2019-03-12 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a change to branch phoenix-stats
in repository https://gitbox.apache.org/repos/asf/phoenix.git.


  at a9e46be  PHOENIX-5188 - IndexedKeyValue should populate KeyValue fields

No new revisions were added by this update.



[phoenix] branch master updated: PHOENIX-5185 support Math PI function (#461)

2019-03-14 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new c73947a  PHOENIX-5185 support Math PI function (#461)
c73947a is described below

commit c73947a47ddcf7be742625c44fb42e01cf419527
Author: Xinyi Yan 
AuthorDate: Thu Mar 14 10:45:45 2019 -0700

PHOENIX-5185 support Math PI function (#461)
---
 .../phoenix/end2end/LnLogFunctionEnd2EndIT.java| 16 --
 .../phoenix/end2end/MathPIFunctionEnd2EndIT.java   | 61 
 .../phoenix/end2end/PowerFunctionEnd2EndIT.java| 16 --
 .../apache/phoenix/expression/ExpressionType.java  |  1 +
 .../expression/function/MathPIFunction.java| 65 ++
 .../apache/phoenix/expression/ExpFunctionTest.java | 19 +--
 .../phoenix/expression/LnLogFunctionTest.java  | 23 ++--
 .../phoenix/expression/MathPIFunctionTest.java | 44 +++
 .../phoenix/expression/PowerFunctionTest.java  | 22 ++--
 .../phoenix/expression/SqrtFunctionTest.java   | 20 +--
 .../java/org/apache/phoenix/query/BaseTest.java| 17 +-
 11 files changed, 199 insertions(+), 105 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/LnLogFunctionEnd2EndIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/LnLogFunctionEnd2EndIT.java
index ddbe2ad..d3d1b51 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/LnLogFunctionEnd2EndIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/LnLogFunctionEnd2EndIT.java
@@ -36,25 +36,9 @@ import org.junit.Test;
 public class LnLogFunctionEnd2EndIT extends ParallelStatsDisabledIT {
 
 private static final String KEY = "key";
-private static final double ZERO = 1e-9;
 private String signedTableName;
 private String unsignedTableName;
 
-private static boolean twoDoubleEquals(double a, double b) {
-if (Double.isNaN(a) ^ Double.isNaN(b)) return false;
-if (Double.isNaN(a)) return true;
-if (Double.isInfinite(a) ^ Double.isInfinite(b)) return false;
-if (Double.isInfinite(a)) {
-if ((a > 0) ^ (b > 0)) return false;
-else return true;
-}
-if (Math.abs(a - b) <= ZERO) {
-return true;
-} else {
-return false;
-}
-}
-
 @Before
 public void initTable() throws Exception {
 Connection conn = null;
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MathPIFunctionEnd2EndIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MathPIFunctionEnd2EndIT.java
new file mode 100644
index 000..9594aec
--- /dev/null
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MathPIFunctionEnd2EndIT.java
@@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.*;
+
+import java.sql.*;
+
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.expression.function.MathPIFunction;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * End to end tests for {@link MathPIFunction}
+ */
+public class MathPIFunctionEnd2EndIT extends ParallelStatsDisabledIT {
+
+@Test
+public void testGetMathPIValue() throws Exception {
+Connection conn  = DriverManager.getConnection(getUrl());
+ResultSet rs = conn.createStatement().executeQuery("SELECT PI()");
+assertTrue(rs.next());
+assertTrue(twoDoubleEquals(rs.getDouble(1), Math.PI));
+assertFalse(rs.next());
+}
+
+@Test
+public void testMathPIRoundTwoDecimal() throws Exception {
+Connection conn  = DriverManager.getConnection(getUrl());
+ResultSet rs = conn.createStatement().executeQuery("SELECT ROUND(PI(), 
2)");
+assertTrue(rs.next());
+assertTrue(twoDoubleEquals(rs.getDouble(1), 3.14));
+assertFalse(rs.next());
+}
+
+@Test
+public void testMathPIFunctionWithIncorrectFormat() throws Exception {
+  
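
The commit above removes the per-class `twoDoubleEquals` helper from each `*End2EndIT` (the diffstat suggests it is consolidated into `BaseTest`). A minimal standalone sketch of that comparison logic, with the `1e-9` tolerance taken from the removed hunks; the wrapper class and `main` are added here only for illustration:

```java
// Sketch of the tolerance-based double comparison removed from
// LnLogFunctionEnd2EndIT and friends by this patch. The NaN/infinity
// handling and the 1e-9 epsilon mirror the removed code.
public class TwoDoubleEqualsSketch {
    private static final double ZERO = 1e-9;

    static boolean twoDoubleEquals(double a, double b) {
        if (Double.isNaN(a) ^ Double.isNaN(b)) return false;
        if (Double.isNaN(a)) return true;                      // both NaN: treated as equal
        if (Double.isInfinite(a) ^ Double.isInfinite(b)) return false;
        if (Double.isInfinite(a)) return (a > 0) == (b > 0);   // same-signed infinities only
        return Math.abs(a - b) <= ZERO;                        // finite: absolute tolerance
    }

    public static void main(String[] args) {
        System.out.println(twoDoubleEquals(Math.PI, 3.141592653589793));
        System.out.println(twoDoubleEquals(3.14, Math.PI));
    }
}
```

This is why the `MathPIFunctionEnd2EndIT` assertions above can call `twoDoubleEquals` without declaring it locally.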

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5185 support Math PI function

2019-03-14 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new a429f0b  PHOENIX-5185 support Math PI function
a429f0b is described below

commit a429f0b2280cfcf918b19d3edcf961ae812a8eec
Author: Xinyi Yan 
AuthorDate: Thu Mar 14 14:44:54 2019 -0700

PHOENIX-5185 support Math PI function
---
 .../phoenix/end2end/LnLogFunctionEnd2EndIT.java| 16 --
 .../phoenix/end2end/MathPIFunctionEnd2EndIT.java   | 61 
 .../phoenix/end2end/PowerFunctionEnd2EndIT.java| 16 --
 .../apache/phoenix/expression/ExpressionType.java  |  1 +
 .../expression/function/MathPIFunction.java| 65 ++
 .../apache/phoenix/expression/ExpFunctionTest.java | 19 +--
 .../phoenix/expression/LnLogFunctionTest.java  | 23 ++--
 .../phoenix/expression/MathPIFunctionTest.java | 44 +++
 .../phoenix/expression/PowerFunctionTest.java  | 22 ++--
 .../phoenix/expression/SqrtFunctionTest.java   | 20 +--
 .../java/org/apache/phoenix/query/BaseTest.java| 17 +-
 11 files changed, 199 insertions(+), 105 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/LnLogFunctionEnd2EndIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/LnLogFunctionEnd2EndIT.java
index ddbe2ad..d3d1b51 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/LnLogFunctionEnd2EndIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/LnLogFunctionEnd2EndIT.java
@@ -36,25 +36,9 @@ import org.junit.Test;
 public class LnLogFunctionEnd2EndIT extends ParallelStatsDisabledIT {
 
 private static final String KEY = "key";
-private static final double ZERO = 1e-9;
 private String signedTableName;
 private String unsignedTableName;
 
-private static boolean twoDoubleEquals(double a, double b) {
-if (Double.isNaN(a) ^ Double.isNaN(b)) return false;
-if (Double.isNaN(a)) return true;
-if (Double.isInfinite(a) ^ Double.isInfinite(b)) return false;
-if (Double.isInfinite(a)) {
-if ((a > 0) ^ (b > 0)) return false;
-else return true;
-}
-if (Math.abs(a - b) <= ZERO) {
-return true;
-} else {
-return false;
-}
-}
-
 @Before
 public void initTable() throws Exception {
 Connection conn = null;
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MathPIFunctionEnd2EndIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MathPIFunctionEnd2EndIT.java
new file mode 100644
index 000..9594aec
--- /dev/null
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MathPIFunctionEnd2EndIT.java
@@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.*;
+
+import java.sql.*;
+
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.expression.function.MathPIFunction;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * End to end tests for {@link MathPIFunction}
+ */
+public class MathPIFunctionEnd2EndIT extends ParallelStatsDisabledIT {
+
+@Test
+public void testGetMathPIValue() throws Exception {
+Connection conn  = DriverManager.getConnection(getUrl());
+ResultSet rs = conn.createStatement().executeQuery("SELECT PI()");
+assertTrue(rs.next());
+assertTrue(twoDoubleEquals(rs.getDouble(1), Math.PI));
+assertFalse(rs.next());
+}
+
+@Test
+public void testMathPIRoundTwoDecimal() throws Exception {
+Connection conn  = DriverManager.getConnection(getUrl());
+ResultSet rs = conn.createStatement().executeQuery("SELECT ROUND(PI(), 
2)");
+assertTrue(rs.next());
+assertTrue(twoDoubleEquals(rs.getDouble(1), 3.14));
+assertFalse(rs.next());
+}
+
+@Test
+public void testMathPIFunctionWithIncorrectFormat() throws Exception {
+  
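
The new `MathPIFunctionEnd2EndIT` asserts that `SELECT PI()` returns `Math.PI` and that `SELECT ROUND(PI(), 2)` compares equal to `3.14` within the test tolerance. A standalone sketch of the expected rounding; `BigDecimal` with `HALF_UP` is an assumption used here to approximate the server-side `ROUND`, not Phoenix's actual implementation:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class RoundPiSketch {
    public static void main(String[] args) {
        // Round Math.PI to two decimal places; the IT only checks the
        // result against 3.14 via a small epsilon, so HALF_UP suffices
        // for this illustration.
        double rounded = BigDecimal.valueOf(Math.PI)
                .setScale(2, RoundingMode.HALF_UP)
                .doubleValue();
        System.out.println(rounded);
    }
}
```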

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5185 support Math PI function

2019-03-14 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 3821a15  PHOENIX-5185 support Math PI function
3821a15 is described below

commit 3821a157dd408136b46267dac743c239c4cccbdd
Author: Xinyi Yan 
AuthorDate: Thu Mar 14 14:44:47 2019 -0700

PHOENIX-5185 support Math PI function
---
 .../phoenix/end2end/LnLogFunctionEnd2EndIT.java| 16 --
 .../phoenix/end2end/MathPIFunctionEnd2EndIT.java   | 61 
 .../phoenix/end2end/PowerFunctionEnd2EndIT.java| 16 --
 .../apache/phoenix/expression/ExpressionType.java  |  1 +
 .../expression/function/MathPIFunction.java| 65 ++
 .../apache/phoenix/expression/ExpFunctionTest.java | 19 +--
 .../phoenix/expression/LnLogFunctionTest.java  | 23 ++--
 .../phoenix/expression/MathPIFunctionTest.java | 44 +++
 .../phoenix/expression/PowerFunctionTest.java  | 22 ++--
 .../phoenix/expression/SqrtFunctionTest.java   | 20 +--
 .../java/org/apache/phoenix/query/BaseTest.java| 17 +-
 11 files changed, 199 insertions(+), 105 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/LnLogFunctionEnd2EndIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/LnLogFunctionEnd2EndIT.java
index ddbe2ad..d3d1b51 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/LnLogFunctionEnd2EndIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/LnLogFunctionEnd2EndIT.java
@@ -36,25 +36,9 @@ import org.junit.Test;
 public class LnLogFunctionEnd2EndIT extends ParallelStatsDisabledIT {
 
 private static final String KEY = "key";
-private static final double ZERO = 1e-9;
 private String signedTableName;
 private String unsignedTableName;
 
-private static boolean twoDoubleEquals(double a, double b) {
-if (Double.isNaN(a) ^ Double.isNaN(b)) return false;
-if (Double.isNaN(a)) return true;
-if (Double.isInfinite(a) ^ Double.isInfinite(b)) return false;
-if (Double.isInfinite(a)) {
-if ((a > 0) ^ (b > 0)) return false;
-else return true;
-}
-if (Math.abs(a - b) <= ZERO) {
-return true;
-} else {
-return false;
-}
-}
-
 @Before
 public void initTable() throws Exception {
 Connection conn = null;
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MathPIFunctionEnd2EndIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MathPIFunctionEnd2EndIT.java
new file mode 100644
index 000..9594aec
--- /dev/null
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MathPIFunctionEnd2EndIT.java
@@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.*;
+
+import java.sql.*;
+
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.expression.function.MathPIFunction;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * End to end tests for {@link MathPIFunction}
+ */
+public class MathPIFunctionEnd2EndIT extends ParallelStatsDisabledIT {
+
+@Test
+public void testGetMathPIValue() throws Exception {
+Connection conn  = DriverManager.getConnection(getUrl());
+ResultSet rs = conn.createStatement().executeQuery("SELECT PI()");
+assertTrue(rs.next());
+assertTrue(twoDoubleEquals(rs.getDouble(1), Math.PI));
+assertFalse(rs.next());
+}
+
+@Test
+public void testMathPIRoundTwoDecimal() throws Exception {
+Connection conn  = DriverManager.getConnection(getUrl());
+ResultSet rs = conn.createStatement().executeQuery("SELECT ROUND(PI(), 
2)");
+assertTrue(rs.next());
+assertTrue(twoDoubleEquals(rs.getDouble(1), 3.14));
+assertFalse(rs.next());
+}
+
+@Test
+public void testMathPIFunctionWithIncorrectFormat() throws Exception {
+  

[phoenix] branch 4.x-HBase-1.2 updated: PHOENIX-5185 support Math PI function

2019-03-14 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.2 by this push:
 new 760685f  PHOENIX-5185 support Math PI function
760685f is described below

commit 760685fd7c019ed1a9d14db33dc5f4e53531243b
Author: Xinyi Yan 
AuthorDate: Thu Mar 14 14:45:02 2019 -0700

PHOENIX-5185 support Math PI function
---
 .../phoenix/end2end/LnLogFunctionEnd2EndIT.java| 16 --
 .../phoenix/end2end/MathPIFunctionEnd2EndIT.java   | 61 
 .../phoenix/end2end/PowerFunctionEnd2EndIT.java| 16 --
 .../apache/phoenix/expression/ExpressionType.java  |  1 +
 .../expression/function/MathPIFunction.java| 65 ++
 .../apache/phoenix/expression/ExpFunctionTest.java | 19 +--
 .../phoenix/expression/LnLogFunctionTest.java  | 23 ++--
 .../phoenix/expression/MathPIFunctionTest.java | 44 +++
 .../phoenix/expression/PowerFunctionTest.java  | 22 ++--
 .../phoenix/expression/SqrtFunctionTest.java   | 20 +--
 .../java/org/apache/phoenix/query/BaseTest.java| 17 +-
 11 files changed, 199 insertions(+), 105 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/LnLogFunctionEnd2EndIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/LnLogFunctionEnd2EndIT.java
index ddbe2ad..d3d1b51 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/LnLogFunctionEnd2EndIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/LnLogFunctionEnd2EndIT.java
@@ -36,25 +36,9 @@ import org.junit.Test;
 public class LnLogFunctionEnd2EndIT extends ParallelStatsDisabledIT {
 
 private static final String KEY = "key";
-private static final double ZERO = 1e-9;
 private String signedTableName;
 private String unsignedTableName;
 
-private static boolean twoDoubleEquals(double a, double b) {
-if (Double.isNaN(a) ^ Double.isNaN(b)) return false;
-if (Double.isNaN(a)) return true;
-if (Double.isInfinite(a) ^ Double.isInfinite(b)) return false;
-if (Double.isInfinite(a)) {
-if ((a > 0) ^ (b > 0)) return false;
-else return true;
-}
-if (Math.abs(a - b) <= ZERO) {
-return true;
-} else {
-return false;
-}
-}
-
 @Before
 public void initTable() throws Exception {
 Connection conn = null;
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MathPIFunctionEnd2EndIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MathPIFunctionEnd2EndIT.java
new file mode 100644
index 000..9594aec
--- /dev/null
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MathPIFunctionEnd2EndIT.java
@@ -0,0 +1,61 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.junit.Assert.*;
+
+import java.sql.*;
+
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.expression.function.MathPIFunction;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * End to end tests for {@link MathPIFunction}
+ */
+public class MathPIFunctionEnd2EndIT extends ParallelStatsDisabledIT {
+
+@Test
+public void testGetMathPIValue() throws Exception {
+Connection conn  = DriverManager.getConnection(getUrl());
+ResultSet rs = conn.createStatement().executeQuery("SELECT PI()");
+assertTrue(rs.next());
+assertTrue(twoDoubleEquals(rs.getDouble(1), Math.PI));
+assertFalse(rs.next());
+}
+
+@Test
+public void testMathPIRoundTwoDecimal() throws Exception {
+Connection conn  = DriverManager.getConnection(getUrl());
+ResultSet rs = conn.createStatement().executeQuery("SELECT ROUND(PI(), 
2)");
+assertTrue(rs.next());
+assertTrue(twoDoubleEquals(rs.getDouble(1), 3.14));
+assertFalse(rs.next());
+}
+
+@Test
+public void testMathPIFunctionWithIncorrectFormat() throws Exception {
+  

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5172: Harden the PQS canary synth test tool with retry mechanism and more logging

2019-03-19 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 7865aff  PHOENIX-5172: Harden the PQS canary synth test tool with 
retry mechanism and more logging
7865aff is described below

commit 7865aff02d3d11e9e49a3ba5aebefe621e220cdd
Author: Swaroopa Kadam 
AuthorDate: Tue Mar 19 13:39:45 2019 -0700

PHOENIX-5172: Harden the PQS canary synth test tool with retry mechanism 
and more logging
---
 .../org/apache/phoenix/tool/PhoenixCanaryTool.java | 212 ++--
 .../tool/ParameterizedPhoenixCanaryToolIT.java | 280 +
 .../apache/phoenix/tool/PhoenixCanaryToolTest.java |  53 +---
 .../resources/phoenix-canary-file-sink.properties  |  17 ++
 4 files changed, 378 insertions(+), 184 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/tool/PhoenixCanaryTool.java 
b/phoenix-core/src/main/java/org/apache/phoenix/tool/PhoenixCanaryTool.java
index 405f54f..865d210 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/tool/PhoenixCanaryTool.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/tool/PhoenixCanaryTool.java
@@ -28,18 +28,20 @@ import 
net.sourceforge.argparse4j.inf.ArgumentParserException;
 import net.sourceforge.argparse4j.inf.Namespace;
 import org.apache.hadoop.conf.Configured;
 import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.RetryCounter;
 import org.apache.hadoop.util.Tool;
 import org.apache.hadoop.util.ToolRunner;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
+
 import java.io.File;
 import java.io.InputStream;
 import java.sql.Connection;
-import java.sql.DatabaseMetaData;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
-import java.sql.Statement;
+import java.sql.SQLException;
+import java.sql.Timestamp;
 import java.text.SimpleDateFormat;
 import java.util.ArrayList;
 import java.util.Date;
@@ -49,16 +51,23 @@ import java.util.concurrent.Callable;
 import java.util.concurrent.TimeUnit;
 
 /**
- * A Canary Tool to perform synthetic tests for Query Server
+ * A Canary Tool to perform synthetic tests for Phoenix
+ * It assumes that TEST.PQSTEST or the schema.table passed in the argument
+ * is already present as following command
+ * CREATE TABLE IF NOT EXISTS TEST.PQSTEST (mykey INTEGER NOT NULL
+ * PRIMARY KEY, mycolumn VARCHAR, insert_date TIMESTAMP);
+ *
  */
 public class PhoenixCanaryTool extends Configured implements Tool {
 
 private static String TEST_SCHEMA_NAME = "TEST";
 private static String TEST_TABLE_NAME = "PQSTEST";
 private static String FQ_TABLE_NAME = "TEST.PQSTEST";
-private boolean USE_NAMESPACE = true;
-
+private static Timestamp timestamp;
+private static final int MAX_CONNECTION_ATTEMPTS = 5;
+private final int FIRST_TIME_RETRY_TIMEOUT = 5000;
 private Sink sink = new StdOutSink();
+public static final String propFileName = 
"phoenix-canary-file-sink.properties";
 
 /**
  * Base class for a Canary Test
@@ -97,84 +106,38 @@ public class PhoenixCanaryTool extends Configured 
implements Tool {
 }
 }
 
-/**
- * Test which prepares environment before other tests run
- */
-static class PrepareTest extends CanaryTest {
-void onExecute() throws Exception {
-result.setTestName("prepare");
-Statement statement = connection.createStatement();
-DatabaseMetaData dbm = connection.getMetaData();
-ResultSet tables = dbm.getTables(null, TEST_SCHEMA_NAME, 
TEST_TABLE_NAME, null);
-if (tables.next()) {
-// Drop test Table if exists
-statement.executeUpdate("DROP TABLE IF EXISTS " + 
FQ_TABLE_NAME);
-}
-
-// Drop test schema if exists
-if (TEST_SCHEMA_NAME != null) {
-statement = connection.createStatement();
-statement.executeUpdate("DROP SCHEMA IF EXISTS " + 
TEST_SCHEMA_NAME);
-}
-}
-}
-
-/**
- * Create Schema Test
- */
-static class CreateSchemaTest extends CanaryTest {
-void onExecute() throws Exception {
-result.setTestName("createSchema");
-Statement statement = connection.createStatement();
-statement.executeUpdate("CREATE SCHEMA IF NOT EXISTS " + 
TEST_SCHEMA_NAME);
-}
-}
-
-/**
- * Create Table Test
- */
-static class CreateTableTest extends CanaryTest {
-void onExecute() throws Exception {
-result.setTestName("createTable");
-Statement statement = connection.createStatement();
-// Create Table
-statement.executeUpdate
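
PHOENIX-5172 wraps the canary's connection setup in a retry loop: the diff above adds `MAX_CONNECTION_ATTEMPTS = 5`, `FIRST_TIME_RETRY_TIMEOUT = 5000` (ms), and an import of HBase's `RetryCounter`. A self-contained sketch of that pattern; the generic `withRetries` helper and the doubling backoff are assumptions for illustration, since the tool itself delegates to `org.apache.hadoop.hbase.util.RetryCounter`:

```java
import java.util.concurrent.Callable;

public class RetrySketch {
    // Constants mirror the values added to PhoenixCanaryTool in the diff above.
    static final int MAX_CONNECTION_ATTEMPTS = 5;
    static final int FIRST_TIME_RETRY_TIMEOUT = 5000; // ms

    /** Retry op up to maxAttempts times, doubling the backoff after each failure. */
    static <T> T withRetries(Callable<T> op, int maxAttempts, long initialBackoffMs)
            throws Exception {
        long backoffMs = initialBackoffMs;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e;              // remember the failure, back off, retry
                Thread.sleep(backoffMs);
                backoffMs *= 2;        // assumed exponential backoff
            }
        }
        throw last;                    // all attempts exhausted
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // A flaky operation that succeeds on the third attempt; a 1 ms
        // initial backoff is used here so the demo runs quickly.
        String result = withRetries(() -> {
            if (++calls[0] < 3) throw new RuntimeException("transient failure");
            return "connected";
        }, MAX_CONNECTION_ATTEMPTS, 1);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

In the actual tool the retried operation is obtaining the JDBC connection, and each failure is logged before the next attempt.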

[phoenix] branch master updated: PHOENIX-5172: Harden the PQS canary synth test tool with retry mechanism and more logging

2019-03-19 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new c06bb59  PHOENIX-5172: Harden the PQS canary synth test tool with 
retry mechanism and more logging
c06bb59 is described below

commit c06bb592ca489e41b7df11cab248135ad534416d
Author: Swaroopa Kadam 
AuthorDate: Tue Mar 19 13:39:45 2019 -0700

PHOENIX-5172: Harden the PQS canary synth test tool with retry mechanism 
and more logging
---
 .../org/apache/phoenix/tool/PhoenixCanaryTool.java | 212 ++--
 .../tool/ParameterizedPhoenixCanaryToolIT.java | 280 +
 .../apache/phoenix/tool/PhoenixCanaryToolTest.java |  53 +---
[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5172: Harden the PQS canary synth test tool with retry mechanism and more logging

2019-03-19 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new d0cf340  PHOENIX-5172: Harden the PQS canary synth test tool with 
retry mechanism and more logging
d0cf340 is described below

commit d0cf34075e58511625db48a942cea77adc6e114f
Author: Swaroopa Kadam 
AuthorDate: Tue Mar 19 13:39:45 2019 -0700

PHOENIX-5172: Harden the PQS canary synth test tool with retry mechanism 
and more logging
---
 .../org/apache/phoenix/tool/PhoenixCanaryTool.java | 212 ++--
 .../tool/ParameterizedPhoenixCanaryToolIT.java | 280 +
 .../apache/phoenix/tool/PhoenixCanaryToolTest.java |  53 +---
 .../resources/phoenix-canary-file-sink.properties  |  17 ++
 4 files changed, 378 insertions(+), 184 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/tool/PhoenixCanaryTool.java 
b/phoenix-core/src/main/java/org/apache/phoenix/tool/PhoenixCanaryTool.java
index 405f54f..865d210 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/tool/PhoenixCanaryTool.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/tool/PhoenixCanaryTool.java
@@ -28,18 +28,20 @@ import 
net.sourceforge.argparse4j.inf.ArgumentParserException;
 import net.sourceforge.argparse4j.inf.Namespace;
 import org.apache.hadoop.conf.Configured;
 import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.RetryCounter;
 import org.apache.hadoop.util.Tool;
 import org.apache.hadoop.util.ToolRunner;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
+
 import java.io.File;
 import java.io.InputStream;
 import java.sql.Connection;
-import java.sql.DatabaseMetaData;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
-import java.sql.Statement;
+import java.sql.SQLException;
+import java.sql.Timestamp;
 import java.text.SimpleDateFormat;
 import java.util.ArrayList;
 import java.util.Date;
@@ -49,16 +51,23 @@ import java.util.concurrent.Callable;
 import java.util.concurrent.TimeUnit;
 
 /**
- * A Canary Tool to perform synthetic tests for Query Server
+ * A Canary Tool to perform synthetic tests for Phoenix
+ * It assumes that TEST.PQSTEST, or the schema.table passed as an argument,
+ * is already present, created with the following command:
+ * CREATE TABLE IF NOT EXISTS TEST.PQSTEST (mykey INTEGER NOT NULL
+ * PRIMARY KEY, mycolumn VARCHAR, insert_date TIMESTAMP);
+ *
  */
 public class PhoenixCanaryTool extends Configured implements Tool {
 
 private static String TEST_SCHEMA_NAME = "TEST";
 private static String TEST_TABLE_NAME = "PQSTEST";
 private static String FQ_TABLE_NAME = "TEST.PQSTEST";
-private boolean USE_NAMESPACE = true;
-
+private static Timestamp timestamp;
+private static final int MAX_CONNECTION_ATTEMPTS = 5;
+private final int FIRST_TIME_RETRY_TIMEOUT = 5000;
 private Sink sink = new StdOutSink();
+public static final String propFileName = 
"phoenix-canary-file-sink.properties";
 
 /**
  * Base class for a Canary Test
@@ -97,84 +106,38 @@ public class PhoenixCanaryTool extends Configured 
implements Tool {
 }
 }
 
-/**
- * Test which prepares environment before other tests run
- */
-static class PrepareTest extends CanaryTest {
-void onExecute() throws Exception {
-result.setTestName("prepare");
-Statement statement = connection.createStatement();
-DatabaseMetaData dbm = connection.getMetaData();
-ResultSet tables = dbm.getTables(null, TEST_SCHEMA_NAME, 
TEST_TABLE_NAME, null);
-if (tables.next()) {
-// Drop test Table if exists
-statement.executeUpdate("DROP TABLE IF EXISTS " + 
FQ_TABLE_NAME);
-}
-
-// Drop test schema if exists
-if (TEST_SCHEMA_NAME != null) {
-statement = connection.createStatement();
-statement.executeUpdate("DROP SCHEMA IF EXISTS " + 
TEST_SCHEMA_NAME);
-}
-}
-}
-
-/**
- * Create Schema Test
- */
-static class CreateSchemaTest extends CanaryTest {
-void onExecute() throws Exception {
-result.setTestName("createSchema");
-Statement statement = connection.createStatement();
-statement.executeUpdate("CREATE SCHEMA IF NOT EXISTS " + 
TEST_SCHEMA_NAME);
-}
-}
-
-/**
- * Create Table Test
- */
-static class CreateTableTest extends CanaryTest {
-void onExecute() throws Exception {
-result.setTestName("createTable");
-Statement statement = connection.createStatement();
-// Create Table
-statement.executeUpdate

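The PHOENIX-5172 commit above introduces a bounded retry for the canary's initial connection (MAX_CONNECTION_ATTEMPTS = 5, FIRST_TIME_RETRY_TIMEOUT = 5000 ms, and an import of HBase's RetryCounter). The truncated diff does not show the loop itself, so the following is only a minimal stand-in sketch of a capped retry with doubling backoff; the names `RetrySketch` and `withRetries` are illustrative, not from the commit.

```java
import java.util.concurrent.Callable;

// Minimal stand-in for the commit's retry behaviour: the constant names mirror
// the diff, but this loop is an illustration, not Phoenix's actual code.
public class RetrySketch {
    static final int MAX_CONNECTION_ATTEMPTS = 5;      // as in the diff
    static final int FIRST_TIME_RETRY_TIMEOUT = 5000;  // ms, as in the diff

    // Run the action up to maxAttempts times, doubling the wait after each
    // failure; rethrow the last exception if every attempt fails.
    static <T> T withRetries(Callable<T> action, int maxAttempts, long firstTimeoutMs)
            throws Exception {
        long timeout = firstTimeoutMs;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(timeout);
                    timeout *= 2;
                }
            }
        }
        throw last;
    }
}
```

In the real tool the attempt/sleep bookkeeping is presumably delegated to org.apache.hadoop.hbase.util.RetryCounter, which the diff imports for exactly this purpose.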
[phoenix] branch 4.x-HBase-1.2 updated: PHOENIX-5172: Harden the PQS canary synth test tool with retry mechanism and more logging

2019-03-19 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.2 by this push:
 new c2613fe  PHOENIX-5172: Harden the PQS canary synth test tool with 
retry mechanism and more logging
c2613fe is described below

commit c2613fee6e04b16c7b0967dd4dac7b912caf5dbc
Author: Swaroopa Kadam 
AuthorDate: Tue Mar 19 13:39:45 2019 -0700

PHOENIX-5172: Harden the PQS canary synth test tool with retry mechanism 
and more logging
---
 .../org/apache/phoenix/tool/PhoenixCanaryTool.java | 212 ++--
 .../tool/ParameterizedPhoenixCanaryToolIT.java | 280 +
 .../apache/phoenix/tool/PhoenixCanaryToolTest.java |  53 +---
 .../resources/phoenix-canary-file-sink.properties  |  17 ++
 4 files changed, 378 insertions(+), 184 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/tool/PhoenixCanaryTool.java 
b/phoenix-core/src/main/java/org/apache/phoenix/tool/PhoenixCanaryTool.java
index 405f54f..865d210 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/tool/PhoenixCanaryTool.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/tool/PhoenixCanaryTool.java
@@ -28,18 +28,20 @@ import 
net.sourceforge.argparse4j.inf.ArgumentParserException;
 import net.sourceforge.argparse4j.inf.Namespace;
 import org.apache.hadoop.conf.Configured;
 import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.RetryCounter;
 import org.apache.hadoop.util.Tool;
 import org.apache.hadoop.util.ToolRunner;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
+
 import java.io.File;
 import java.io.InputStream;
 import java.sql.Connection;
-import java.sql.DatabaseMetaData;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
-import java.sql.Statement;
+import java.sql.SQLException;
+import java.sql.Timestamp;
 import java.text.SimpleDateFormat;
 import java.util.ArrayList;
 import java.util.Date;
@@ -49,16 +51,23 @@ import java.util.concurrent.Callable;
 import java.util.concurrent.TimeUnit;
 
 /**
- * A Canary Tool to perform synthetic tests for Query Server
+ * A Canary Tool to perform synthetic tests for Phoenix
+ * It assumes that TEST.PQSTEST, or the schema.table passed as an argument,
+ * is already present, created with the following command:
+ * CREATE TABLE IF NOT EXISTS TEST.PQSTEST (mykey INTEGER NOT NULL
+ * PRIMARY KEY, mycolumn VARCHAR, insert_date TIMESTAMP);
+ *
  */
 public class PhoenixCanaryTool extends Configured implements Tool {
 
 private static String TEST_SCHEMA_NAME = "TEST";
 private static String TEST_TABLE_NAME = "PQSTEST";
 private static String FQ_TABLE_NAME = "TEST.PQSTEST";
-private boolean USE_NAMESPACE = true;
-
+private static Timestamp timestamp;
+private static final int MAX_CONNECTION_ATTEMPTS = 5;
+private final int FIRST_TIME_RETRY_TIMEOUT = 5000;
 private Sink sink = new StdOutSink();
+public static final String propFileName = 
"phoenix-canary-file-sink.properties";
 
 /**
  * Base class for a Canary Test
@@ -97,84 +106,38 @@ public class PhoenixCanaryTool extends Configured 
implements Tool {
 }
 }
 
-/**
- * Test which prepares environment before other tests run
- */
-static class PrepareTest extends CanaryTest {
-void onExecute() throws Exception {
-result.setTestName("prepare");
-Statement statement = connection.createStatement();
-DatabaseMetaData dbm = connection.getMetaData();
-ResultSet tables = dbm.getTables(null, TEST_SCHEMA_NAME, 
TEST_TABLE_NAME, null);
-if (tables.next()) {
-// Drop test Table if exists
-statement.executeUpdate("DROP TABLE IF EXISTS " + 
FQ_TABLE_NAME);
-}
-
-// Drop test schema if exists
-if (TEST_SCHEMA_NAME != null) {
-statement = connection.createStatement();
-statement.executeUpdate("DROP SCHEMA IF EXISTS " + 
TEST_SCHEMA_NAME);
-}
-}
-}
-
-/**
- * Create Schema Test
- */
-static class CreateSchemaTest extends CanaryTest {
-void onExecute() throws Exception {
-result.setTestName("createSchema");
-Statement statement = connection.createStatement();
-statement.executeUpdate("CREATE SCHEMA IF NOT EXISTS " + 
TEST_SCHEMA_NAME);
-}
-}
-
-/**
- * Create Table Test
- */
-static class CreateTableTest extends CanaryTest {
-void onExecute() throws Exception {
-result.setTestName("createTable");
-Statement statement = connection.createStatement();
-// Create Table
-statement.executeUpdate

[phoenix] branch master updated: PHOENIX-5101 ScanningResultIterator getScanMetrics throws NPE

2019-04-03 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new e8664d1  PHOENIX-5101 ScanningResultIterator getScanMetrics throws NPE
e8664d1 is described below

commit e8664d14d29ed3803e884f17d341593d41dcef6a
Author: Karan Mehta 
AuthorDate: Wed Apr 3 11:34:26 2019 -0700

PHOENIX-5101 ScanningResultIterator getScanMetrics throws NPE
---
 .../phoenix/iterate/ScanningResultIterator.java| 85 --
 1 file changed, 46 insertions(+), 39 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
index 893eaa2..9a656ee 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
@@ -17,6 +17,17 @@
  */
 package org.apache.phoenix.iterate;
 
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.BYTES_IN_REMOTE_RESULTS_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.BYTES_IN_RESULTS_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.MILLIS_BETWEEN_NEXTS_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.NOT_SERVING_REGION_EXCEPTION_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.REGIONS_SCANNED_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.REMOTE_RPC_CALLS_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.REMOTE_RPC_RETRIES_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.RPC_CALLS_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.RPC_RETRIES_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ServerSideScanMetrics.COUNT_OF_ROWS_FILTERED_KEY_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ServerSideScanMetrics.COUNT_OF_ROWS_SCANNED_KEY_METRIC_NAME;
 import static 
org.apache.phoenix.monitoring.GlobalClientMetrics.GLOBAL_HBASE_COUNT_BYTES_IN_REMOTE_RESULTS;
 import static 
org.apache.phoenix.monitoring.GlobalClientMetrics.GLOBAL_HBASE_COUNT_BYTES_REGION_SERVER_RESULTS;
 import static 
org.apache.phoenix.monitoring.GlobalClientMetrics.GLOBAL_HBASE_COUNT_MILLS_BETWEEN_NEXTS;
@@ -40,6 +51,7 @@ import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.client.metrics.ScanMetrics;
 import org.apache.phoenix.monitoring.CombinableMetric;
+import org.apache.phoenix.monitoring.GlobalClientMetrics;
 import org.apache.phoenix.monitoring.ScanMetricsHolder;
 import org.apache.phoenix.schema.tuple.ResultTuple;
 import org.apache.phoenix.schema.tuple.Tuple;
@@ -47,29 +59,12 @@ import org.apache.phoenix.util.ServerUtil;
 
 public class ScanningResultIterator implements ResultIterator {
 private final ResultScanner scanner;
-private final Scan scan;
 private final ScanMetricsHolder scanMetricsHolder;
 boolean scanMetricsUpdated;
 boolean scanMetricsEnabled;
 
-// These metric names are how HBase refers them
-// Since HBase stores these strings as static final, we are using the same 
here
-static final String RPC_CALLS_METRIC_NAME = "RPC_CALLS";
-static final String REMOTE_RPC_CALLS_METRIC_NAME = "REMOTE_RPC_CALLS";
-static final String MILLIS_BETWEEN_NEXTS_METRIC_NAME = 
"MILLIS_BETWEEN_NEXTS";
-static final String NOT_SERVING_REGION_EXCEPTION_METRIC_NAME = 
"NOT_SERVING_REGION_EXCEPTION";
-static final String BYTES_IN_RESULTS_METRIC_NAME = "BYTES_IN_RESULTS";
-static final String BYTES_IN_REMOTE_RESULTS_METRIC_NAME = 
"BYTES_IN_REMOTE_RESULTS";
-static final String REGIONS_SCANNED_METRIC_NAME = "REGIONS_SCANNED";
-static final String RPC_RETRIES_METRIC_NAME = "RPC_RETRIES";
-static final String REMOTE_RPC_RETRIES_METRIC_NAME = "REMOTE_RPC_RETRIES";
-static final String COUNT_OF_ROWS_SCANNED_KEY_METRIC_NAME = "ROWS_SCANNED";
-static final String COUNT_OF_ROWS_FILTERED_KEY_METRIC_NAME = 
"ROWS_FILTERED";
-static final String GLOBAL_BYTES_IN_RESULTS_METRIC_NAME = 
"BYTES_IN_RESULTS";
-
 public ScanningResultIterator(ResultScanner scanner, Scan scan, 
ScanMetricsHolder scanMetricsHolder) {
 this.scanner = scanner;
-this.scan = scan;
 this.scanMetricsHolder = scanMetricsHolder;
 scanMetricsUpdated = false;
 scanMetricsEnabled = scan.isScanMetricsEnabled();
@@ -81,24 +76,25 @@ public class ScanningResultIterator implements 
ResultIterator {
 scanner.close

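PHOENIX-5101 replaces the metric-name strings that ScanningResultIterator had duplicated locally with static imports of the same constants from HBase's ScanMetrics and ServerSideScanMetrics, and fixes an NPE in getScanMetrics. The diff is truncated before the fix itself, so the snippet below only illustrates the general hazard with a hypothetical `readMetric` helper: auto-unboxing `map.get(...)` throws a NullPointerException when a metric key is absent, which a default value avoids.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper, not Phoenix code: shows why a metrics map must be read
// defensively when a key may be missing.
public class MetricsSketch {
    static long readMetric(Map<String, Long> metricsMap, String name) {
        // `long v = metricsMap.get(name);` would throw a NullPointerException
        // on a missing key (auto-unboxing null); getOrDefault sidesteps that.
        return metricsMap.getOrDefault(name, 0L);
    }
}
```

Importing the canonical constant names from HBase also removes the risk of the local string copies drifting out of sync with the keys HBase actually populates.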
[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5101 ScanningResultIterator getScanMetrics throws NPE

2019-04-03 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new eb5b82b  PHOENIX-5101 ScanningResultIterator getScanMetrics throws NPE
eb5b82b is described below

commit eb5b82b0c61255a9e4f30248d7e06e9612df1dd2
Author: Karan Mehta 
AuthorDate: Wed Apr 3 11:34:26 2019 -0700

PHOENIX-5101 ScanningResultIterator getScanMetrics throws NPE
---
 .../phoenix/iterate/ScanningResultIterator.java| 85 --
 1 file changed, 46 insertions(+), 39 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
index 893eaa2..9a656ee 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
@@ -17,6 +17,17 @@
  */
 package org.apache.phoenix.iterate;
 
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.BYTES_IN_REMOTE_RESULTS_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.BYTES_IN_RESULTS_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.MILLIS_BETWEEN_NEXTS_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.NOT_SERVING_REGION_EXCEPTION_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.REGIONS_SCANNED_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.REMOTE_RPC_CALLS_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.REMOTE_RPC_RETRIES_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.RPC_CALLS_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.RPC_RETRIES_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ServerSideScanMetrics.COUNT_OF_ROWS_FILTERED_KEY_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ServerSideScanMetrics.COUNT_OF_ROWS_SCANNED_KEY_METRIC_NAME;
 import static 
org.apache.phoenix.monitoring.GlobalClientMetrics.GLOBAL_HBASE_COUNT_BYTES_IN_REMOTE_RESULTS;
 import static 
org.apache.phoenix.monitoring.GlobalClientMetrics.GLOBAL_HBASE_COUNT_BYTES_REGION_SERVER_RESULTS;
 import static 
org.apache.phoenix.monitoring.GlobalClientMetrics.GLOBAL_HBASE_COUNT_MILLS_BETWEEN_NEXTS;
@@ -40,6 +51,7 @@ import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.client.metrics.ScanMetrics;
 import org.apache.phoenix.monitoring.CombinableMetric;
+import org.apache.phoenix.monitoring.GlobalClientMetrics;
 import org.apache.phoenix.monitoring.ScanMetricsHolder;
 import org.apache.phoenix.schema.tuple.ResultTuple;
 import org.apache.phoenix.schema.tuple.Tuple;
@@ -47,29 +59,12 @@ import org.apache.phoenix.util.ServerUtil;
 
 public class ScanningResultIterator implements ResultIterator {
 private final ResultScanner scanner;
-private final Scan scan;
 private final ScanMetricsHolder scanMetricsHolder;
 boolean scanMetricsUpdated;
 boolean scanMetricsEnabled;
 
-// These metric names are how HBase refers them
-// Since HBase stores these strings as static final, we are using the same 
here
-static final String RPC_CALLS_METRIC_NAME = "RPC_CALLS";
-static final String REMOTE_RPC_CALLS_METRIC_NAME = "REMOTE_RPC_CALLS";
-static final String MILLIS_BETWEEN_NEXTS_METRIC_NAME = 
"MILLIS_BETWEEN_NEXTS";
-static final String NOT_SERVING_REGION_EXCEPTION_METRIC_NAME = 
"NOT_SERVING_REGION_EXCEPTION";
-static final String BYTES_IN_RESULTS_METRIC_NAME = "BYTES_IN_RESULTS";
-static final String BYTES_IN_REMOTE_RESULTS_METRIC_NAME = 
"BYTES_IN_REMOTE_RESULTS";
-static final String REGIONS_SCANNED_METRIC_NAME = "REGIONS_SCANNED";
-static final String RPC_RETRIES_METRIC_NAME = "RPC_RETRIES";
-static final String REMOTE_RPC_RETRIES_METRIC_NAME = "REMOTE_RPC_RETRIES";
-static final String COUNT_OF_ROWS_SCANNED_KEY_METRIC_NAME = "ROWS_SCANNED";
-static final String COUNT_OF_ROWS_FILTERED_KEY_METRIC_NAME = 
"ROWS_FILTERED";
-static final String GLOBAL_BYTES_IN_RESULTS_METRIC_NAME = 
"BYTES_IN_RESULTS";
-
 public ScanningResultIterator(ResultScanner scanner, Scan scan, 
ScanMetricsHolder scanMetricsHolder) {
 this.scanner = scanner;
-this.scan = scan;
 this.scanMetricsHolder = scanMetricsHolder;
 scanMetricsUpdated = false;
 scanMetricsEnabled = scan.isScanMetricsEnabled();
@@ -81,24 +76,25 @@ public class ScanningResultIterator implements 
ResultIterator {
 

[phoenix] branch 4.x-HBase-1.2 updated: PHOENIX-5101 ScanningResultIterator getScanMetrics throws NPE

2019-04-03 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.2 by this push:
 new 9e74148  PHOENIX-5101 ScanningResultIterator getScanMetrics throws NPE
9e74148 is described below

commit 9e741482ce12f5e77f82ebbbc3337ee9029157f9
Author: Karan Mehta 
AuthorDate: Wed Apr 3 11:34:26 2019 -0700

PHOENIX-5101 ScanningResultIterator getScanMetrics throws NPE
---
 .../phoenix/iterate/ScanningResultIterator.java| 85 --
 1 file changed, 46 insertions(+), 39 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
index 893eaa2..9a656ee 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
@@ -17,6 +17,17 @@
  */
 package org.apache.phoenix.iterate;
 
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.BYTES_IN_REMOTE_RESULTS_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.BYTES_IN_RESULTS_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.MILLIS_BETWEEN_NEXTS_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.NOT_SERVING_REGION_EXCEPTION_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.REGIONS_SCANNED_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.REMOTE_RPC_CALLS_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.REMOTE_RPC_RETRIES_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.RPC_CALLS_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.RPC_RETRIES_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ServerSideScanMetrics.COUNT_OF_ROWS_FILTERED_KEY_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ServerSideScanMetrics.COUNT_OF_ROWS_SCANNED_KEY_METRIC_NAME;
 import static 
org.apache.phoenix.monitoring.GlobalClientMetrics.GLOBAL_HBASE_COUNT_BYTES_IN_REMOTE_RESULTS;
 import static 
org.apache.phoenix.monitoring.GlobalClientMetrics.GLOBAL_HBASE_COUNT_BYTES_REGION_SERVER_RESULTS;
 import static 
org.apache.phoenix.monitoring.GlobalClientMetrics.GLOBAL_HBASE_COUNT_MILLS_BETWEEN_NEXTS;
@@ -40,6 +51,7 @@ import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.client.metrics.ScanMetrics;
 import org.apache.phoenix.monitoring.CombinableMetric;
+import org.apache.phoenix.monitoring.GlobalClientMetrics;
 import org.apache.phoenix.monitoring.ScanMetricsHolder;
 import org.apache.phoenix.schema.tuple.ResultTuple;
 import org.apache.phoenix.schema.tuple.Tuple;
@@ -47,29 +59,12 @@ import org.apache.phoenix.util.ServerUtil;
 
 public class ScanningResultIterator implements ResultIterator {
 private final ResultScanner scanner;
-private final Scan scan;
 private final ScanMetricsHolder scanMetricsHolder;
 boolean scanMetricsUpdated;
 boolean scanMetricsEnabled;
 
-// These metric names are how HBase refers them
-// Since HBase stores these strings as static final, we are using the same 
here
-static final String RPC_CALLS_METRIC_NAME = "RPC_CALLS";
-static final String REMOTE_RPC_CALLS_METRIC_NAME = "REMOTE_RPC_CALLS";
-static final String MILLIS_BETWEEN_NEXTS_METRIC_NAME = 
"MILLIS_BETWEEN_NEXTS";
-static final String NOT_SERVING_REGION_EXCEPTION_METRIC_NAME = 
"NOT_SERVING_REGION_EXCEPTION";
-static final String BYTES_IN_RESULTS_METRIC_NAME = "BYTES_IN_RESULTS";
-static final String BYTES_IN_REMOTE_RESULTS_METRIC_NAME = 
"BYTES_IN_REMOTE_RESULTS";
-static final String REGIONS_SCANNED_METRIC_NAME = "REGIONS_SCANNED";
-static final String RPC_RETRIES_METRIC_NAME = "RPC_RETRIES";
-static final String REMOTE_RPC_RETRIES_METRIC_NAME = "REMOTE_RPC_RETRIES";
-static final String COUNT_OF_ROWS_SCANNED_KEY_METRIC_NAME = "ROWS_SCANNED";
-static final String COUNT_OF_ROWS_FILTERED_KEY_METRIC_NAME = 
"ROWS_FILTERED";
-static final String GLOBAL_BYTES_IN_RESULTS_METRIC_NAME = 
"BYTES_IN_RESULTS";
-
 public ScanningResultIterator(ResultScanner scanner, Scan scan, 
ScanMetricsHolder scanMetricsHolder) {
 this.scanner = scanner;
-this.scan = scan;
 this.scanMetricsHolder = scanMetricsHolder;
 scanMetricsUpdated = false;
 scanMetricsEnabled = scan.isScanMetricsEnabled();
@@ -81,24 +76,25 @@ public class ScanningResultIterator implements 
ResultIterator {
 

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5101 ScanningResultIterator getScanMetrics throws NPE

2019-04-03 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new b03f84f  PHOENIX-5101 ScanningResultIterator getScanMetrics throws NPE
b03f84f is described below

commit b03f84f318b4f37197a3a85273db8b1afe6246ce
Author: Karan Mehta 
AuthorDate: Wed Apr 3 11:34:26 2019 -0700

PHOENIX-5101 ScanningResultIterator getScanMetrics throws NPE
---
 .../phoenix/iterate/ScanningResultIterator.java| 85 --
 1 file changed, 46 insertions(+), 39 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
index 893eaa2..9a656ee 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
@@ -17,6 +17,17 @@
  */
 package org.apache.phoenix.iterate;
 
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.BYTES_IN_REMOTE_RESULTS_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.BYTES_IN_RESULTS_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.MILLIS_BETWEEN_NEXTS_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.NOT_SERVING_REGION_EXCEPTION_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.REGIONS_SCANNED_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.REMOTE_RPC_CALLS_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.REMOTE_RPC_RETRIES_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.RPC_CALLS_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ScanMetrics.RPC_RETRIES_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ServerSideScanMetrics.COUNT_OF_ROWS_FILTERED_KEY_METRIC_NAME;
+import static 
org.apache.hadoop.hbase.client.metrics.ServerSideScanMetrics.COUNT_OF_ROWS_SCANNED_KEY_METRIC_NAME;
 import static 
org.apache.phoenix.monitoring.GlobalClientMetrics.GLOBAL_HBASE_COUNT_BYTES_IN_REMOTE_RESULTS;
 import static 
org.apache.phoenix.monitoring.GlobalClientMetrics.GLOBAL_HBASE_COUNT_BYTES_REGION_SERVER_RESULTS;
 import static 
org.apache.phoenix.monitoring.GlobalClientMetrics.GLOBAL_HBASE_COUNT_MILLS_BETWEEN_NEXTS;
@@ -40,6 +51,7 @@ import org.apache.hadoop.hbase.client.ResultScanner;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.client.metrics.ScanMetrics;
 import org.apache.phoenix.monitoring.CombinableMetric;
+import org.apache.phoenix.monitoring.GlobalClientMetrics;
 import org.apache.phoenix.monitoring.ScanMetricsHolder;
 import org.apache.phoenix.schema.tuple.ResultTuple;
 import org.apache.phoenix.schema.tuple.Tuple;
@@ -47,29 +59,12 @@ import org.apache.phoenix.util.ServerUtil;
 
 public class ScanningResultIterator implements ResultIterator {
 private final ResultScanner scanner;
-private final Scan scan;
 private final ScanMetricsHolder scanMetricsHolder;
 boolean scanMetricsUpdated;
 boolean scanMetricsEnabled;
 
-// These metric names are how HBase refers them
-// Since HBase stores these strings as static final, we are using the same 
here
-static final String RPC_CALLS_METRIC_NAME = "RPC_CALLS";
-static final String REMOTE_RPC_CALLS_METRIC_NAME = "REMOTE_RPC_CALLS";
-static final String MILLIS_BETWEEN_NEXTS_METRIC_NAME = 
"MILLIS_BETWEEN_NEXTS";
-static final String NOT_SERVING_REGION_EXCEPTION_METRIC_NAME = 
"NOT_SERVING_REGION_EXCEPTION";
-static final String BYTES_IN_RESULTS_METRIC_NAME = "BYTES_IN_RESULTS";
-static final String BYTES_IN_REMOTE_RESULTS_METRIC_NAME = 
"BYTES_IN_REMOTE_RESULTS";
-static final String REGIONS_SCANNED_METRIC_NAME = "REGIONS_SCANNED";
-static final String RPC_RETRIES_METRIC_NAME = "RPC_RETRIES";
-static final String REMOTE_RPC_RETRIES_METRIC_NAME = "REMOTE_RPC_RETRIES";
-static final String COUNT_OF_ROWS_SCANNED_KEY_METRIC_NAME = "ROWS_SCANNED";
-static final String COUNT_OF_ROWS_FILTERED_KEY_METRIC_NAME = 
"ROWS_FILTERED";
-static final String GLOBAL_BYTES_IN_RESULTS_METRIC_NAME = 
"BYTES_IN_RESULTS";
-
 public ScanningResultIterator(ResultScanner scanner, Scan scan, 
ScanMetricsHolder scanMetricsHolder) {
 this.scanner = scanner;
-this.scan = scan;
 this.scanMetricsHolder = scanMetricsHolder;
 scanMetricsUpdated = false;
 scanMetricsEnabled = scan.isScanMetricsEnabled();
@@ -81,24 +76,25 @@ public class ScanningResultIterator implements 
ResultIterator {
 

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5101 ScanningResultIterator getScanMetrics throws NPE (Addendum)

2019-04-03 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new ac2fbbf  PHOENIX-5101 ScanningResultIterator getScanMetrics throws NPE 
(Addendum)
ac2fbbf is described below

commit ac2fbbf12d839dfdabe295342e1a1f2767a8ceb9
Author: Karan Mehta 
AuthorDate: Wed Apr 3 15:13:50 2019 -0700

PHOENIX-5101 ScanningResultIterator getScanMetrics throws NPE (Addendum)
---
 .../phoenix/iterate/ScanningResultIterator.java| 25 +-
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
index 9a656ee..1422455 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
@@ -17,15 +17,6 @@
  */
 package org.apache.phoenix.iterate;
 
-import static org.apache.hadoop.hbase.client.metrics.ScanMetrics.BYTES_IN_REMOTE_RESULTS_METRIC_NAME;
-import static org.apache.hadoop.hbase.client.metrics.ScanMetrics.BYTES_IN_RESULTS_METRIC_NAME;
-import static org.apache.hadoop.hbase.client.metrics.ScanMetrics.MILLIS_BETWEEN_NEXTS_METRIC_NAME;
-import static org.apache.hadoop.hbase.client.metrics.ScanMetrics.NOT_SERVING_REGION_EXCEPTION_METRIC_NAME;
-import static org.apache.hadoop.hbase.client.metrics.ScanMetrics.REGIONS_SCANNED_METRIC_NAME;
-import static org.apache.hadoop.hbase.client.metrics.ScanMetrics.REMOTE_RPC_CALLS_METRIC_NAME;
-import static org.apache.hadoop.hbase.client.metrics.ScanMetrics.REMOTE_RPC_RETRIES_METRIC_NAME;
-import static org.apache.hadoop.hbase.client.metrics.ScanMetrics.RPC_CALLS_METRIC_NAME;
-import static org.apache.hadoop.hbase.client.metrics.ScanMetrics.RPC_RETRIES_METRIC_NAME;
 import static org.apache.hadoop.hbase.client.metrics.ServerSideScanMetrics.COUNT_OF_ROWS_FILTERED_KEY_METRIC_NAME;
 import static org.apache.hadoop.hbase.client.metrics.ServerSideScanMetrics.COUNT_OF_ROWS_SCANNED_KEY_METRIC_NAME;
 import static org.apache.phoenix.monitoring.GlobalClientMetrics.GLOBAL_HBASE_COUNT_BYTES_IN_REMOTE_RESULTS;
@@ -59,12 +50,26 @@ import org.apache.phoenix.util.ServerUtil;
 
 public class ScanningResultIterator implements ResultIterator {
 private final ResultScanner scanner;
+private final Scan scan;
 private final ScanMetricsHolder scanMetricsHolder;
 boolean scanMetricsUpdated;
 boolean scanMetricsEnabled;
 
+// These metric names are how HBase refers them
+// Since HBase stores these strings as static final, we are using the same here
+static final String RPC_CALLS_METRIC_NAME = "RPC_CALLS";
+static final String REMOTE_RPC_CALLS_METRIC_NAME = "REMOTE_RPC_CALLS";
+static final String MILLIS_BETWEEN_NEXTS_METRIC_NAME = "MILLIS_BETWEEN_NEXTS";
+static final String NOT_SERVING_REGION_EXCEPTION_METRIC_NAME = "NOT_SERVING_REGION_EXCEPTION";
+static final String BYTES_IN_RESULTS_METRIC_NAME = "BYTES_IN_RESULTS";
+static final String BYTES_IN_REMOTE_RESULTS_METRIC_NAME = "BYTES_IN_REMOTE_RESULTS";
+static final String REGIONS_SCANNED_METRIC_NAME = "REGIONS_SCANNED";
+static final String RPC_RETRIES_METRIC_NAME = "RPC_RETRIES";
+static final String REMOTE_RPC_RETRIES_METRIC_NAME = "REMOTE_RPC_RETRIES";
+
 public ScanningResultIterator(ResultScanner scanner, Scan scan, ScanMetricsHolder scanMetricsHolder) {
 this.scanner = scanner;
+this.scan = scan;
 this.scanMetricsHolder = scanMetricsHolder;
 scanMetricsUpdated = false;
 scanMetricsEnabled = scan.isScanMetricsEnabled();
@@ -91,7 +96,7 @@ public class ScanningResultIterator implements ResultIterator {
 private void getScanMetrics() {
 
 if (scanMetricsEnabled && !scanMetricsUpdated) {
-ScanMetrics scanMetrics = scanner.getScanMetrics();
+ScanMetrics scanMetrics = scan.getScanMetrics();
 Map<String, Long> scanMetricsMap = scanMetrics.getMetricsMap();
 scanMetricsHolder.setScanMetricMap(scanMetricsMap);
 



[phoenix] branch 4.x-HBase-1.2 updated: PHOENIX-5101 ScanningResultIterator getScanMetrics throws NPE (Addendum)

2019-04-03 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.2 by this push:
 new d442c51  PHOENIX-5101 ScanningResultIterator getScanMetrics throws NPE (Addendum)
d442c51 is described below

commit d442c5124dab7d6c3f9c004d7a7a78c7cb258ddf
Author: Karan Mehta 
AuthorDate: Wed Apr 3 15:12:19 2019 -0700

PHOENIX-5101 ScanningResultIterator getScanMetrics throws NPE (Addendum)
---
 .../phoenix/iterate/ScanningResultIterator.java| 37 --
 1 file changed, 21 insertions(+), 16 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
index 9a656ee..8a1fe5a 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/iterate/ScanningResultIterator.java
@@ -17,17 +17,8 @@
  */
 package org.apache.phoenix.iterate;
 
-import static org.apache.hadoop.hbase.client.metrics.ScanMetrics.BYTES_IN_REMOTE_RESULTS_METRIC_NAME;
-import static org.apache.hadoop.hbase.client.metrics.ScanMetrics.BYTES_IN_RESULTS_METRIC_NAME;
-import static org.apache.hadoop.hbase.client.metrics.ScanMetrics.MILLIS_BETWEEN_NEXTS_METRIC_NAME;
-import static org.apache.hadoop.hbase.client.metrics.ScanMetrics.NOT_SERVING_REGION_EXCEPTION_METRIC_NAME;
-import static org.apache.hadoop.hbase.client.metrics.ScanMetrics.REGIONS_SCANNED_METRIC_NAME;
-import static org.apache.hadoop.hbase.client.metrics.ScanMetrics.REMOTE_RPC_CALLS_METRIC_NAME;
-import static org.apache.hadoop.hbase.client.metrics.ScanMetrics.REMOTE_RPC_RETRIES_METRIC_NAME;
-import static org.apache.hadoop.hbase.client.metrics.ScanMetrics.RPC_CALLS_METRIC_NAME;
-import static org.apache.hadoop.hbase.client.metrics.ScanMetrics.RPC_RETRIES_METRIC_NAME;
-import static org.apache.hadoop.hbase.client.metrics.ServerSideScanMetrics.COUNT_OF_ROWS_FILTERED_KEY_METRIC_NAME;
-import static org.apache.hadoop.hbase.client.metrics.ServerSideScanMetrics.COUNT_OF_ROWS_SCANNED_KEY_METRIC_NAME;
+import static org.apache.hadoop.hbase.client.metrics.ServerSideScanMetrics.COUNT_OF_ROWS_FILTERED_KEY;
+import static org.apache.hadoop.hbase.client.metrics.ServerSideScanMetrics.COUNT_OF_ROWS_SCANNED_KEY;
 import static org.apache.phoenix.monitoring.GlobalClientMetrics.GLOBAL_HBASE_COUNT_BYTES_IN_REMOTE_RESULTS;
 import static org.apache.phoenix.monitoring.GlobalClientMetrics.GLOBAL_HBASE_COUNT_BYTES_REGION_SERVER_RESULTS;
 import static org.apache.phoenix.monitoring.GlobalClientMetrics.GLOBAL_HBASE_COUNT_MILLS_BETWEEN_NEXTS;
@@ -59,12 +50,26 @@ import org.apache.phoenix.util.ServerUtil;
 
 public class ScanningResultIterator implements ResultIterator {
 private final ResultScanner scanner;
+private final Scan scan;
 private final ScanMetricsHolder scanMetricsHolder;
 boolean scanMetricsUpdated;
 boolean scanMetricsEnabled;
 
+// These metric names are how HBase refers them
+// Since HBase stores these strings as static final, we are using the same here
+static final String RPC_CALLS_METRIC_NAME = "RPC_CALLS";
+static final String REMOTE_RPC_CALLS_METRIC_NAME = "REMOTE_RPC_CALLS";
+static final String MILLIS_BETWEEN_NEXTS_METRIC_NAME = "MILLIS_BETWEEN_NEXTS";
+static final String NOT_SERVING_REGION_EXCEPTION_METRIC_NAME = "NOT_SERVING_REGION_EXCEPTION";
+static final String BYTES_IN_RESULTS_METRIC_NAME = "BYTES_IN_RESULTS";
+static final String BYTES_IN_REMOTE_RESULTS_METRIC_NAME = "BYTES_IN_REMOTE_RESULTS";
+static final String REGIONS_SCANNED_METRIC_NAME = "REGIONS_SCANNED";
+static final String RPC_RETRIES_METRIC_NAME = "RPC_RETRIES";
+static final String REMOTE_RPC_RETRIES_METRIC_NAME = "REMOTE_RPC_RETRIES";
+
 public ScanningResultIterator(ResultScanner scanner, Scan scan, ScanMetricsHolder scanMetricsHolder) {
 this.scanner = scanner;
+this.scan = scan;
 this.scanMetricsHolder = scanMetricsHolder;
 scanMetricsUpdated = false;
 scanMetricsEnabled = scan.isScanMetricsEnabled();
@@ -91,7 +96,7 @@ public class ScanningResultIterator implements ResultIterator {
 private void getScanMetrics() {
 
 if (scanMetricsEnabled && !scanMetricsUpdated) {
-ScanMetrics scanMetrics = scanner.getScanMetrics();
+ScanMetrics scanMetrics = scan.getScanMetrics();
 Map<String, Long> scanMetricsMap = scanMetrics.getMetricsMap();
 scanMetricsHolder.setScanMetricMap(scanMetricsMap);
 
@@ -114,9 +119,9 @@ public class ScanningResultIterator implements ResultIterator {
 changeMetric(scanMetricsHolder.ge

[phoenix] branch phoenix-stats updated: PHOENIX-5176 KeyRange.compareUpperRange(KeyRang 1, KeyRang 2) returns wrong result when two key ranges have the same upper bound values but one is inclusive and

2019-04-08 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch phoenix-stats
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/phoenix-stats by this push:
 new a62380f  PHOENIX-5176 KeyRange.compareUpperRange(KeyRang 1, KeyRang 2) returns wrong result when two key ranges have the same upper bound values but one is inclusive and another is exclusive
a62380f is described below

commit a62380f0f2603dab7a8b3f642d5090b05e0b51cc
Author: Bin Shi <39923490+binshi-secularb...@users.noreply.github.com>
AuthorDate: Mon Apr 8 16:40:30 2019 -0700

PHOENIX-5176 KeyRange.compareUpperRange(KeyRang 1, KeyRang 2) returns wrong result when two key ranges have the same upper bound values but one is inclusive and another is exclusive
---
 .../java/org/apache/phoenix/query/KeyRange.java|   4 +-
 .../org/apache/phoenix/query/KeyRangeMoreTest.java | 136 -
 2 files changed, 77 insertions(+), 63 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/query/KeyRange.java b/phoenix-core/src/main/java/org/apache/phoenix/query/KeyRange.java
index 7d09adb..e747c37 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/KeyRange.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/KeyRange.java
@@ -519,7 +519,7 @@ public class KeyRange implements Writable {
 return Lists.transform(keys, POINT);
 }
 
-private static int compareUpperRange(KeyRange rowKeyRange1,KeyRange rowKeyRange2) {
+public static int compareUpperRange(KeyRange rowKeyRange1,KeyRange rowKeyRange2) {
 int result = Boolean.compare(rowKeyRange1.upperUnbound(), rowKeyRange2.upperUnbound());
 if (result != 0) {
 return result;
@@ -528,7 +528,7 @@ public class KeyRange implements Writable {
 if (result != 0) {
 return result;
 }
-return Boolean.compare(rowKeyRange2.isUpperInclusive(), rowKeyRange1.isUpperInclusive());
+return Boolean.compare(rowKeyRange1.isUpperInclusive(), rowKeyRange2.isUpperInclusive());
 }
 
 public static List<KeyRange> intersect(List<KeyRange> rowKeyRanges1, List<KeyRange> rowKeyRanges2) {
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/query/KeyRangeMoreTest.java b/phoenix-core/src/test/java/org/apache/phoenix/query/KeyRangeMoreTest.java
index 9710bf5..6f0c4c7 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/query/KeyRangeMoreTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/query/KeyRangeMoreTest.java
@@ -23,6 +23,7 @@ import java.util.Arrays;
 import java.util.Collections;
 import java.util.List;
 
+import com.google.common.collect.Lists;
 import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.schema.types.PInteger;
 import org.junit.Test;
@@ -126,8 +127,7 @@ public class KeyRangeMoreTest extends TestCase {
 assertResult(result, maxStart,minEnd);
 Collections.shuffle(rowKeyRanges1);
 Collections.shuffle(rowKeyRanges2);
-
-};
+}
 }
 
 private void assertResult(List<KeyRange> result,int start,int end) {
@@ -192,72 +192,86 @@ public class KeyRangeMoreTest extends TestCase {
 
 
listIntersectAndAssert(Arrays.asList(KeyRange.EMPTY_RANGE),Arrays.asList(KeyRange.EVERYTHING_RANGE),Arrays.asList(KeyRange.EMPTY_RANGE));
 
-rowKeyRanges1=Arrays.asList(
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(2),
-true,
-PInteger.INSTANCE.toBytes(5),
-true),
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(8),
-true,
-KeyRange.UNBOUND,
-false));
-rowKeyRanges2=Arrays.asList(
-PInteger.INSTANCE.getKeyRange(
-KeyRange.UNBOUND,
-false,
-PInteger.INSTANCE.toBytes(4),
-true),
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(7),
-true,
-PInteger.INSTANCE.toBytes(10),
-true),
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(13),
-true,
-PInteger.INSTANCE.toBytes(14),
-true),
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(19),
-true,
-KeyRange.UNBOUND,
-false)
-);
-expected=Arrays.asList(
-PInteger.INSTANCE.getKeyRange(
-PInteger.INSTANCE.toBytes(2),
-  
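The one-line fix in `compareUpperRange` above swaps the argument order of `Boolean.compare` for the inclusiveness check. The effect can be illustrated in isolation (a minimal sketch, not Phoenix's actual `KeyRange` class):

```java
// Sketch of why Boolean.compare argument order matters when ordering two
// ranges that share an upper bound value but differ in inclusiveness:
// a range with an inclusive upper bound extends further than one with the
// same exclusive bound, so it must compare as the larger range.
public class UpperBoundCompare {
    // Fixed order (PHOENIX-5176): range1's flag first, range2's flag second,
    // so an inclusive range1 sorts after an otherwise-equal exclusive range2.
    static int compareUpperInclusiveness(boolean range1Inclusive, boolean range2Inclusive) {
        return Boolean.compare(range1Inclusive, range2Inclusive);
    }

    public static void main(String[] args) {
        // [1, 5] vs [1, 5): the inclusive range is the larger one.
        System.out.println(compareUpperInclusiveness(true, false));  // 1
        System.out.println(compareUpperInclusiveness(false, true));  // -1
        System.out.println(compareUpperInclusiveness(true, true));   // 0
    }
}
```

With the old argument order the sign was inverted, so range-intersection code that relied on this ordering picked the wrong endpoint, which is what the rewritten `KeyRangeMoreTest` cases exercise.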

[phoenix] branch master updated: PHOENIX-5181 support Math sin/cos/tan functions

2019-04-24 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new d9b7148  PHOENIX-5181 support Math sin/cos/tan functions
d9b7148 is described below

commit d9b714893b810e1591d8f086155f029e8ccc7c67
Author: Xinyi Yan 
AuthorDate: Thu Mar 7 10:48:57 2019 -0800

PHOENIX-5181 support Math sin/cos/tan functions
---
 .../phoenix/end2end/MathTrigFunctionEnd2EndIT.java |  94 +++
 .../apache/phoenix/expression/ExpressionType.java  |   3 +
 .../phoenix/expression/function/CosFunction.java   |  56 +++
 .../phoenix/expression/function/SinFunction.java   |  56 +++
 .../phoenix/expression/function/TanFunction.java   |  56 +++
 .../phoenix/expression/MathTrigFunctionTest.java   | 179 +
 6 files changed, 444 insertions(+)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MathTrigFunctionEnd2EndIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MathTrigFunctionEnd2EndIT.java
new file mode 100644
index 000..b4f2b4f
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MathTrigFunctionEnd2EndIT.java
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.TestUtil.closeStmtAndConn;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+
+import org.apache.phoenix.expression.function.CosFunction;
+import org.apache.phoenix.expression.function.SinFunction;
+import org.apache.phoenix.expression.function.TanFunction;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * End to end tests for
+ * {@link org.apache.phoenix.expression.function.CosFunction}
+ * {@link org.apache.phoenix.expression.function.SinFunction}
+ * {@link org.apache.phoenix.expression.function.TanFunction}
+ */
+
+public class MathTrigFunctionEnd2EndIT extends ParallelStatsDisabledIT {
+
+private static final String KEY = "key";
+private String tableName;
+
+@Before
+public void initTable() throws Exception {
+Connection conn = null;
+PreparedStatement stmt = null;
+tableName = generateUniqueName();
+
+try {
+conn = DriverManager.getConnection(getUrl());
+String ddl;
+ddl =
+"CREATE TABLE " + tableName + " (k VARCHAR NOT NULL PRIMARY KEY, doub DOUBLE)";
+conn.createStatement().execute(ddl);
+conn.commit();
+} finally {
+closeStmtAndConn(stmt, conn);
+}
+}
+
+private void updateTableSpec(Connection conn, double data, String tableName) throws Exception {
+PreparedStatement stmt =
+conn.prepareStatement("UPSERT INTO " + tableName + " VALUES (?, ?)");
+stmt.setString(1, KEY);
+stmt.setDouble(2, data);
+stmt.executeUpdate();
+conn.commit();
+}
+
+private void testNumberSpec(Connection conn, double data, String tableName) throws Exception {
+updateTableSpec(conn, data, tableName);
+ResultSet rs =
+conn.createStatement().executeQuery(
+"SELECT SIN(doub),COS(doub),TAN(doub) FROM " + tableName);
+assertTrue(rs.next());
+Double d = Double.valueOf(data);
+assertTrue(twoDoubleEquals(rs.getDouble(1), Math.sin(data)));
+assertTrue(twoDoubleEquals(rs.getDouble(2), Math.cos(data)));
+assertTrue(twoDoubleEquals(rs.getDouble(3), Math.tan(data)));
+
+assertTrue(!rs.next());
+}
+
+@Test
+public void test() throws Exception {
+Connection conn = DriverManager.getConnection(getUrl());
+for (double d : new double[] { 0.0, 1.0, -1.0, 123.1234, -123.1234 }) {
+testNumberSpec(conn, d, tableName);
+}
+}
+}
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/expression
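The integration test above checks each SQL trig function against `java.lang.Math` with a tolerance-based comparison. The `twoDoubleEquals` helper it calls lives in a Phoenix test base class; a minimal sketch of such an epsilon comparison looks like this (the tolerance value and NaN handling here are assumptions for illustration, not the real helper's implementation):

```java
// Sketch of an epsilon-based double comparison, as used when asserting that
// SELECT SIN(doub) etc. match Math.sin/cos/tan. Exact equality of doubles is
// unreliable after a round trip through SQL evaluation, so compare within a
// tolerance instead.
public class TrigAssertSketch {
    private static final double EPSILON = 1e-9;  // assumed tolerance

    static boolean twoDoubleEquals(double a, double b) {
        // Treat two NaNs as equal so NaN-producing inputs can still be asserted.
        if (Double.isNaN(a) || Double.isNaN(b)) {
            return Double.isNaN(a) && Double.isNaN(b);
        }
        return Math.abs(a - b) <= EPSILON;
    }

    public static void main(String[] args) {
        double d = 123.1234;  // one of the IT's test inputs
        System.out.println(twoDoubleEquals(Math.sin(d), Math.sin(d)));        // true
        System.out.println(twoDoubleEquals(Math.tan(d), Math.tan(d) + 1.0));  // false
    }
}
```

Usage on the SQL side mirrors the test: `SELECT SIN(doub), COS(doub), TAN(doub) FROM <table>` evaluates the new built-ins row by row, and each result is compared to the corresponding `Math` call on the upserted value.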

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5181 support Math sin/cos/tan functions

2019-04-24 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new c83bce2  PHOENIX-5181 support Math sin/cos/tan functions
c83bce2 is described below

commit c83bce20cd4e4c1b7fc626d70ad56512baa93b3e
Author: Xinyi Yan 
AuthorDate: Fri Apr 19 16:25:46 2019 -0700

PHOENIX-5181 support Math sin/cos/tan functions
---
 .../phoenix/end2end/MathTrigFunctionEnd2EndIT.java |  94 +++
 .../apache/phoenix/expression/ExpressionType.java  |   3 +
 .../phoenix/expression/function/CosFunction.java   |  56 +++
 .../phoenix/expression/function/SinFunction.java   |  56 +++
 .../phoenix/expression/function/TanFunction.java   |  56 +++
 .../expression/function/MathTrigFunctionTest.java  | 179 +
 6 files changed, 444 insertions(+)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MathTrigFunctionEnd2EndIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MathTrigFunctionEnd2EndIT.java
new file mode 100644
index 000..7ea3c0d
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MathTrigFunctionEnd2EndIT.java
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.TestUtil.closeStmtAndConn;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+
+import org.apache.phoenix.expression.function.CosFunction;
+import org.apache.phoenix.expression.function.SinFunction;
+import org.apache.phoenix.expression.function.TanFunction;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * End to end tests for
+ * {@link org.apache.phoenix.expression.function.CosFunction}
+ * {@link org.apache.phoenix.expression.function.SinFunction}
+ * {@link org.apache.phoenix.expression.function.TanFunction}
+ */
+
+public class MathTrigFunctionEnd2EndIT extends ParallelStatsDisabledIT {
+
+private static final String KEY = "key";
+private String tableName;
+
+@Before
+public void initTable() throws Exception {
+Connection conn = null;
+PreparedStatement stmt = null;
+tableName = generateUniqueName();
+
+try {
+conn = DriverManager.getConnection(getUrl());
+String ddl;
+ddl =
+"CREATE TABLE " + tableName + " (k VARCHAR NOT NULL PRIMARY KEY, doub DOUBLE)";
+conn.createStatement().execute(ddl);
+conn.commit();
+} finally {
+closeStmtAndConn(stmt, conn);
+}
+}
+
+private void updateTableSpec(Connection conn, double data, String tableName) throws Exception {
+PreparedStatement stmt =
+conn.prepareStatement("UPSERT INTO " + tableName + " VALUES (?, ?)");
+stmt.setString(1, KEY);
+stmt.setDouble(2, data);
+stmt.executeUpdate();
+conn.commit();
+}
+
+private void testNumberSpec(Connection conn, double data, String tableName) throws Exception {
+updateTableSpec(conn, data, tableName);
+ResultSet rs =
+conn.createStatement().executeQuery(
+"SELECT SIN(doub),COS(doub),TAN(doub) FROM " + tableName);
+assertTrue(rs.next());
+Double d = Double.valueOf(data);
+assertTrue(twoDoubleEquals(rs.getDouble(1), Math.sin(data)));
+assertTrue(twoDoubleEquals(rs.getDouble(2), Math.cos(data)));
+assertTrue(twoDoubleEquals(rs.getDouble(3), Math.tan(data)));
+
+assertTrue(!rs.next());
+}
+
+@Test
+public void test() throws Exception {
+Connection conn = DriverManager.getConnection(getUrl());
+for (double d : new double[] { 0.0, 1.0, -1.0, 123.1234, -123.1234 }) {
+testNumberSpec(conn, d, tableName);
+}
+}
+}
\ No newline at end of file
diff --git 
a/phoenix-core/src

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5181 support Math sin/cos/tan functions

2019-04-24 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 003211a  PHOENIX-5181 support Math sin/cos/tan functions
003211a is described below

commit 003211a561129f7f2c1f33651fc99f9df15c8ced
Author: Xinyi Yan 
AuthorDate: Fri Apr 19 16:25:46 2019 -0700

PHOENIX-5181 support Math sin/cos/tan functions
---
 .../phoenix/end2end/MathTrigFunctionEnd2EndIT.java |  94 +++
 .../apache/phoenix/expression/ExpressionType.java  |   3 +
 .../phoenix/expression/function/CosFunction.java   |  56 +++
 .../phoenix/expression/function/SinFunction.java   |  56 +++
 .../phoenix/expression/function/TanFunction.java   |  56 +++
 .../expression/function/MathTrigFunctionTest.java  | 179 +
 6 files changed, 444 insertions(+)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MathTrigFunctionEnd2EndIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MathTrigFunctionEnd2EndIT.java
new file mode 100644
index 000..7ea3c0d
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MathTrigFunctionEnd2EndIT.java
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.TestUtil.closeStmtAndConn;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+
+import org.apache.phoenix.expression.function.CosFunction;
+import org.apache.phoenix.expression.function.SinFunction;
+import org.apache.phoenix.expression.function.TanFunction;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * End to end tests for
+ * {@link org.apache.phoenix.expression.function.CosFunction}
+ * {@link org.apache.phoenix.expression.function.SinFunction}
+ * {@link org.apache.phoenix.expression.function.TanFunction}
+ */
+
+public class MathTrigFunctionEnd2EndIT extends ParallelStatsDisabledIT {
+
+private static final String KEY = "key";
+private String tableName;
+
+@Before
+public void initTable() throws Exception {
+Connection conn = null;
+PreparedStatement stmt = null;
+tableName = generateUniqueName();
+
+try {
+conn = DriverManager.getConnection(getUrl());
+String ddl;
+ddl =
+"CREATE TABLE " + tableName + " (k VARCHAR NOT NULL PRIMARY KEY, doub DOUBLE)";
+conn.createStatement().execute(ddl);
+conn.commit();
+} finally {
+closeStmtAndConn(stmt, conn);
+}
+}
+
+private void updateTableSpec(Connection conn, double data, String tableName) throws Exception {
+PreparedStatement stmt =
+conn.prepareStatement("UPSERT INTO " + tableName + " VALUES (?, ?)");
+stmt.setString(1, KEY);
+stmt.setDouble(2, data);
+stmt.executeUpdate();
+conn.commit();
+}
+
+private void testNumberSpec(Connection conn, double data, String tableName) throws Exception {
+updateTableSpec(conn, data, tableName);
+ResultSet rs =
+conn.createStatement().executeQuery(
+"SELECT SIN(doub),COS(doub),TAN(doub) FROM " + tableName);
+assertTrue(rs.next());
+Double d = Double.valueOf(data);
+assertTrue(twoDoubleEquals(rs.getDouble(1), Math.sin(data)));
+assertTrue(twoDoubleEquals(rs.getDouble(2), Math.cos(data)));
+assertTrue(twoDoubleEquals(rs.getDouble(3), Math.tan(data)));
+
+assertTrue(!rs.next());
+}
+
+@Test
+public void test() throws Exception {
+Connection conn = DriverManager.getConnection(getUrl());
+for (double d : new double[] { 0.0, 1.0, -1.0, 123.1234, -123.1234 }) {
+testNumberSpec(conn, d, tableName);
+}
+}
+}
\ No newline at end of file
diff --git 
a/phoenix-core/src

[phoenix] branch 4.x-HBase-1.2 updated: PHOENIX-5181 support Math sin/cos/tan functions

2019-04-24 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.2 by this push:
 new b5431c1  PHOENIX-5181 support Math sin/cos/tan functions
b5431c1 is described below

commit b5431c1ee3b610f2d1995a962608b22cc50f70a3
Author: Xinyi Yan 
AuthorDate: Fri Apr 19 16:25:46 2019 -0700

PHOENIX-5181 support Math sin/cos/tan functions
---
 .../phoenix/end2end/MathTrigFunctionEnd2EndIT.java |  94 +++
 .../apache/phoenix/expression/ExpressionType.java  |   3 +
 .../phoenix/expression/function/CosFunction.java   |  56 +++
 .../phoenix/expression/function/SinFunction.java   |  56 +++
 .../phoenix/expression/function/TanFunction.java   |  56 +++
 .../expression/function/MathTrigFunctionTest.java  | 179 +
 6 files changed, 444 insertions(+)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MathTrigFunctionEnd2EndIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MathTrigFunctionEnd2EndIT.java
new file mode 100644
index 000..7ea3c0d
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MathTrigFunctionEnd2EndIT.java
@@ -0,0 +1,94 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.TestUtil.closeStmtAndConn;
+import static org.junit.Assert.assertTrue;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+
+import org.apache.phoenix.expression.function.CosFunction;
+import org.apache.phoenix.expression.function.SinFunction;
+import org.apache.phoenix.expression.function.TanFunction;
+import org.junit.Before;
+import org.junit.Test;
+
+/**
+ * End to end tests for
+ * {@link org.apache.phoenix.expression.function.CosFunction}
+ * {@link org.apache.phoenix.expression.function.SinFunction}
+ * {@link org.apache.phoenix.expression.function.TanFunction}
+ */
+
+public class MathTrigFunctionEnd2EndIT extends ParallelStatsDisabledIT {
+
+private static final String KEY = "key";
+private String tableName;
+
+@Before
+public void initTable() throws Exception {
+Connection conn = null;
+PreparedStatement stmt = null;
+tableName = generateUniqueName();
+
+try {
+conn = DriverManager.getConnection(getUrl());
+String ddl;
+ddl =
+"CREATE TABLE " + tableName + " (k VARCHAR NOT NULL PRIMARY KEY, doub DOUBLE)";
+conn.createStatement().execute(ddl);
+conn.commit();
+} finally {
+closeStmtAndConn(stmt, conn);
+}
+}
+
+private void updateTableSpec(Connection conn, double data, String tableName) throws Exception {
+PreparedStatement stmt =
+conn.prepareStatement("UPSERT INTO " + tableName + " VALUES (?, ?)");
+stmt.setString(1, KEY);
+stmt.setDouble(2, data);
+stmt.executeUpdate();
+conn.commit();
+}
+
+private void testNumberSpec(Connection conn, double data, String tableName) throws Exception {
+updateTableSpec(conn, data, tableName);
+ResultSet rs =
+conn.createStatement().executeQuery(
+"SELECT SIN(doub),COS(doub),TAN(doub) FROM " + tableName);
+assertTrue(rs.next());
+Double d = Double.valueOf(data);
+assertTrue(twoDoubleEquals(rs.getDouble(1), Math.sin(data)));
+assertTrue(twoDoubleEquals(rs.getDouble(2), Math.cos(data)));
+assertTrue(twoDoubleEquals(rs.getDouble(3), Math.tan(data)));
+
+assertTrue(!rs.next());
+}
+
+@Test
+public void test() throws Exception {
+Connection conn = DriverManager.getConnection(getUrl());
+for (double d : new double[] { 0.0, 1.0, -1.0, 123.1234, -123.1234 }) {
+testNumberSpec(conn, d, tableName);
+}
+}
+}
\ No newline at end of file
diff --git 
a/phoenix-core/src

[phoenix] branch 4.14-HBase-1.3 updated: PHOENIX-5195 PHERF:- Handle batch failure in connection.commit() in WriteWorkload#upsertData

2019-04-24 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.3 by this push:
 new 3ffcdd6  PHOENIX-5195 PHERF:- Handle batch failure in connection.commit() in WriteWorkload#upsertData
3ffcdd6 is described below

commit 3ffcdd690e222e417da5240cd4f0d0d3c5ef66d0
Author: Monani Mihir 
AuthorDate: Fri Mar 15 13:56:35 2019 +0530

PHOENIX-5195 PHERF:- Handle batch failure in connection.commit() in WriteWorkload#upsertData
---
 .../phoenix/pherf/workload/WriteWorkload.java  | 32 --
 1 file changed, 18 insertions(+), 14 deletions(-)

diff --git a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
index 4023383..c482b3f 100644
--- a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
+++ b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
@@ -24,7 +24,6 @@ import java.sql.Connection;
 import java.sql.Date;
 import java.sql.PreparedStatement;
 import java.sql.SQLException;
-import java.sql.Timestamp;
 import java.sql.Types;
 import java.text.SimpleDateFormat;
 import java.util.ArrayList;
@@ -35,7 +34,6 @@ import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.Future;
 
-import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.pherf.PherfConstants;
 import org.apache.phoenix.pherf.PherfConstants.GeneratePhoenixStats;
 import org.apache.phoenix.pherf.configuration.Column;
@@ -294,26 +292,32 @@ public class WriteWorkload implements Workload {
 rowsCreated += result;
 }
 }
-connection.commit();
-duration = System.currentTimeMillis() - last;
-logger.info("Writer (" + Thread.currentThread().getName()
-+ ") committed Batch. Total " + getBatchSize()
-+ " rows for this thread (" + this.hashCode() + ") in ("
-+ duration + ") Ms");
-
-if (i % PherfConstants.LOG_PER_NROWS == 0 && i != 0) {
-dataLoadThreadTime
-.add(tableName, Thread.currentThread().getName(), i,
-System.currentTimeMillis() - logStartTime);
-logStartTime = System.currentTimeMillis();
+try {
+connection.commit();
+duration = System.currentTimeMillis() - last;
+logger.info("Writer (" + Thread.currentThread().getName()
++ ") committed Batch. Total " + getBatchSize()
++ " rows for this thread (" + this.hashCode() + ") in ("
++ duration + ") Ms");
+
+if (i % PherfConstants.LOG_PER_NROWS == 0 && i != 0) {
+dataLoadThreadTime.add(tableName,
+Thread.currentThread().getName(), i,
+System.currentTimeMillis() - logStartTime);
+}
+} catch (SQLException e) {
+logger.warn("SQLException in commit operation", e);
 }
 
+logStartTime = System.currentTimeMillis();
 // Pause for throttling if configured to do so
 Thread.sleep(threadSleepDuration);
 // Re-compute the start time for the next batch
 last = System.currentTimeMillis();
 }
 }
} catch (SQLException e) {
throw e;
 } finally {
 // Need to keep the statement open to send the remaining batch of updates
 if (!useBatchApi && stmt != null) {


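The PHOENIX-5195 patch above moves `connection.commit()` into a try/catch so that a single failed batch is logged and skipped instead of aborting the whole write workload. The behavior can be sketched in isolation as follows (a minimal sketch: `FlakyCommitter` is a hypothetical stand-in for a JDBC `Connection` whose `commit()` can fail, and the batch counts are illustrative; none of this is Phoenix/Pherf code):

```java
import java.sql.SQLException;

// Sketch of the batch-commit handling introduced by PHOENIX-5195: a commit
// failure is logged and the writer moves on to the next batch instead of
// unwinding the whole workload.
class BatchCommitSketch {

    /** Hypothetical stand-in for java.sql.Connection#commit(): fails on one chosen batch. */
    static class FlakyCommitter {
        private final int failOnBatch;
        private int calls = 0;

        FlakyCommitter(int failOnBatch) { this.failOnBatch = failOnBatch; }

        void commit() throws SQLException {
            if (++calls == failOnBatch) {
                throw new SQLException("simulated commit failure");
            }
        }
    }

    /** Returns how many of totalBatches committed successfully. */
    static int writeBatches(FlakyCommitter conn, int totalBatches) {
        int committed = 0;
        for (int i = 0; i < totalBatches; i++) {
            // ... upserts for this batch would be executed here ...
            try {
                conn.commit();
                committed++;
            } catch (SQLException e) {
                // Mirror of the patched behavior: log and keep going.
                System.out.println("SQLException in commit operation: " + e.getMessage());
            }
        }
        return committed;
    }

    public static void main(String[] args) {
        FlakyCommitter conn = new FlakyCommitter(3); // the third batch fails
        int ok = writeBatches(conn, 5);
        System.out.println("committed " + ok + " of 5 batches"); // committed 4 of 5 batches
    }
}
```

Note that the patch still rethrows `SQLException` raised outside the commit (the outer `catch`/`throw e`), so only the per-batch commit failure is tolerated.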

[phoenix] branch 4.14-HBase-1.4 updated: PHOENIX-5195 PHERF:- Handle batch failure in connection.commit() in WriteWorkload#upsertData

2019-04-24 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.14-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.4 by this push:
 new fdc07ee  PHOENIX-5195 PHERF:- Handle batch failure in connection.commit() in WriteWorkload#upsertData
fdc07ee is described below

commit fdc07ee3c490bb218fb2d972bb25f39cfce5f647
Author: Monani Mihir 
AuthorDate: Fri Mar 15 13:56:35 2019 +0530

    PHOENIX-5195 PHERF:- Handle batch failure in connection.commit() in WriteWorkload#upsertData
---
 .../phoenix/pherf/workload/WriteWorkload.java      | 32 ++++++++++++++++++--------------
 1 file changed, 18 insertions(+), 14 deletions(-)

diff --git a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
index 4023383..c482b3f 100644
--- a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
+++ b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
@@ -24,7 +24,6 @@ import java.sql.Connection;
 import java.sql.Date;
 import java.sql.PreparedStatement;
 import java.sql.SQLException;
-import java.sql.Timestamp;
 import java.sql.Types;
 import java.text.SimpleDateFormat;
 import java.util.ArrayList;
@@ -35,7 +34,6 @@ import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.Future;
 
-import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.pherf.PherfConstants;
 import org.apache.phoenix.pherf.PherfConstants.GeneratePhoenixStats;
 import org.apache.phoenix.pherf.configuration.Column;
@@ -294,26 +292,32 @@ public class WriteWorkload implements Workload {
                                 rowsCreated += result;
                             }
                         }
-                        connection.commit();
-                        duration = System.currentTimeMillis() - last;
-                        logger.info("Writer (" + Thread.currentThread().getName()
-                                + ") committed Batch. Total " + getBatchSize()
-                                + " rows for this thread (" + this.hashCode() + ") in ("
-                                + duration + ") Ms");
-
-                        if (i % PherfConstants.LOG_PER_NROWS == 0 && i != 0) {
-                            dataLoadThreadTime
-                                    .add(tableName, Thread.currentThread().getName(), i,
-                                            System.currentTimeMillis() - logStartTime);
-                            logStartTime = System.currentTimeMillis();
+                        try {
+                            connection.commit();
+                            duration = System.currentTimeMillis() - last;
+                            logger.info("Writer (" + Thread.currentThread().getName()
+                                    + ") committed Batch. Total " + getBatchSize()
+                                    + " rows for this thread (" + this.hashCode() + ") in ("
+                                    + duration + ") Ms");
+
+                            if (i % PherfConstants.LOG_PER_NROWS == 0 && i != 0) {
+                                dataLoadThreadTime.add(tableName,
+                                        Thread.currentThread().getName(), i,
+                                        System.currentTimeMillis() - logStartTime);
+                            }
+                        } catch (SQLException e) {
+                            logger.warn("SQLException in commit operation", e);
                         }
 
+                        logStartTime = System.currentTimeMillis();
                         // Pause for throttling if configured to do so
                         Thread.sleep(threadSleepDuration);
                         // Re-compute the start time for the next batch
                         last = System.currentTimeMillis();
                     }
                 }
+            } catch (SQLException e) {
+                throw e;
             } finally {
                 // Need to keep the statement open to send the remaining batch of updates
                 if (!useBatchApi && stmt != null) {



[phoenix] branch 4.14-HBase-1.2 updated: PHOENIX-5195 PHERF:- Handle batch failure in connection.commit() in WriteWorkload#upsertData

2019-04-24 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.14-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.2 by this push:
 new ea53097  PHOENIX-5195 PHERF:- Handle batch failure in connection.commit() in WriteWorkload#upsertData
ea53097 is described below

commit ea53097f12795552a7ae4019ff2c72ae8fb6b25b
Author: Monani Mihir 
AuthorDate: Fri Mar 15 13:56:35 2019 +0530

    PHOENIX-5195 PHERF:- Handle batch failure in connection.commit() in WriteWorkload#upsertData
---
 .../phoenix/pherf/workload/WriteWorkload.java      | 32 ++++++++++++++++++--------------
 1 file changed, 18 insertions(+), 14 deletions(-)

diff --git a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
index 205b481..019c326 100644
--- a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
+++ b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
@@ -24,7 +24,6 @@ import java.sql.Connection;
 import java.sql.Date;
 import java.sql.PreparedStatement;
 import java.sql.SQLException;
-import java.sql.Timestamp;
 import java.sql.Types;
 import java.text.SimpleDateFormat;
 import java.util.ArrayList;
@@ -35,7 +34,6 @@ import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.Future;
 
-import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.pherf.PherfConstants;
 import org.apache.phoenix.pherf.PherfConstants.GeneratePhoenixStats;
 import org.apache.phoenix.pherf.configuration.Column;
@@ -292,26 +290,32 @@ public class WriteWorkload implements Workload {
                                 rowsCreated += result;
                             }
                         }
-                        connection.commit();
-                        duration = System.currentTimeMillis() - last;
-                        logger.info("Writer (" + Thread.currentThread().getName()
-                                + ") committed Batch. Total " + getBatchSize()
-                                + " rows for this thread (" + this.hashCode() + ") in ("
-                                + duration + ") Ms");
-
-                        if (i % PherfConstants.LOG_PER_NROWS == 0 && i != 0) {
-                            dataLoadThreadTime
-                                    .add(tableName, Thread.currentThread().getName(), i,
-                                            System.currentTimeMillis() - logStartTime);
-                            logStartTime = System.currentTimeMillis();
+                        try {
+                            connection.commit();
+                            duration = System.currentTimeMillis() - last;
+                            logger.info("Writer (" + Thread.currentThread().getName()
+                                    + ") committed Batch. Total " + getBatchSize()
+                                    + " rows for this thread (" + this.hashCode() + ") in ("
+                                    + duration + ") Ms");
+
+                            if (i % PherfConstants.LOG_PER_NROWS == 0 && i != 0) {
+                                dataLoadThreadTime.add(tableName,
+                                        Thread.currentThread().getName(), i,
+                                        System.currentTimeMillis() - logStartTime);
+                            }
+                        } catch (SQLException e) {
+                            logger.warn("SQLException in commit operation", e);
                         }
 
+                        logStartTime = System.currentTimeMillis();
                         // Pause for throttling if configured to do so
                         Thread.sleep(threadSleepDuration);
                         // Re-compute the start time for the next batch
                         last = System.currentTimeMillis();
                     }
                 }
+            } catch (SQLException e) {
+                throw e;
             } finally {
                 // Need to keep the statement open to send the remaining batch of updates
                 if (!useBatchApi && stmt != null) {



[phoenix] branch 4.x-HBase-1.2 updated: PHOENIX-5195 PHERF:- Handle batch failure in connection.commit() in WriteWorkload#upsertData

2019-04-24 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.2 by this push:
 new 0aec1e6  PHOENIX-5195 PHERF:- Handle batch failure in connection.commit() in WriteWorkload#upsertData
0aec1e6 is described below

commit 0aec1e677015dbca33067e07b155e7ca39534940
Author: Monani Mihir 
AuthorDate: Fri Mar 15 13:56:35 2019 +0530

    PHOENIX-5195 PHERF:- Handle batch failure in connection.commit() in WriteWorkload#upsertData
---
 .../phoenix/pherf/workload/WriteWorkload.java      | 32 ++++++++++++++++++--------------
 1 file changed, 18 insertions(+), 14 deletions(-)

diff --git a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
index 205b481..019c326 100644
--- a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
+++ b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
@@ -24,7 +24,6 @@ import java.sql.Connection;
 import java.sql.Date;
 import java.sql.PreparedStatement;
 import java.sql.SQLException;
-import java.sql.Timestamp;
 import java.sql.Types;
 import java.text.SimpleDateFormat;
 import java.util.ArrayList;
@@ -35,7 +34,6 @@ import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.Future;
 
-import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.pherf.PherfConstants;
 import org.apache.phoenix.pherf.PherfConstants.GeneratePhoenixStats;
 import org.apache.phoenix.pherf.configuration.Column;
@@ -292,26 +290,32 @@ public class WriteWorkload implements Workload {
                                 rowsCreated += result;
                             }
                         }
-                        connection.commit();
-                        duration = System.currentTimeMillis() - last;
-                        logger.info("Writer (" + Thread.currentThread().getName()
-                                + ") committed Batch. Total " + getBatchSize()
-                                + " rows for this thread (" + this.hashCode() + ") in ("
-                                + duration + ") Ms");
-
-                        if (i % PherfConstants.LOG_PER_NROWS == 0 && i != 0) {
-                            dataLoadThreadTime
-                                    .add(tableName, Thread.currentThread().getName(), i,
-                                            System.currentTimeMillis() - logStartTime);
-                            logStartTime = System.currentTimeMillis();
+                        try {
+                            connection.commit();
+                            duration = System.currentTimeMillis() - last;
+                            logger.info("Writer (" + Thread.currentThread().getName()
+                                    + ") committed Batch. Total " + getBatchSize()
+                                    + " rows for this thread (" + this.hashCode() + ") in ("
+                                    + duration + ") Ms");
+
+                            if (i % PherfConstants.LOG_PER_NROWS == 0 && i != 0) {
+                                dataLoadThreadTime.add(tableName,
+                                        Thread.currentThread().getName(), i,
+                                        System.currentTimeMillis() - logStartTime);
+                            }
+                        } catch (SQLException e) {
+                            logger.warn("SQLException in commit operation", e);
                         }
 
+                        logStartTime = System.currentTimeMillis();
                         // Pause for throttling if configured to do so
                         Thread.sleep(threadSleepDuration);
                         // Re-compute the start time for the next batch
                         last = System.currentTimeMillis();
                     }
                 }
+            } catch (SQLException e) {
+                throw e;
             } finally {
                 // Need to keep the statement open to send the remaining batch of updates
                 if (!useBatchApi && stmt != null) {



[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5195 PHERF:- Handle batch failure in connection.commit() in WriteWorkload#upsertData

2019-04-24 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new ec7054c  PHOENIX-5195 PHERF:- Handle batch failure in connection.commit() in WriteWorkload#upsertData
ec7054c is described below

commit ec7054ca541e9acb5c73272a43bdf93c05ecbf5d
Author: Monani Mihir 
AuthorDate: Fri Mar 15 13:56:35 2019 +0530

    PHOENIX-5195 PHERF:- Handle batch failure in connection.commit() in WriteWorkload#upsertData
---
 .../phoenix/pherf/workload/WriteWorkload.java      | 32 ++++++++++++++++++--------------
 1 file changed, 18 insertions(+), 14 deletions(-)

diff --git a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
index 4023383..c482b3f 100644
--- a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
+++ b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
@@ -24,7 +24,6 @@ import java.sql.Connection;
 import java.sql.Date;
 import java.sql.PreparedStatement;
 import java.sql.SQLException;
-import java.sql.Timestamp;
 import java.sql.Types;
 import java.text.SimpleDateFormat;
 import java.util.ArrayList;
@@ -35,7 +34,6 @@ import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.Future;
 
-import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.pherf.PherfConstants;
 import org.apache.phoenix.pherf.PherfConstants.GeneratePhoenixStats;
 import org.apache.phoenix.pherf.configuration.Column;
@@ -294,26 +292,32 @@ public class WriteWorkload implements Workload {
                                 rowsCreated += result;
                             }
                         }
-                        connection.commit();
-                        duration = System.currentTimeMillis() - last;
-                        logger.info("Writer (" + Thread.currentThread().getName()
-                                + ") committed Batch. Total " + getBatchSize()
-                                + " rows for this thread (" + this.hashCode() + ") in ("
-                                + duration + ") Ms");
-
-                        if (i % PherfConstants.LOG_PER_NROWS == 0 && i != 0) {
-                            dataLoadThreadTime
-                                    .add(tableName, Thread.currentThread().getName(), i,
-                                            System.currentTimeMillis() - logStartTime);
-                            logStartTime = System.currentTimeMillis();
+                        try {
+                            connection.commit();
+                            duration = System.currentTimeMillis() - last;
+                            logger.info("Writer (" + Thread.currentThread().getName()
+                                    + ") committed Batch. Total " + getBatchSize()
+                                    + " rows for this thread (" + this.hashCode() + ") in ("
+                                    + duration + ") Ms");
+
+                            if (i % PherfConstants.LOG_PER_NROWS == 0 && i != 0) {
+                                dataLoadThreadTime.add(tableName,
+                                        Thread.currentThread().getName(), i,
+                                        System.currentTimeMillis() - logStartTime);
+                            }
+                        } catch (SQLException e) {
+                            logger.warn("SQLException in commit operation", e);
                         }
 
+                        logStartTime = System.currentTimeMillis();
                         // Pause for throttling if configured to do so
                         Thread.sleep(threadSleepDuration);
                         // Re-compute the start time for the next batch
                         last = System.currentTimeMillis();
                     }
                 }
+            } catch (SQLException e) {
+                throw e;
             } finally {
                 // Need to keep the statement open to send the remaining batch of updates
                 if (!useBatchApi && stmt != null) {



[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5195 PHERF:- Handle batch failure in connection.commit() in WriteWorkload#upsertData

2019-04-24 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 1374610  PHOENIX-5195 PHERF:- Handle batch failure in connection.commit() in WriteWorkload#upsertData
1374610 is described below

commit 137461002fefa1aea3cbef5b2980f2b8c4a2cff6
Author: Monani Mihir 
AuthorDate: Fri Mar 15 13:56:35 2019 +0530

    PHOENIX-5195 PHERF:- Handle batch failure in connection.commit() in WriteWorkload#upsertData
---
 .../phoenix/pherf/workload/WriteWorkload.java      | 32 ++++++++++++++++++--------------
 1 file changed, 18 insertions(+), 14 deletions(-)

diff --git a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
index 4023383..c482b3f 100644
--- a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
+++ b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
@@ -24,7 +24,6 @@ import java.sql.Connection;
 import java.sql.Date;
 import java.sql.PreparedStatement;
 import java.sql.SQLException;
-import java.sql.Timestamp;
 import java.sql.Types;
 import java.text.SimpleDateFormat;
 import java.util.ArrayList;
@@ -35,7 +34,6 @@ import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.Future;
 
-import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.pherf.PherfConstants;
 import org.apache.phoenix.pherf.PherfConstants.GeneratePhoenixStats;
 import org.apache.phoenix.pherf.configuration.Column;
@@ -294,26 +292,32 @@ public class WriteWorkload implements Workload {
                                 rowsCreated += result;
                             }
                         }
-                        connection.commit();
-                        duration = System.currentTimeMillis() - last;
-                        logger.info("Writer (" + Thread.currentThread().getName()
-                                + ") committed Batch. Total " + getBatchSize()
-                                + " rows for this thread (" + this.hashCode() + ") in ("
-                                + duration + ") Ms");
-
-                        if (i % PherfConstants.LOG_PER_NROWS == 0 && i != 0) {
-                            dataLoadThreadTime
-                                    .add(tableName, Thread.currentThread().getName(), i,
-                                            System.currentTimeMillis() - logStartTime);
-                            logStartTime = System.currentTimeMillis();
+                        try {
+                            connection.commit();
+                            duration = System.currentTimeMillis() - last;
+                            logger.info("Writer (" + Thread.currentThread().getName()
+                                    + ") committed Batch. Total " + getBatchSize()
+                                    + " rows for this thread (" + this.hashCode() + ") in ("
+                                    + duration + ") Ms");
+
+                            if (i % PherfConstants.LOG_PER_NROWS == 0 && i != 0) {
+                                dataLoadThreadTime.add(tableName,
+                                        Thread.currentThread().getName(), i,
+                                        System.currentTimeMillis() - logStartTime);
+                            }
+                        } catch (SQLException e) {
+                            logger.warn("SQLException in commit operation", e);
                         }
 
+                        logStartTime = System.currentTimeMillis();
                         // Pause for throttling if configured to do so
                         Thread.sleep(threadSleepDuration);
                         // Re-compute the start time for the next batch
                         last = System.currentTimeMillis();
                     }
                 }
+            } catch (SQLException e) {
+                throw e;
             } finally {
                 // Need to keep the statement open to send the remaining batch of updates
                 if (!useBatchApi && stmt != null) {



[phoenix] branch master updated: PHOENIX-5195 PHERF:- Handle batch failure in connection.commit() in WriteWorkload#upsertData

2019-04-24 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 59fbef5  PHOENIX-5195 PHERF:- Handle batch failure in connection.commit() in WriteWorkload#upsertData
59fbef5 is described below

commit 59fbef56d67059bd8ea31ec28297ba2aca0e08dd
Author: Monani Mihir 
AuthorDate: Fri Mar 15 13:56:35 2019 +0530

    PHOENIX-5195 PHERF:- Handle batch failure in connection.commit() in WriteWorkload#upsertData
---
 .../phoenix/pherf/workload/WriteWorkload.java      | 32 ++++++++++++++++++--------------
 1 file changed, 18 insertions(+), 14 deletions(-)

diff --git a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
index 4023383..c482b3f 100644
--- a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
+++ b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
@@ -24,7 +24,6 @@ import java.sql.Connection;
 import java.sql.Date;
 import java.sql.PreparedStatement;
 import java.sql.SQLException;
-import java.sql.Timestamp;
 import java.sql.Types;
 import java.text.SimpleDateFormat;
 import java.util.ArrayList;
@@ -35,7 +34,6 @@ import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.Future;
 
-import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.pherf.PherfConstants;
 import org.apache.phoenix.pherf.PherfConstants.GeneratePhoenixStats;
 import org.apache.phoenix.pherf.configuration.Column;
@@ -294,26 +292,32 @@ public class WriteWorkload implements Workload {
                                 rowsCreated += result;
                             }
                         }
-                        connection.commit();
-                        duration = System.currentTimeMillis() - last;
-                        logger.info("Writer (" + Thread.currentThread().getName()
-                                + ") committed Batch. Total " + getBatchSize()
-                                + " rows for this thread (" + this.hashCode() + ") in ("
-                                + duration + ") Ms");
-
-                        if (i % PherfConstants.LOG_PER_NROWS == 0 && i != 0) {
-                            dataLoadThreadTime
-                                    .add(tableName, Thread.currentThread().getName(), i,
-                                            System.currentTimeMillis() - logStartTime);
-                            logStartTime = System.currentTimeMillis();
+                        try {
+                            connection.commit();
+                            duration = System.currentTimeMillis() - last;
+                            logger.info("Writer (" + Thread.currentThread().getName()
+                                    + ") committed Batch. Total " + getBatchSize()
+                                    + " rows for this thread (" + this.hashCode() + ") in ("
+                                    + duration + ") Ms");
+
+                            if (i % PherfConstants.LOG_PER_NROWS == 0 && i != 0) {
+                                dataLoadThreadTime.add(tableName,
+                                        Thread.currentThread().getName(), i,
+                                        System.currentTimeMillis() - logStartTime);
+                            }
+                        } catch (SQLException e) {
+                            logger.warn("SQLException in commit operation", e);
                         }
 
+                        logStartTime = System.currentTimeMillis();
                         // Pause for throttling if configured to do so
                         Thread.sleep(threadSleepDuration);
                         // Re-compute the start time for the next batch
                         last = System.currentTimeMillis();
                     }
                 }
+            } catch (SQLException e) {
+                throw e;
             } finally {
                 // Need to keep the statement open to send the remaining batch of updates
                 if (!useBatchApi && stmt != null) {



[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5199 Pherf overrides user provided properties like dataloader threadpool, monitor frequency etc with pherf.properties

2019-04-24 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 4eb05cc  PHOENIX-5199 Pherf overrides user provided properties like dataloader threadpool, monitor frequency etc with pherf.properties
4eb05cc is described below

commit 4eb05ccb1e551628df5fc45041a3bb335b4c9e76
Author: Monani Mihir 
AuthorDate: Fri Mar 15 16:48:12 2019 +0530

    PHOENIX-5199 Pherf overrides user provided properties like dataloader threadpool, monitor frequency etc with pherf.properties
---
 phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java       | 2 +-
 .../main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java    | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java
index 43061e0..d92ffde 100644
--- a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java
+++ b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java
@@ -156,7 +156,7 @@ public class Pherf {
             writerThreadPoolSize =
                     command.getOptionValue("writerThreadSize",
                             properties.getProperty("pherf.default.dataloader.threadpool"));
-            properties.setProperty("pherf. default.dataloader.threadpool", writerThreadPoolSize);
+            properties.setProperty("pherf.default.dataloader.threadpool", writerThreadPoolSize);
             label = command.getOptionValue("label", null);
             compareResults = command.getOptionValue("compare", null);
             compareType = command.hasOption("useAverageCompareType") ? CompareType.AVERAGE : CompareType.MINIMUM;
diff --git a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
index c482b3f..b340a2b 100644
--- a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
+++ b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
@@ -84,8 +84,8 @@ public class WriteWorkload implements Workload {
 
     public WriteWorkload(PhoenixUtil phoenixUtil, XMLConfigParser parser, Scenario scenario, GeneratePhoenixStats generateStatistics)
             throws Exception {
-        this(phoenixUtil, PherfConstants.create().getProperties(PherfConstants.PHERF_PROPERTIES,
-                false),
+        this(phoenixUtil,
+                PherfConstants.create().getProperties(PherfConstants.PHERF_PROPERTIES, true),
                 parser, scenario, generateStatistics);
     }
 


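The first hunk of the PHOENIX-5199 patch above fixes a property key containing a stray space: the user-supplied writer thread pool size was written under `"pherf. default.dataloader.threadpool"` while readers looked up `"pherf.default.dataloader.threadpool"`, so the override was silently lost. The effect can be sketched with plain `java.util.Properties` (a minimal sketch; the default value `8` and the override `32` are illustrative, not Pherf's actual values):

```java
import java.util.Properties;

// Sketch of the PHOENIX-5199 key mismatch: writing an override under a
// misspelled key leaves the original default in place for every reader
// that uses the correct key.
class PropertyKeySketch {
    static final String KEY = "pherf.default.dataloader.threadpool";

    /** Returns the value a reader of KEY sees after the override is applied. */
    static String effectiveThreadPool(boolean buggyWrite) {
        Properties p = new Properties();
        p.setProperty(KEY, "8");  // default shipped in pherf.properties (illustrative)
        String userValue = "32";  // e.g. supplied via the -writerThreadSize CLI option
        // The buggy code wrote under "pherf. default..." (note the space).
        p.setProperty(buggyWrite ? "pherf. default.dataloader.threadpool" : KEY, userValue);
        return p.getProperty(KEY);
    }

    public static void main(String[] args) {
        System.out.println("buggy : " + effectiveThreadPool(true));  // buggy : 8  (override lost)
        System.out.println("fixed : " + effectiveThreadPool(false)); // fixed : 32 (override wins)
    }
}
```

The second hunk complements this: passing `true` to `PherfConstants.getProperties(...)` makes `WriteWorkload` pick up user-provided properties instead of unconditionally reloading the bundled pherf.properties defaults.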

[phoenix] branch master updated: PHOENIX-5199 Pherf overrides user provided properties like dataloader threadpool, monitor frequency etc with pherf.properties

2019-04-24 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new c1e7744  PHOENIX-5199 Pherf overrides user provided properties like dataloader threadpool, monitor frequency etc with pherf.properties
c1e7744 is described below

commit c1e7744c4b2b4591c15a62421ab39c048fbc2090
Author: Monani Mihir 
AuthorDate: Fri Mar 15 16:48:12 2019 +0530

    PHOENIX-5199 Pherf overrides user provided properties like dataloader threadpool, monitor frequency etc with pherf.properties
---
 phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java       | 2 +-
 .../main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java    | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java
index 43061e0..d92ffde 100644
--- a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java
+++ b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java
@@ -156,7 +156,7 @@ public class Pherf {
             writerThreadPoolSize =
                     command.getOptionValue("writerThreadSize",
                             properties.getProperty("pherf.default.dataloader.threadpool"));
-            properties.setProperty("pherf. default.dataloader.threadpool", writerThreadPoolSize);
+            properties.setProperty("pherf.default.dataloader.threadpool", writerThreadPoolSize);
             label = command.getOptionValue("label", null);
             compareResults = command.getOptionValue("compare", null);
             compareType = command.hasOption("useAverageCompareType") ? CompareType.AVERAGE : CompareType.MINIMUM;
diff --git a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
index c482b3f..b340a2b 100644
--- a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
+++ b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
@@ -84,8 +84,8 @@ public class WriteWorkload implements Workload {
 
     public WriteWorkload(PhoenixUtil phoenixUtil, XMLConfigParser parser, Scenario scenario, GeneratePhoenixStats generateStatistics)
             throws Exception {
-        this(phoenixUtil, PherfConstants.create().getProperties(PherfConstants.PHERF_PROPERTIES,
-                false),
+        this(phoenixUtil,
+                PherfConstants.create().getProperties(PherfConstants.PHERF_PROPERTIES, true),
                 parser, scenario, generateStatistics);
     }
 



[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5199 Pherf overrides user provided properties like dataloader threadpool, monitor frequency etc with pherf.properties

2019-04-24 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new e7bbbf2  PHOENIX-5199 Pherf overrides user provided properties like dataloader threadpool, monitor frequency etc with pherf.properties
e7bbbf2 is described below

commit e7bbbf2a36012e2afd7712db5d94465156541e08
Author: Monani Mihir 
AuthorDate: Fri Mar 15 16:48:12 2019 +0530

    PHOENIX-5199 Pherf overrides user provided properties like dataloader threadpool, monitor frequency etc with pherf.properties
---
 phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java       | 2 +-
 .../main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java    | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java
index 43061e0..d92ffde 100644
--- a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java
+++ b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java
@@ -156,7 +156,7 @@ public class Pherf {
             writerThreadPoolSize =
                     command.getOptionValue("writerThreadSize",
                             properties.getProperty("pherf.default.dataloader.threadpool"));
-            properties.setProperty("pherf. default.dataloader.threadpool", writerThreadPoolSize);
+            properties.setProperty("pherf.default.dataloader.threadpool", writerThreadPoolSize);
             label = command.getOptionValue("label", null);
             compareResults = command.getOptionValue("compare", null);
             compareType = command.hasOption("useAverageCompareType") ? CompareType.AVERAGE : CompareType.MINIMUM;
diff --git a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
index c482b3f..b340a2b 100644
--- a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
+++ b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
@@ -84,8 +84,8 @@ public class WriteWorkload implements Workload {
 
     public WriteWorkload(PhoenixUtil phoenixUtil, XMLConfigParser parser, Scenario scenario, GeneratePhoenixStats generateStatistics)
             throws Exception {
-        this(phoenixUtil, PherfConstants.create().getProperties(PherfConstants.PHERF_PROPERTIES,
-                false),
+        this(phoenixUtil,
+                PherfConstants.create().getProperties(PherfConstants.PHERF_PROPERTIES, true),
                 parser, scenario, generateStatistics);
     }
 



[phoenix] branch 4.14-HBase-1.3 updated: PHOENIX-5199 Pherf overrides user provided properties like dataloader threadpool, monitor frequency etc with pherf.properties

2019-04-24 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.3 by this push:
 new ea6cf5f  PHOENIX-5199 Pherf overrides user provided properties like 
dataloader threadpool, monitor frequency etc with pherf.properties
ea6cf5f is described below

commit ea6cf5fe4028840cfe2124be2f5e1d3cf8497900
Author: Monani Mihir 
AuthorDate: Fri Mar 15 16:48:12 2019 +0530

PHOENIX-5199 Pherf overrides user provided properties like dataloader 
threadpool, monitor frequency etc with pherf.properties
---
 phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java   | 2 +-
 .../main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java| 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java
index 43061e0..d92ffde 100644
--- a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java
+++ b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java
@@ -156,7 +156,7 @@ public class Pherf {
             writerThreadPoolSize =
                     command.getOptionValue("writerThreadSize",
                             properties.getProperty("pherf.default.dataloader.threadpool"));
-            properties.setProperty("pherf. default.dataloader.threadpool", writerThreadPoolSize);
+            properties.setProperty("pherf.default.dataloader.threadpool", writerThreadPoolSize);
             label = command.getOptionValue("label", null);
             compareResults = command.getOptionValue("compare", null);
             compareType = command.hasOption("useAverageCompareType") ? CompareType.AVERAGE : CompareType.MINIMUM;
diff --git a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
index c482b3f..b340a2b 100644
--- a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
+++ b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
@@ -84,8 +84,8 @@ public class WriteWorkload implements Workload {
 
     public WriteWorkload(PhoenixUtil phoenixUtil, XMLConfigParser parser, Scenario scenario, GeneratePhoenixStats generateStatistics)
             throws Exception {
-        this(phoenixUtil, PherfConstants.create().getProperties(PherfConstants.PHERF_PROPERTIES,
-                false),
+        this(phoenixUtil,
+                PherfConstants.create().getProperties(PherfConstants.PHERF_PROPERTIES, true),
                 parser, scenario, generateStatistics);
     }
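The one-character fix above is easy to miss: `java.util.Properties` keys are exact strings, so writing the resolved pool size under a key with a stray space ("pherf. default...") meant later reads of the correct key never saw the override. A minimal standalone sketch of that failure mode (the property key is the real one from the diff; the surrounding class is illustrative only):

```java
import java.util.Properties;

public class PropertyKeyTypoDemo {
    // The key fixed in PHOENIX-5199; the demo class itself is hypothetical.
    static final String KEY = "pherf.default.dataloader.threadpool";

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty(KEY, "16"); // value resolved from user input

        // Before the fix: the override lands under a key with a stray space,
        // so a read of the correct key still returns the old value.
        props.setProperty("pherf. default.dataloader.threadpool", "8");
        System.out.println(props.getProperty(KEY)); // prints "16"

        // After the fix: the write targets the correct key and takes effect.
        props.setProperty(KEY, "8");
        System.out.println(props.getProperty(KEY)); // prints "8"
    }
}
```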
 



[phoenix] branch 4.x-HBase-1.2 updated: PHOENIX-5199 Pherf overrides user provided properties like dataloader threadpool, monitor frequency etc with pherf.properties

2019-04-24 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.2 by this push:
 new 746a8ef  PHOENIX-5199 Pherf overrides user provided properties like 
dataloader threadpool, monitor frequency etc with pherf.properties
746a8ef is described below

commit 746a8efa76dd8884db2873647750aa8a79fb6516
Author: Monani Mihir 
AuthorDate: Fri Mar 15 16:48:12 2019 +0530

PHOENIX-5199 Pherf overrides user provided properties like dataloader 
threadpool, monitor frequency etc with pherf.properties
---
 phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java   | 2 +-
 .../main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java| 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java
index 43061e0..d92ffde 100644
--- a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java
+++ b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java
@@ -156,7 +156,7 @@ public class Pherf {
             writerThreadPoolSize =
                     command.getOptionValue("writerThreadSize",
                             properties.getProperty("pherf.default.dataloader.threadpool"));
-            properties.setProperty("pherf. default.dataloader.threadpool", writerThreadPoolSize);
+            properties.setProperty("pherf.default.dataloader.threadpool", writerThreadPoolSize);
             label = command.getOptionValue("label", null);
             compareResults = command.getOptionValue("compare", null);
             compareType = command.hasOption("useAverageCompareType") ? CompareType.AVERAGE : CompareType.MINIMUM;
diff --git a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
index 019c326..0a05dc8 100644
--- a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
+++ b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
@@ -84,8 +84,8 @@ public class WriteWorkload implements Workload {
 
     public WriteWorkload(PhoenixUtil phoenixUtil, XMLConfigParser parser, Scenario scenario, GeneratePhoenixStats generateStatistics)
             throws Exception {
-        this(phoenixUtil, PherfConstants.create().getProperties(PherfConstants.PHERF_PROPERTIES,
-                false),
+        this(phoenixUtil,
+                PherfConstants.create().getProperties(PherfConstants.PHERF_PROPERTIES, true),
                 parser, scenario, generateStatistics);
     }
 



[phoenix] branch 4.14-HBase-1.2 updated: PHOENIX-5199 Pherf overrides user provided properties like dataloader threadpool, monitor frequency etc with pherf.properties

2019-04-24 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.14-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.2 by this push:
 new 34580c1  PHOENIX-5199 Pherf overrides user provided properties like 
dataloader threadpool, monitor frequency etc with pherf.properties
34580c1 is described below

commit 34580c17b22148105cd5749d08607f5d635a1f6d
Author: Monani Mihir 
AuthorDate: Fri Mar 15 16:48:12 2019 +0530

PHOENIX-5199 Pherf overrides user provided properties like dataloader 
threadpool, monitor frequency etc with pherf.properties
---
 phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java   | 2 +-
 .../main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java| 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java
index 43061e0..d92ffde 100644
--- a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java
+++ b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java
@@ -156,7 +156,7 @@ public class Pherf {
             writerThreadPoolSize =
                     command.getOptionValue("writerThreadSize",
                             properties.getProperty("pherf.default.dataloader.threadpool"));
-            properties.setProperty("pherf. default.dataloader.threadpool", writerThreadPoolSize);
+            properties.setProperty("pherf.default.dataloader.threadpool", writerThreadPoolSize);
             label = command.getOptionValue("label", null);
             compareResults = command.getOptionValue("compare", null);
             compareType = command.hasOption("useAverageCompareType") ? CompareType.AVERAGE : CompareType.MINIMUM;
diff --git a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
index 019c326..0a05dc8 100644
--- a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
+++ b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
@@ -84,8 +84,8 @@ public class WriteWorkload implements Workload {
 
     public WriteWorkload(PhoenixUtil phoenixUtil, XMLConfigParser parser, Scenario scenario, GeneratePhoenixStats generateStatistics)
             throws Exception {
-        this(phoenixUtil, PherfConstants.create().getProperties(PherfConstants.PHERF_PROPERTIES,
-                false),
+        this(phoenixUtil,
+                PherfConstants.create().getProperties(PherfConstants.PHERF_PROPERTIES, true),
                 parser, scenario, generateStatistics);
     }
 



[phoenix] branch 4.14-HBase-1.4 updated: PHOENIX-5199 Pherf overrides user provided properties like dataloader threadpool, monitor frequency etc with pherf.properties

2019-04-24 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.14-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.4 by this push:
 new 2534a95  PHOENIX-5199 Pherf overrides user provided properties like 
dataloader threadpool, monitor frequency etc with pherf.properties
2534a95 is described below

commit 2534a954b58df027c80ea9877e13131302e84092
Author: Monani Mihir 
AuthorDate: Fri Mar 15 16:48:12 2019 +0530

PHOENIX-5199 Pherf overrides user provided properties like dataloader 
threadpool, monitor frequency etc with pherf.properties
---
 phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java   | 2 +-
 .../main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java| 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java
index 43061e0..d92ffde 100644
--- a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java
+++ b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/Pherf.java
@@ -156,7 +156,7 @@ public class Pherf {
             writerThreadPoolSize =
                     command.getOptionValue("writerThreadSize",
                             properties.getProperty("pherf.default.dataloader.threadpool"));
-            properties.setProperty("pherf. default.dataloader.threadpool", writerThreadPoolSize);
+            properties.setProperty("pherf.default.dataloader.threadpool", writerThreadPoolSize);
             label = command.getOptionValue("label", null);
             compareResults = command.getOptionValue("compare", null);
             compareType = command.hasOption("useAverageCompareType") ? CompareType.AVERAGE : CompareType.MINIMUM;
diff --git a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
index c482b3f..b340a2b 100644
--- a/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
+++ b/phoenix-pherf/src/main/java/org/apache/phoenix/pherf/workload/WriteWorkload.java
@@ -84,8 +84,8 @@ public class WriteWorkload implements Workload {
 
     public WriteWorkload(PhoenixUtil phoenixUtil, XMLConfigParser parser, Scenario scenario, GeneratePhoenixStats generateStatistics)
             throws Exception {
-        this(phoenixUtil, PherfConstants.create().getProperties(PherfConstants.PHERF_PROPERTIES,
-                false),
+        this(phoenixUtil,
+                PherfConstants.create().getProperties(PherfConstants.PHERF_PROPERTIES, true),
                 parser, scenario, generateStatistics);
     }
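The intent of PHOENIX-5199 is precedence: user-supplied settings should win over the bundled pherf.properties defaults rather than be clobbered by them. That layering can be modeled with `java.util.Properties` default chaining. This sketch uses hypothetical values and does not reproduce Pherf's actual loading code:

```java
import java.util.Properties;

public class LayeredPropertiesDemo {
    public static void main(String[] args) {
        Properties defaults = new Properties();          // plays the role of pherf.properties
        defaults.setProperty("pherf.default.dataloader.threadpool", "4");

        Properties effective = new Properties(defaults); // user layer chained on top
        effective.setProperty("pherf.default.dataloader.threadpool", "32"); // user override

        // Lookup falls back to the defaults layer only when the user layer
        // lacks the key; an explicit override is never clobbered.
        System.out.println(effective.getProperty("pherf.default.dataloader.threadpool")); // prints "32"
        // Key absent in both layers: the supplied fallback value is returned.
        System.out.println(effective.getProperty("pherf.monitorFrequency", "5000"));      // prints "5000"
    }
}
```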
 



[phoenix] branch master updated: PHOENIX-5231 Configurable Stats Cache

2019-05-15 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 7b0246e  PHOENIX-5231 Configurable Stats Cache
7b0246e is described below

commit 7b0246e4f031eabab12a4176052de5ac7bcbe1f3
Author: Daniel 
AuthorDate: Fri Apr 5 19:00:23 2019 -0700

PHOENIX-5231 Configurable Stats Cache
---
 .../phoenix/end2end/ConfigurableCacheIT.java   | 150 
 .../PhoenixNonRetryableRuntimeException.java   |  34 +++
 .../phoenix/query/ConnectionQueryServicesImpl.java |  17 +-
 .../query/ConnectionlessQueryServicesImpl.java |  18 +-
 .../query/DefaultGuidePostsCacheFactory.java   |  45 
 .../org/apache/phoenix/query/EmptyStatsLoader.java |  35 +++
 .../org/apache/phoenix/query/GuidePostsCache.java  | 255 +
 .../phoenix/query/GuidePostsCacheFactory.java  |  46 
 .../apache/phoenix/query/GuidePostsCacheImpl.java  | 148 
 .../phoenix/query/GuidePostsCacheProvider.java |  79 +++
 .../phoenix/query/GuidePostsCacheWrapper.java  |  74 ++
 .../phoenix/query/ITGuidePostsCacheFactory.java|  52 +
 .../phoenix/query/PhoenixStatsCacheLoader.java |   2 +-
 .../org/apache/phoenix/query/QueryServices.java|   5 +
 .../apache/phoenix/query/QueryServicesOptions.java |   2 +
 .../org/apache/phoenix/query/StatsLoaderImpl.java  | 104 +
 ...org.apache.phoenix.query.GuidePostsCacheFactory |  17 ++
 .../phoenix/query/GuidePostsCacheProviderTest.java | 122 ++
 .../phoenix/query/GuidePostsCacheWrapperTest.java  | 106 +
 .../phoenix/query/PhoenixStatsCacheLoaderTest.java |   2 +-
 .../PhoenixStatsCacheRemovalListenerTest.java  |   2 +-
 ...org.apache.phoenix.query.GuidePostsCacheFactory |  19 ++
 22 files changed, 1073 insertions(+), 261 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConfigurableCacheIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConfigurableCacheIT.java
new file mode 100644
index 0000000..4043052
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConfigurableCacheIT.java
@@ -0,0 +1,150 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable
+ * law or agreed to in writing, software distributed under the License is distributed on an "AS IS"
+ * BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License
+ * for the specific language governing permissions and limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.fail;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.util.Properties;
+
+import org.apache.phoenix.query.ITGuidePostsCacheFactory;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * This tests that the configured client statistics cache is used during execution.  These tests
+ * use a class ITGuidePostsCacheFactory which is for testing only that keeps track of the number
+ * of cache instances generated.
+ */
+public class ConfigurableCacheIT extends ParallelStatsEnabledIT {
+
+    static String table;
+
+    @BeforeClass
+    public static void initTables() throws Exception {
+        table = generateUniqueName();
+        // Use phoenix test driver for setup
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            conn.createStatement()
+                    .execute("CREATE TABLE " + table
+                            + " (k INTEGER PRIMARY KEY, c1.a bigint, c2.b bigint)"
+                            + " GUIDE_POSTS_WIDTH=20");
+            conn.createStatement().execute("upsert into " + table + " values (100,1,3)");
+            conn.createStatement().execute("upsert into " + table + " values (101,2,4)");
+            conn.createStatement().execute("upsert into " + table + " values (102,2,4)");
+            conn.createStatement().execute("upsert into " + table + " values (103,2,4)");
+            conn.createStatement().execute("
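The file list above adds a `GuidePostsCacheFactory` interface, a `DefaultGuidePostsCacheFactory`, a `GuidePostsCacheProvider`, and provider-configuration resource files: the client statistics cache implementation becomes selectable at run time. A rough, self-contained sketch of that pattern, loading a factory implementation by class name from configuration; the names here (`CacheFactory`, `stats.cache.factory.class`, the demo classes) are illustrative assumptions, not Phoenix's real API:

```java
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical factory contract standing in for GuidePostsCacheFactory.
interface CacheFactory {
    Map<String, Object> createCache();
}

// Hypothetical default implementation, used when no override is configured.
class DefaultCacheFactory implements CacheFactory {
    public Map<String, Object> createCache() {
        return new ConcurrentHashMap<>();
    }
}

public class ConfigurableCacheDemo {
    // Hypothetical configuration key selecting the factory class.
    static final String FACTORY_PROP = "stats.cache.factory.class";

    static CacheFactory loadFactory(Properties conf) {
        // Fall back to the default factory when the property is unset.
        String name = conf.getProperty(FACTORY_PROP, DefaultCacheFactory.class.getName());
        try {
            return (CacheFactory) Class.forName(name).getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("Cannot load cache factory " + name, e);
        }
    }

    public static void main(String[] args) {
        Properties conf = new Properties(); // no override configured
        CacheFactory factory = loadFactory(conf);
        Map<String, Object> cache = factory.createCache();
        cache.put("guidePosts:T1", new Object());
        System.out.println(factory.getClass().getSimpleName() + " " + cache.size()); // prints "DefaultCacheFactory 1"
    }
}
```

Phoenix's actual implementation additionally lists candidate factories in `META-INF/services`-style files (the `org.apache.phoenix.query.GuidePostsCacheFactory` resources in the diff stat), which is the standard `java.util.ServiceLoader` registration mechanism.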

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5231 Configurable Stats Cache

2019-05-15 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 1450b3d  PHOENIX-5231 Configurable Stats Cache
1450b3d is described below

commit 1450b3d251743122f55c3b685641993405b0d432
Author: Daniel Wong <41923099+dbw...@users.noreply.github.com>
AuthorDate: Wed May 15 21:38:15 2019 -0700

PHOENIX-5231 Configurable Stats Cache
---
 .../phoenix/end2end/ConfigurableCacheIT.java   | 150 
 .../PhoenixNonRetryableRuntimeException.java   |  34 +++
 .../phoenix/query/ConnectionQueryServicesImpl.java |  17 +-
 .../query/ConnectionlessQueryServicesImpl.java |  18 +-
 .../query/DefaultGuidePostsCacheFactory.java   |  45 
 .../org/apache/phoenix/query/EmptyStatsLoader.java |  35 +++
 .../org/apache/phoenix/query/GuidePostsCache.java  | 259 +
 .../phoenix/query/GuidePostsCacheFactory.java  |  46 
 .../apache/phoenix/query/GuidePostsCacheImpl.java  | 148 
 .../phoenix/query/GuidePostsCacheProvider.java |  79 +++
 .../phoenix/query/GuidePostsCacheWrapper.java  |  74 ++
 .../phoenix/query/ITGuidePostsCacheFactory.java|  52 +
 .../phoenix/query/PhoenixStatsCacheLoader.java |   2 +-
 .../org/apache/phoenix/query/QueryServices.java|   5 +
 .../apache/phoenix/query/QueryServicesOptions.java |   2 +
 .../org/apache/phoenix/query/StatsLoaderImpl.java  | 104 +
 ...org.apache.phoenix.query.GuidePostsCacheFactory |  17 ++
 .../phoenix/query/GuidePostsCacheProviderTest.java | 122 ++
 .../phoenix/query/GuidePostsCacheWrapperTest.java  | 106 +
 .../phoenix/query/PhoenixStatsCacheLoaderTest.java |   2 +-
 .../PhoenixStatsCacheRemovalListenerTest.java  |   2 +-
 ...org.apache.phoenix.query.GuidePostsCacheFactory |  19 ++
 22 files changed, 1073 insertions(+), 265 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConfigurableCacheIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConfigurableCacheIT.java
new file mode 100644
index 0000000..4043052
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConfigurableCacheIT.java
@@ -0,0 +1,150 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable
+ * law or agreed to in writing, software distributed under the License is distributed on an "AS IS"
+ * BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License
+ * for the specific language governing permissions and limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.fail;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.util.Properties;
+
+import org.apache.phoenix.query.ITGuidePostsCacheFactory;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * This tests that the configured client statistics cache is used during execution.  These tests
+ * use a class ITGuidePostsCacheFactory which is for testing only that keeps track of the number
+ * of cache instances generated.
+ */
+public class ConfigurableCacheIT extends ParallelStatsEnabledIT {
+
+    static String table;
+
+    @BeforeClass
+    public static void initTables() throws Exception {
+        table = generateUniqueName();
+        // Use phoenix test driver for setup
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            conn.createStatement()
+                    .execute("CREATE TABLE " + table
+                            + " (k INTEGER PRIMARY KEY, c1.a bigint, c2.b bigint)"
+                            + " GUIDE_POSTS_WIDTH=20");
+            conn.createStatement().execute("upsert into " + table + " values (100,1,3)");
+            conn.createStatement().execute("upsert into " + table + " values (101,2,4)");
+            conn.createStatement().execute("upsert into " + table + " values (102,2,4)");
+            conn.createStatement().execute("upsert into " + table + " values (103,2,4)");

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5231 Configurable Stats Cache

2019-05-15 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new a4f1736  PHOENIX-5231 Configurable Stats Cache
a4f1736 is described below

commit a4f17362a3aae4115f3410eb1575cbf408abce8b
Author: Daniel Wong <41923099+dbw...@users.noreply.github.com>
AuthorDate: Wed May 15 21:38:35 2019 -0700

PHOENIX-5231 Configurable Stats Cache
---
 .../phoenix/end2end/ConfigurableCacheIT.java   | 150 
 .../PhoenixNonRetryableRuntimeException.java   |  34 +++
 .../phoenix/query/ConnectionQueryServicesImpl.java |  17 +-
 .../query/ConnectionlessQueryServicesImpl.java |  18 +-
 .../query/DefaultGuidePostsCacheFactory.java   |  45 
 .../org/apache/phoenix/query/EmptyStatsLoader.java |  35 +++
 .../org/apache/phoenix/query/GuidePostsCache.java  | 262 +
 .../phoenix/query/GuidePostsCacheFactory.java  |  46 
 .../apache/phoenix/query/GuidePostsCacheImpl.java  | 148 
 .../phoenix/query/GuidePostsCacheProvider.java |  79 +++
 .../phoenix/query/GuidePostsCacheWrapper.java  |  74 ++
 .../phoenix/query/ITGuidePostsCacheFactory.java|  52 
 .../phoenix/query/PhoenixStatsCacheLoader.java |   2 +-
 .../org/apache/phoenix/query/QueryServices.java|   5 +
 .../apache/phoenix/query/QueryServicesOptions.java |   2 +
 .../org/apache/phoenix/query/StatsLoaderImpl.java  | 104 
 ...org.apache.phoenix.query.GuidePostsCacheFactory |  17 ++
 .../phoenix/query/GuidePostsCacheProviderTest.java | 122 ++
 .../phoenix/query/GuidePostsCacheWrapperTest.java  | 106 +
 .../phoenix/query/PhoenixStatsCacheLoaderTest.java |   2 +-
 .../PhoenixStatsCacheRemovalListenerTest.java  |   2 +-
 ...org.apache.phoenix.query.GuidePostsCacheFactory |  19 ++
 22 files changed, 1073 insertions(+), 268 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConfigurableCacheIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConfigurableCacheIT.java
new file mode 100644
index 0000000..4043052
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConfigurableCacheIT.java
@@ -0,0 +1,150 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable
+ * law or agreed to in writing, software distributed under the License is distributed on an "AS IS"
+ * BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License
+ * for the specific language governing permissions and limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.fail;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.util.Properties;
+
+import org.apache.phoenix.query.ITGuidePostsCacheFactory;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * This tests that the configured client statistics cache is used during execution.  These tests
+ * use a class ITGuidePostsCacheFactory which is for testing only that keeps track of the number
+ * of cache instances generated.
+ */
+public class ConfigurableCacheIT extends ParallelStatsEnabledIT {
+
+    static String table;
+
+    @BeforeClass
+    public static void initTables() throws Exception {
+        table = generateUniqueName();
+        // Use phoenix test driver for setup
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            conn.createStatement()
+                    .execute("CREATE TABLE " + table
+                            + " (k INTEGER PRIMARY KEY, c1.a bigint, c2.b bigint)"
+                            + " GUIDE_POSTS_WIDTH=20");
+            conn.createStatement().execute("upsert into " + table + " values (100,1,3)");
+            conn.createStatement().execute("upsert into " + table + " values (101,2,4)");
+            conn.createStatement().execute("upsert into " + table + " values (102,2,4)");
+            conn.createStatement().execute("upsert into " + table + " values (103,2,4)");

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5231 Configurable Stats Cache

2019-05-15 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new a75f7ee  PHOENIX-5231 Configurable Stats Cache
a75f7ee is described below

commit a75f7ee0a398aa2fb25d48f827c7c20415919352
Author: Daniel Wong <41923099+dbw...@users.noreply.github.com>
AuthorDate: Wed May 15 21:38:27 2019 -0700

PHOENIX-5231 Configurable Stats Cache
---
 .../phoenix/end2end/ConfigurableCacheIT.java   | 150 
 .../PhoenixNonRetryableRuntimeException.java   |  34 +++
 .../phoenix/query/ConnectionQueryServicesImpl.java |  17 +-
 .../query/ConnectionlessQueryServicesImpl.java |  18 +-
 .../query/DefaultGuidePostsCacheFactory.java   |  45 
 .../org/apache/phoenix/query/EmptyStatsLoader.java |  35 +++
 .../org/apache/phoenix/query/GuidePostsCache.java  | 259 +
 .../phoenix/query/GuidePostsCacheFactory.java  |  46 
 .../apache/phoenix/query/GuidePostsCacheImpl.java  | 148 
 .../phoenix/query/GuidePostsCacheProvider.java |  79 +++
 .../phoenix/query/GuidePostsCacheWrapper.java  |  74 ++
 .../phoenix/query/ITGuidePostsCacheFactory.java|  52 +
 .../phoenix/query/PhoenixStatsCacheLoader.java |   2 +-
 .../org/apache/phoenix/query/QueryServices.java|   5 +
 .../apache/phoenix/query/QueryServicesOptions.java |   2 +
 .../org/apache/phoenix/query/StatsLoaderImpl.java  | 104 +
 ...org.apache.phoenix.query.GuidePostsCacheFactory |  17 ++
 .../phoenix/query/GuidePostsCacheProviderTest.java | 122 ++
 .../phoenix/query/GuidePostsCacheWrapperTest.java  | 106 +
 .../phoenix/query/PhoenixStatsCacheLoaderTest.java |   2 +-
 .../PhoenixStatsCacheRemovalListenerTest.java  |   2 +-
 ...org.apache.phoenix.query.GuidePostsCacheFactory |  19 ++
 22 files changed, 1073 insertions(+), 265 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConfigurableCacheIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConfigurableCacheIT.java
new file mode 100644
index 0000000..4043052
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConfigurableCacheIT.java
@@ -0,0 +1,150 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable
+ * law or agreed to in writing, software distributed under the License is distributed on an "AS IS"
+ * BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License
+ * for the specific language governing permissions and limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.fail;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.util.Properties;
+
+import org.apache.phoenix.query.ITGuidePostsCacheFactory;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * This tests that the configured client statistics cache is used during execution.  These tests
+ * use a class ITGuidePostsCacheFactory which is for testing only that keeps track of the number
+ * of cache instances generated.
+ */
+public class ConfigurableCacheIT extends ParallelStatsEnabledIT {
+
+    static String table;
+
+    @BeforeClass
+    public static void initTables() throws Exception {
+        table = generateUniqueName();
+        // Use phoenix test driver for setup
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            conn.createStatement()
+                    .execute("CREATE TABLE " + table
+                            + " (k INTEGER PRIMARY KEY, c1.a bigint, c2.b bigint)"
+                            + " GUIDE_POSTS_WIDTH=20");
+            conn.createStatement().execute("upsert into " + table + " values (100,1,3)");
+            conn.createStatement().execute("upsert into " + table + " values (101,2,4)");
+            conn.createStatement().execute("upsert into " + table + " values (102,2,4)");
+            conn.createStatement().execute("upsert into " + table + " values (103,2,4)");

[phoenix] branch 4.x-HBase-1.2 updated: PHOENIX-5231 Configurable Stats Cache

2019-05-15 Thread karanmehta93
This is an automated email from the ASF dual-hosted git repository.

karanmehta93 pushed a commit to branch 4.x-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.2 by this push:
 new 34ffbb9  PHOENIX-5231 Configurable Stats Cache
34ffbb9 is described below

commit 34ffbb9ad4ed66ee5f8728faa6c22dba5238ad8d
Author: Daniel Wong <41923099+dbw...@users.noreply.github.com>
AuthorDate: Wed May 15 21:38:46 2019 -0700

PHOENIX-5231 Configurable Stats Cache
---
 .../phoenix/end2end/ConfigurableCacheIT.java   | 150 
 .../PhoenixNonRetryableRuntimeException.java   |  34 +++
 .../phoenix/query/ConnectionQueryServicesImpl.java |  17 +-
 .../query/ConnectionlessQueryServicesImpl.java |  18 +-
 .../query/DefaultGuidePostsCacheFactory.java   |  45 
 .../org/apache/phoenix/query/EmptyStatsLoader.java |  35 +++
 .../org/apache/phoenix/query/GuidePostsCache.java  | 262 +
 .../phoenix/query/GuidePostsCacheFactory.java  |  46 
 .../apache/phoenix/query/GuidePostsCacheImpl.java  | 148 
 .../phoenix/query/GuidePostsCacheProvider.java |  79 +++
 .../phoenix/query/GuidePostsCacheWrapper.java  |  74 ++
 .../phoenix/query/ITGuidePostsCacheFactory.java|  52 
 .../phoenix/query/PhoenixStatsCacheLoader.java |   2 +-
 .../org/apache/phoenix/query/QueryServices.java|   5 +
 .../apache/phoenix/query/QueryServicesOptions.java |   2 +
 .../org/apache/phoenix/query/StatsLoaderImpl.java  | 104 
 ...org.apache.phoenix.query.GuidePostsCacheFactory |  17 ++
 .../phoenix/query/GuidePostsCacheProviderTest.java | 122 ++
 .../phoenix/query/GuidePostsCacheWrapperTest.java  | 106 +
 .../phoenix/query/PhoenixStatsCacheLoaderTest.java |   2 +-
 .../PhoenixStatsCacheRemovalListenerTest.java  |   2 +-
 ...org.apache.phoenix.query.GuidePostsCacheFactory |  19 ++
 22 files changed, 1073 insertions(+), 268 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConfigurableCacheIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConfigurableCacheIT.java
new file mode 100644
index 0000000..4043052
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConfigurableCacheIT.java
@@ -0,0 +1,150 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more contributor license
+ * agreements. See the NOTICE file distributed with this work for additional information regarding
+ * copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License. You may obtain a
+ * copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable
+ * law or agreed to in writing, software distributed under the License is distributed on an "AS IS"
+ * BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License
+ * for the specific language governing permissions and limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.fail;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.util.Properties;
+
+import org.apache.phoenix.query.ITGuidePostsCacheFactory;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * This tests that the configured client statistics cache is used during execution.  These tests
+ * use a class ITGuidePostsCacheFactory which is for testing only that keeps track of the number
+ * of cache instances generated.
+ */
+public class ConfigurableCacheIT extends ParallelStatsEnabledIT {
+
+    static String table;
+
+    @BeforeClass
+    public static void initTables() throws Exception {
+        table = generateUniqueName();
+        // Use phoenix test driver for setup
+        try (Connection conn = DriverManager.getConnection(getUrl())) {
+            conn.createStatement()
+                    .execute("CREATE TABLE " + table
+                            + " (k INTEGER PRIMARY KEY, c1.a bigint, c2.b bigint)"
+                            + " GUIDE_POSTS_WIDTH=20");
+            conn.createStatement().execute("upsert into " + table + " values (100,1,3)");
+            conn.createStatement().execute("upsert into " + table + " values (101,2,4)");
+            conn.createStatement().execute("upsert into " + table + " values (102,2,4)");
+            conn.createStatement().execute("upsert into " + table + " values (103,2,4)");

[phoenix] branch phoenix-stats updated: PHOENIX-5231 Configurable Stats Cache

2019-05-15 Thread karanmehta93

karanmehta93 pushed a commit to branch phoenix-stats
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/phoenix-stats by this push:
 new 097bc00  PHOENIX-5231 Configurable Stats Cache
097bc00 is described below

commit 097bc00b5ad66a90b725bc9c222389f5b8f3361e
Author: Daniel Wong <41923099+dbw...@users.noreply.github.com>
AuthorDate: Wed May 15 21:38:54 2019 -0700

PHOENIX-5231 Configurable Stats Cache
---
 .../phoenix/end2end/ConfigurableCacheIT.java   | 150 
 .../PhoenixNonRetryableRuntimeException.java   |  34 +++
 .../phoenix/query/ConnectionQueryServicesImpl.java |  17 +-
 .../query/ConnectionlessQueryServicesImpl.java |  18 +-
 .../query/DefaultGuidePostsCacheFactory.java   |  45 
 .../org/apache/phoenix/query/EmptyStatsLoader.java |  35 +++
 .../org/apache/phoenix/query/GuidePostsCache.java  | 259 +
 .../phoenix/query/GuidePostsCacheFactory.java  |  46 
 .../apache/phoenix/query/GuidePostsCacheImpl.java  | 148 
 .../phoenix/query/GuidePostsCacheProvider.java |  79 +++
 .../phoenix/query/GuidePostsCacheWrapper.java  |  74 ++
 .../phoenix/query/ITGuidePostsCacheFactory.java|  52 +
 .../phoenix/query/PhoenixStatsCacheLoader.java |   2 +-
 .../org/apache/phoenix/query/QueryServices.java|   5 +
 .../apache/phoenix/query/QueryServicesOptions.java |   2 +
 .../org/apache/phoenix/query/StatsLoaderImpl.java  | 104 +
 ...org.apache.phoenix.query.GuidePostsCacheFactory |  17 ++
 .../phoenix/query/GuidePostsCacheProviderTest.java | 122 ++
 .../phoenix/query/GuidePostsCacheWrapperTest.java  | 106 +
 .../phoenix/query/PhoenixStatsCacheLoaderTest.java |   2 +-
 .../PhoenixStatsCacheRemovalListenerTest.java  |   2 +-
 ...org.apache.phoenix.query.GuidePostsCacheFactory |  19 ++
 22 files changed, 1073 insertions(+), 265 deletions(-)
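The file list above introduces a pluggable statistics cache: a GuidePostsCacheFactory interface, a DefaultGuidePostsCacheFactory, and a GuidePostsCacheProvider that resolves the configured factory at runtime. As a rough illustration of that pattern — all class and method names below are invented for this sketch, not Phoenix's actual API — a factory can be resolved from a configured class name via reflection:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Minimal factory contract, in the spirit of GuidePostsCacheFactory (illustrative only).
interface CacheFactory {
    ConcurrentMap<String, Object> createCache();
}

// Fallback implementation, analogous in role to DefaultGuidePostsCacheFactory.
class DefaultCacheFactory implements CacheFactory {
    @Override
    public ConcurrentMap<String, Object> createCache() {
        return new ConcurrentHashMap<>();
    }
}

public class CacheProvider {
    // Resolve the factory from a configuration property value (a class name).
    public static CacheFactory loadFactory(String className) {
        try {
            return (CacheFactory) Class.forName(className)
                    .getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            // A real implementation would fail fast with a non-retryable exception.
            throw new RuntimeException("Cannot load cache factory: " + className, e);
        }
    }

    public static void main(String[] args) {
        CacheFactory factory = loadFactory(DefaultCacheFactory.class.getName());
        ConcurrentMap<String, Object> cache = factory.createCache();
        cache.put("guidePosts", new Object());
        System.out.println(cache.size()); // prints 1
    }
}
```

The point of the indirection is that tests (like ITGuidePostsCacheFactory in the diffstat) can substitute an instrumented factory purely through configuration.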

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConfigurableCacheIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConfigurableCacheIT.java
new file mode 100644
index 000..4043052
--- /dev/null
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConfigurableCacheIT.java
@@ -0,0 +1,150 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license
+ * agreements. See the NOTICE file distributed with this work for additional 
information regarding
+ * copyright ownership. The ASF licenses this file to you under the Apache 
License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the 
License. You may obtain a
+ * copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless 
required by applicable
+ * law or agreed to in writing, software distributed under the License is 
distributed on an "AS IS"
+ * BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 
implied. See the License
+ * for the specific language governing permissions and limitations under the 
License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.fail;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.util.Properties;
+
+import org.apache.phoenix.query.ITGuidePostsCacheFactory;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Tests that the configured client statistics cache is used during execution. These tests
+ * use ITGuidePostsCacheFactory, a test-only factory that keeps track of the number of
+ * cache instances created.
+ */
+public class ConfigurableCacheIT extends ParallelStatsEnabledIT {
+
+static String table;
+
+@BeforeClass
+public static void initTables() throws Exception {
+table = generateUniqueName();
+// Use phoenix test driver for setup
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+conn.createStatement()
+.execute("CREATE TABLE " + table
++ " (k INTEGER PRIMARY KEY, c1.a bigint, c2.b 
bigint)"
++ " GUIDE_POSTS_WIDTH=20");
+conn.createStatement().execute("upsert into " + table + " values 
(100,1,3)");
+conn.createStatement().execute("upsert into " + table + " values 
(101,2,4)");
+conn.createStatement().execute("upsert into " + table + " values 
(102,2,4)");
+conn.createStatement().execute("upsert into " + table + " values 
(103,2,

[phoenix-queryserver] branch master updated: PHOENIX-5255 Create Orchestrator for QueryServerCanaryTool in phoenix-queryserver project

2019-05-18 Thread karanmehta93

karanmehta93 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix-queryserver.git


The following commit(s) were added to refs/heads/master by this push:
 new d6f93a5  PHOENIX-5255 Create Orchestrator for QueryServerCanaryTool in 
phoenix-queryserver project
d6f93a5 is described below

commit d6f93a542b31ca417af97751bf5d37a67af6963d
Author: Swaroopa Kadam 
AuthorDate: Sat May 18 13:15:18 2019 -0700

PHOENIX-5255 Create Orchestrator for QueryServerCanaryTool in 
phoenix-queryserver project
---
 pom.xml|  45 ++-
 queryserver-orchestrator/pom.xml   |  73 +++
 .../QueryServerCanaryOrchestrator.java | 141 +
 .../orchestrator/TestExecutorClient.java   | 129 +++
 .../queryserver/orchestrator/ToolWrapper.java  |  43 +++
 5 files changed, 429 insertions(+), 2 deletions(-)

diff --git a/pom.xml b/pom.xml
index 0f87d75..caff061 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1,4 +1,24 @@
 
+
 <project xmlns="http://maven.apache.org/POM/4.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
@@ -30,6 +50,7 @@
         <module>load-balancer</module>
         <module>assembly</module>
         <module>phoenix-client</module>
+        <module>queryserver-orchestrator</module>
 
 
 
@@ -53,7 +74,7 @@
 1.4.0
 2.7.5
 2.12.0
-        <phoenix.version>4.15.0-HBase-1.4-SNAPSHOT</phoenix.version>
+        <phoenix.version>4.15.0-HBase-1.3-SNAPSHOT</phoenix.version>
 
 
 2.5.0
@@ -233,7 +254,7 @@
 
                 <groupId>org.apache.maven.plugins</groupId>
                 <artifactId>maven-shade-plugin</artifactId>
-                <version>2.4.3</version>
+                <version>3.2.0</version>
 
 
 
@@ -508,6 +529,16 @@
 
         <dependency>
             <groupId>org.apache.curator</groupId>
+            <artifactId>curator-recipes</artifactId>
+            <version>${curator.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.curator</groupId>
+            <artifactId>curator-framework</artifactId>
+            <version>${curator.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.curator</groupId>
             <artifactId>curator-test</artifactId>
             <version>${curator.version}</version>
         </dependency>
@@ -571,6 +602,16 @@
             <artifactId>javax.servlet-api</artifactId>
             <version>${servlet.api.version}</version>
         </dependency>
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-api</artifactId>
+            <version>1.8.0-alpha2</version>
+        </dependency>
+        <dependency>
+            <groupId>net.sourceforge.argparse4j</groupId>
+            <artifactId>argparse4j</artifactId>
+            <version>0.8.1</version>
+        </dependency>
     </dependencies>
 </dependencyManagement>
 
diff --git a/queryserver-orchestrator/pom.xml b/queryserver-orchestrator/pom.xml
new file mode 100644
index 000..de00fe9
--- /dev/null
+++ b/queryserver-orchestrator/pom.xml
@@ -0,0 +1,73 @@
+
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+
+    <parent>
+        <artifactId>phoenix-queryserver</artifactId>
+        <groupId>org.apache.phoenix</groupId>
+        <version>1.0.0-SNAPSHOT</version>
+    </parent>
+    <modelVersion>4.0.0</modelVersion>
+
+    <artifactId>queryserver-orchestrator</artifactId>
+
+    <dependencies>
+        <dependency>
+            <groupId>org.apache.phoenix</groupId>
+            <artifactId>queryserver-client</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.calcite.avatica</groupId>
+            <artifactId>avatica-core</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.slf4j</groupId>
+            <artifactId>slf4j-api</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>net.sourceforge.argparse4j</groupId>
+            <artifactId>argparse4j</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.curator</groupId>
+            <artifactId>curator-framework</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.curator</groupId>
+            <artifactId>curator-recipes</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.apache.phoenix</groupId>
+            <artifactId>phoenix-core</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.mockito</groupId>
+            <artifactId>mockito-all</artifactId>
+        </dependency>
+        <dependency>
+            <groupId>org.mockito</groupId>
+            <artifactId>mockito-all</artifactId>
+        </dependency>
+    </dependencies>
+</project>
\ No newline at end of file
diff --git 
a/queryserver-orchestrator/src/main/java/org/apache/phoenix/queryserver/orchestrator/QueryServerCanaryOrchestrator.java
 
b/queryserver-orchestrator/src/main/java/org/apache/phoenix/queryserver/orchestrator/QueryServerCanaryOrchestrator.java
new file mode 100644
index 000..4279233
--- /dev/null
+++ 
b/queryserver-orchestrator/src/main/java/org/apache/phoenix/queryserver/orchestrator/QueryServerCanaryOrchestrator.java
@@ -0,0 +1,141 @@
+/**
+ *
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * U

[phoenix-queryserver] branch master updated: PHOENIX-5221 Phoenix Kerberos Integration tests failure on Redhat Linux

2019-05-24 Thread karanmehta93

karanmehta93 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix-queryserver.git


The following commit(s) were added to refs/heads/master by this push:
 new e3d28be  PHOENIX-5221 Phoenix Kerberos Integration tests failure on 
Redhat Linux
e3d28be is described below

commit e3d28beef5a466c7562ef5a3d65119c1ed241d79
Author: m2je 
AuthorDate: Fri May 24 07:31:18 2019 -0700

PHOENIX-5221 Phoenix Kerberos Integration tests failure on Redhat Linux
---
 ...ryServerIT.java => AbstractKerberisedTest.java} | 167 ++
 .../HttpParamImpersonationQueryServerIT.java   | 239 +---
 .../phoenix/end2end/SecureQueryServerIT.java   | 244 +
 3 files changed, 73 insertions(+), 577 deletions(-)

diff --git 
a/queryserver/src/it/java/org/apache/phoenix/end2end/SecureQueryServerIT.java 
b/queryserver/src/it/java/org/apache/phoenix/end2end/AbstractKerberisedTest.java
similarity index 72%
copy from 
queryserver/src/it/java/org/apache/phoenix/end2end/SecureQueryServerIT.java
copy to 
queryserver/src/it/java/org/apache/phoenix/end2end/AbstractKerberisedTest.java
index c3ff885..8ed7ce6 100644
--- 
a/queryserver/src/it/java/org/apache/phoenix/end2end/SecureQueryServerIT.java
+++ 
b/queryserver/src/it/java/org/apache/phoenix/end2end/AbstractKerberisedTest.java
@@ -1,41 +1,16 @@
 /*
- * Licensed to the Apache Software Foundation (ASF) under one or more
- * contributor license agreements.  See the NOTICE file distributed with
- * this work for additional information regarding copyright ownership.
- * The ASF licenses this file to you under the Apache License, Version 2.0
- * (the "License"); you may not use this file except in compliance with
- * the License.  You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
+ * Licensed to the Apache Software Foundation (ASF) under one or more 
contributor license agreements. See the NOTICE
+ * file distributed with this work for additional information regarding 
copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the "License"); you may not 
use this file except in compliance with the
+ * License. You may obtain a copy of the License at 
http://www.apache.org/licenses/LICENSE-2.0 Unless required by
+ * applicable law or agreed to in writing, software distributed under the 
License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the License for the specific language
+ * governing permissions and limitations under the License.
  */
 package org.apache.phoenix.end2end;
 
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertFalse;
-import static org.junit.Assert.assertNotNull;
-import static org.junit.Assert.assertTrue;
-
-import java.io.File;
-import java.io.IOException;
-import java.lang.reflect.Field;
-import java.security.PrivilegedAction;
-import java.security.PrivilegedExceptionAction;
-import java.sql.DriverManager;
-import java.sql.ResultSet;
-import java.sql.Statement;
-import java.util.ArrayList;
-import java.util.List;
-import java.util.Map.Entry;
-import java.util.concurrent.ExecutorService;
-import java.util.concurrent.Executors;
-import java.util.concurrent.TimeUnit;
-
+import com.google.common.base.Preconditions;
+import com.google.common.collect.Maps;
 import org.apache.commons.io.FileUtils;
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
@@ -60,41 +35,69 @@ import org.apache.phoenix.queryserver.client.ThinClientUtil;
 import org.apache.phoenix.queryserver.server.QueryServer;
 import org.apache.phoenix.util.InstanceResolver;
 import org.junit.AfterClass;
-import org.junit.BeforeClass;
-import org.junit.Test;
-import org.junit.experimental.categories.Category;
 
-import com.google.common.base.Preconditions;
-import com.google.common.collect.Maps;
+import java.io.File;
+import java.io.IOException;
+import java.lang.reflect.Field;
+import java.net.InetAddress;
+import java.security.PrivilegedAction;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.TimeUnit;
 
-@Category(NeedsOwnMiniClusterTest.class)
-public class SecureQueryServerIT {
-private static final Log LOG = 
LogFactory.getLog(SecureQueryServerIT.class);
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.a

svn commit: r1826583 - in /phoenix: phoenix-docs/src/docsrc/help/ phoenix-docs/src/tools/org/h2/build/doc/ site/publish/ site/publish/language/

2018-03-12 Thread karanmehta93
Author: karanmehta93
Date: Mon Mar 12 18:06:17 2018
New Revision: 1826583

URL: http://svn.apache.org/viewvc?rev=1826583&view=rev
Log:
PHOENIX-4432 PHOENIX-672 (Grant/Revoke) documentation

Modified:
phoenix/phoenix-docs/src/docsrc/help/phoenix.csv
phoenix/phoenix-docs/src/tools/org/h2/build/doc/dictionary.txt
phoenix/site/publish/Phoenix-in-15-minutes-or-less.html
phoenix/site/publish/array_type.html
phoenix/site/publish/atomic_upsert.html
phoenix/site/publish/building.html
phoenix/site/publish/building_website.html
phoenix/site/publish/bulk_dataload.html
phoenix/site/publish/columnencoding.html
phoenix/site/publish/contributing.html
phoenix/site/publish/cursors.html
phoenix/site/publish/develop.html
phoenix/site/publish/download.html
phoenix/site/publish/dynamic_columns.html
phoenix/site/publish/explainplan.html
phoenix/site/publish/faq.html
phoenix/site/publish/flume.html
phoenix/site/publish/hive_storage_handler.html
phoenix/site/publish/index.html
phoenix/site/publish/installation.html
phoenix/site/publish/issues.html
phoenix/site/publish/joins.html
phoenix/site/publish/kafka.html
phoenix/site/publish/language/datatypes.html
phoenix/site/publish/language/functions.html
phoenix/site/publish/language/index.html
phoenix/site/publish/mailing_list.html
phoenix/site/publish/metrics.html
phoenix/site/publish/multi-tenancy.html
phoenix/site/publish/namspace_mapping.html
phoenix/site/publish/news.html
phoenix/site/publish/paged.html
phoenix/site/publish/performance.html
phoenix/site/publish/pherf.html
phoenix/site/publish/phoenix_mr.html
phoenix/site/publish/phoenix_on_emr.html
phoenix/site/publish/phoenix_orm.html
phoenix/site/publish/phoenix_python.html
phoenix/site/publish/phoenix_spark.html
phoenix/site/publish/phoenixcon.html
phoenix/site/publish/pig_integration.html
phoenix/site/publish/recent.html
phoenix/site/publish/release.html
phoenix/site/publish/release_notes.html
phoenix/site/publish/resources.html
phoenix/site/publish/roadmap.html
phoenix/site/publish/rowtimestamp.html
phoenix/site/publish/salted.html
phoenix/site/publish/secondary_indexing.html
phoenix/site/publish/sequences.html
phoenix/site/publish/server.html
phoenix/site/publish/skip_scan.html
phoenix/site/publish/source.html
phoenix/site/publish/subqueries.html
phoenix/site/publish/tablesample.html
phoenix/site/publish/team.html
phoenix/site/publish/tracing.html
phoenix/site/publish/transactions.html
phoenix/site/publish/tuning.html
phoenix/site/publish/tuning_guide.html
phoenix/site/publish/udf.html
phoenix/site/publish/update_statistics.html
phoenix/site/publish/upgrading.html
phoenix/site/publish/views.html
phoenix/site/publish/who_is_using.html

Modified: phoenix/phoenix-docs/src/docsrc/help/phoenix.csv
URL: 
http://svn.apache.org/viewvc/phoenix/phoenix-docs/src/docsrc/help/phoenix.csv?rev=1826583&r1=1826582&r2=1826583&view=diff
==
--- phoenix/phoenix-docs/src/docsrc/help/phoenix.csv (original)
+++ phoenix/phoenix-docs/src/docsrc/help/phoenix.csv Mon Mar 12 18:06:17 2018
@@ -371,6 +371,60 @@ DROP SCHEMA IF EXISTS my_schema
 DROP SCHEMA my_schema
 "
 
+"Commands","GRANT","
+GRANT {permissionString} [ON [SCHEMA schemaName] tableName] TO [GROUP] 
userString
+","
+Grant permissions at the table, schema, or user level. Permissions are managed by HBase in the hbase:acl table, so access controls must be enabled. This feature is available from Phoenix 4.14 onwards.
+
+Possible permissions are R - Read, W - Write, X - Execute, C - Create, and A - Admin.
+To enable or disable access controls, see https://hbase.apache.org/book.html#hbase.accesscontrol.configuration
+
+Permissions should be granted on base tables; they are propagated to all of the table's indexes and views.
+Group permissions apply to all users in the group, and schema permissions apply to all tables in that schema.
+Grant statements without a table/schema specified are applied at the GLOBAL level.
+
+Phoenix doesn't expose Execute ('X') functionality to end users. However, it is required for mutable tables with secondary indexes.
+
+Important Note:
+
+Every user requires 'RX' permissions on all Phoenix SYSTEM tables in order to work correctly. Users also require 'RWX' permissions on the SYSTEM.SEQUENCE table for using SEQUENCES.
+
+","
+GRANT 'RXC' TO 'User1'
+GRANT 'RWXC' TO GROUP 'Group1'
+GRANT 'A' ON Table1 TO 'User2'
+GRANT 'RWX' ON my_schema.my_table TO 'User2'
+GRANT 'A' ON SCHEMA my_schema TO 'User3'
+"
+
+"Commands&

phoenix git commit: PHOENIX-4528 PhoenixAccessController checks permissions only at table level when creating views

2018-01-17 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.3 f408f65aa -> c42d90aa3


PHOENIX-4528 PhoenixAccessController checks permissions only at table level 
when creating views


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/c42d90aa
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/c42d90aa
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/c42d90aa

Branch: refs/heads/4.x-HBase-1.3
Commit: c42d90aa34adbee65840916fd2b7215218eb1666
Parents: f408f65
Author: Karan Mehta 
Authored: Sat Jan 13 17:19:22 2018 -0800
Committer: Karan Mehta 
Committed: Wed Jan 17 11:34:48 2018 -0800

--
 .../phoenix/end2end/BasePermissionsIT.java  |  4 +
 .../phoenix/end2end/ChangePermissionsIT.java| 26 +-
 .../coprocessor/PhoenixAccessController.java| 91 +---
 3 files changed, 88 insertions(+), 33 deletions(-)
--
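The diff below factors the repeated inline double-quoting into a surroundWithDoubleQuotes helper, used when granting on the default namespace. A minimal standalone version of that helper (illustrative, outside the test class) shows the behavior — in Phoenix SQL, double quotes make an identifier literal/case-sensitive, which is needed when a name such as the default namespace is not a plain identifier:

```java
// Standalone illustration of the surroundWithDoubleQuotes helper from the diff
// below; the class name is invented for this sketch.
public class Identifiers {
    static String surroundWithDoubleQuotes(String input) {
        return "\"" + input + "\"";
    }

    public static void main(String[] args) {
        // e.g. GRANT 'C' ON SCHEMA "default" TO ...
        System.out.println(surroundWithDoubleQuotes("default")); // prints "default" including the quotes
    }
}
```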


http://git-wip-us.apache.org/repos/asf/phoenix/blob/c42d90aa/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
index 9d7ef1b..d33d538 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
@@ -746,6 +746,10 @@ public class BasePermissionsIT extends BaseTest {
 }
 }
 
+String surroundWithDoubleQuotes(String input) {
+return "\"" + input + "\"";
+}
+
 void validateAccessDeniedException(AccessDeniedException ade) {
 String msg = ade.getMessage();
 assertTrue("Exception contained unexpected message: '" + msg + "'",

http://git-wip-us.apache.org/repos/asf/phoenix/blob/c42d90aa/phoenix-core/src/it/java/org/apache/phoenix/end2end/ChangePermissionsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ChangePermissionsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ChangePermissionsIT.java
index 2bf7fe1..a30f01f 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ChangePermissionsIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ChangePermissionsIT.java
@@ -145,7 +145,7 @@ public class ChangePermissionsIT extends BasePermissionsIT {
 verifyAllowed(createSchema(SCHEMA_NAME), superUser1);
 verifyAllowed(grantPermissions("C", regularUser1, SCHEMA_NAME, 
true), superUser1);
 } else {
-verifyAllowed(grantPermissions("C", regularUser1, "\"" + 
SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE + "\"", true), superUser1);
+verifyAllowed(grantPermissions("C", regularUser1, 
surroundWithDoubleQuotes(SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE), true), 
superUser1);
 }
 
 // Create new table. Create indexes, views and view indexes on top of 
it. Verify the contents by querying it
@@ -236,7 +236,7 @@ public class ChangePermissionsIT extends BasePermissionsIT {
 verifyAllowed(createSchema(SCHEMA_NAME), superUser1);
 verifyAllowed(grantPermissions("C", regularUser1, SCHEMA_NAME, 
true), superUser1);
 } else {
-verifyAllowed(grantPermissions("C", regularUser1, "\"" + 
SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE + "\"", true), superUser1);
+verifyAllowed(grantPermissions("C", regularUser1, 
surroundWithDoubleQuotes(SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE), true), 
superUser1);
 }
 
 // Create MultiTenant Table (View Index Table should be automatically 
created)
@@ -267,4 +267,26 @@ public class ChangePermissionsIT extends BasePermissionsIT 
{
 verifyAllowed(readMultiTenantTableWithIndex(VIEW1_TABLE_NAME, "o1"), 
regularUser2);
 verifyAllowed(readMultiTenantTableWithoutIndex(VIEW2_TABLE_NAME, 
"o2"), regularUser2);
 }
+
+/**
+ * Grant RX permissions on the schema to regularUser1,
+ * Creating view on a table with that schema by regularUser1 should be 
allowed
+ */
+@Test
+public void testCreateViewOnTableWithRXPermsOnSchema() throws Exception {
+
+startNewMiniCluster();
+grantSystemTableAccess(superUser1, regularUser1, regularUser2, 
regularUser3);
+
+if(isNamespaceMapped) {
+verifyAllowed(createSchema(SCHEMA_NAME), superUser1);
+verifyAllowed(createTable(FULL_TABLE_NAME), superUser1);
+verifyAllowed(grantPermissions("RX", regularUser1, SCHEMA_NAME, 
true), superUser1);
+} else {
+verifyAllowed(createTable(FULL_TABLE_NAME), superUser1);
+verifyAllowed(grantPermissions("RX"

phoenix git commit: PHOENIX-4528 PhoenixAccessController checks permissions only at table level when creating views

2018-01-17 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 55ac93272 -> 75a7a5d5e


PHOENIX-4528 PhoenixAccessController checks permissions only at table level 
when creating views


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/75a7a5d5
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/75a7a5d5
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/75a7a5d5

Branch: refs/heads/4.x-HBase-1.2
Commit: 75a7a5d5e0e1193c4278e9bab100f835d9327b09
Parents: 55ac932
Author: Karan Mehta 
Authored: Sat Jan 13 17:19:22 2018 -0800
Committer: Karan Mehta 
Committed: Wed Jan 17 14:54:17 2018 -0800

--
 .../phoenix/end2end/BasePermissionsIT.java  |  4 +
 .../phoenix/end2end/ChangePermissionsIT.java| 26 +-
 .../coprocessor/PhoenixAccessController.java| 91 +---
 3 files changed, 88 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/75a7a5d5/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
index 9d7ef1b..d33d538 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
@@ -746,6 +746,10 @@ public class BasePermissionsIT extends BaseTest {
 }
 }
 
+String surroundWithDoubleQuotes(String input) {
+return "\"" + input + "\"";
+}
+
 void validateAccessDeniedException(AccessDeniedException ade) {
 String msg = ade.getMessage();
 assertTrue("Exception contained unexpected message: '" + msg + "'",

http://git-wip-us.apache.org/repos/asf/phoenix/blob/75a7a5d5/phoenix-core/src/it/java/org/apache/phoenix/end2end/ChangePermissionsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ChangePermissionsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ChangePermissionsIT.java
index 2bf7fe1..a30f01f 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ChangePermissionsIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ChangePermissionsIT.java
@@ -145,7 +145,7 @@ public class ChangePermissionsIT extends BasePermissionsIT {
 verifyAllowed(createSchema(SCHEMA_NAME), superUser1);
 verifyAllowed(grantPermissions("C", regularUser1, SCHEMA_NAME, 
true), superUser1);
 } else {
-verifyAllowed(grantPermissions("C", regularUser1, "\"" + 
SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE + "\"", true), superUser1);
+verifyAllowed(grantPermissions("C", regularUser1, 
surroundWithDoubleQuotes(SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE), true), 
superUser1);
 }
 
 // Create new table. Create indexes, views and view indexes on top of 
it. Verify the contents by querying it
@@ -236,7 +236,7 @@ public class ChangePermissionsIT extends BasePermissionsIT {
 verifyAllowed(createSchema(SCHEMA_NAME), superUser1);
 verifyAllowed(grantPermissions("C", regularUser1, SCHEMA_NAME, 
true), superUser1);
 } else {
-verifyAllowed(grantPermissions("C", regularUser1, "\"" + 
SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE + "\"", true), superUser1);
+verifyAllowed(grantPermissions("C", regularUser1, 
surroundWithDoubleQuotes(SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE), true), 
superUser1);
 }
 
 // Create MultiTenant Table (View Index Table should be automatically 
created)
@@ -267,4 +267,26 @@ public class ChangePermissionsIT extends BasePermissionsIT 
{
 verifyAllowed(readMultiTenantTableWithIndex(VIEW1_TABLE_NAME, "o1"), 
regularUser2);
 verifyAllowed(readMultiTenantTableWithoutIndex(VIEW2_TABLE_NAME, 
"o2"), regularUser2);
 }
+
+/**
+ * Grant RX permissions on the schema to regularUser1,
+ * Creating view on a table with that schema by regularUser1 should be 
allowed
+ */
+@Test
+public void testCreateViewOnTableWithRXPermsOnSchema() throws Exception {
+
+startNewMiniCluster();
+grantSystemTableAccess(superUser1, regularUser1, regularUser2, 
regularUser3);
+
+if(isNamespaceMapped) {
+verifyAllowed(createSchema(SCHEMA_NAME), superUser1);
+verifyAllowed(createTable(FULL_TABLE_NAME), superUser1);
+verifyAllowed(grantPermissions("RX", regularUser1, SCHEMA_NAME, 
true), superUser1);
+} else {
+verifyAllowed(createTable(FULL_TABLE_NAME), superUser1);
+verifyAllowed(grantPermissions("RX"

phoenix git commit: PHOENIX-4528 PhoenixAccessController checks permissions only at table level when creating views

2018-01-17 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/master 2d655cdc7 -> e3faa9549


PHOENIX-4528 PhoenixAccessController checks permissions only at table level 
when creating views


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/e3faa954
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/e3faa954
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/e3faa954

Branch: refs/heads/master
Commit: e3faa954952fbe6f9ea5a9e792a5d275d9193a53
Parents: 2d655cd
Author: Karan Mehta 
Authored: Wed Jan 17 15:03:25 2018 -0800
Committer: Karan Mehta 
Committed: Wed Jan 17 15:03:25 2018 -0800

--
 .../phoenix/end2end/BasePermissionsIT.java  |  4 +
 .../phoenix/end2end/ChangePermissionsIT.java| 26 +-
 .../coprocessor/PhoenixAccessController.java| 92 +---
 3 files changed, 89 insertions(+), 33 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/e3faa954/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
index 9d7ef1b..d33d538 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
@@ -746,6 +746,10 @@ public class BasePermissionsIT extends BaseTest {
 }
 }
 
+String surroundWithDoubleQuotes(String input) {
+return "\"" + input + "\"";
+}
+
 void validateAccessDeniedException(AccessDeniedException ade) {
 String msg = ade.getMessage();
 assertTrue("Exception contained unexpected message: '" + msg + "'",

http://git-wip-us.apache.org/repos/asf/phoenix/blob/e3faa954/phoenix-core/src/it/java/org/apache/phoenix/end2end/ChangePermissionsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ChangePermissionsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ChangePermissionsIT.java
index 2bf7fe1..a30f01f 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ChangePermissionsIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ChangePermissionsIT.java
@@ -145,7 +145,7 @@ public class ChangePermissionsIT extends BasePermissionsIT {
 verifyAllowed(createSchema(SCHEMA_NAME), superUser1);
 verifyAllowed(grantPermissions("C", regularUser1, SCHEMA_NAME, 
true), superUser1);
 } else {
-verifyAllowed(grantPermissions("C", regularUser1, "\"" + 
SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE + "\"", true), superUser1);
+verifyAllowed(grantPermissions("C", regularUser1, 
surroundWithDoubleQuotes(SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE), true), 
superUser1);
 }
 
 // Create new table. Create indexes, views and view indexes on top of 
it. Verify the contents by querying it
@@ -236,7 +236,7 @@ public class ChangePermissionsIT extends BasePermissionsIT {
 verifyAllowed(createSchema(SCHEMA_NAME), superUser1);
 verifyAllowed(grantPermissions("C", regularUser1, SCHEMA_NAME, 
true), superUser1);
 } else {
-verifyAllowed(grantPermissions("C", regularUser1, "\"" + 
SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE + "\"", true), superUser1);
+verifyAllowed(grantPermissions("C", regularUser1, 
surroundWithDoubleQuotes(SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE), true), 
superUser1);
 }
 
 // Create MultiTenant Table (View Index Table should be automatically 
created)
@@ -267,4 +267,26 @@ public class ChangePermissionsIT extends BasePermissionsIT 
{
 verifyAllowed(readMultiTenantTableWithIndex(VIEW1_TABLE_NAME, "o1"), 
regularUser2);
 verifyAllowed(readMultiTenantTableWithoutIndex(VIEW2_TABLE_NAME, 
"o2"), regularUser2);
 }
+
+/**
+ * Grant RX permissions on the schema to regularUser1,
+ * Creating view on a table with that schema by regularUser1 should be allowed
+ */
+@Test
+public void testCreateViewOnTableWithRXPermsOnSchema() throws Exception {
+
+startNewMiniCluster();
+grantSystemTableAccess(superUser1, regularUser1, regularUser2, regularUser3);
+
+if(isNamespaceMapped) {
+verifyAllowed(createSchema(SCHEMA_NAME), superUser1);
+verifyAllowed(createTable(FULL_TABLE_NAME), superUser1);
+verifyAllowed(grantPermissions("RX", regularUser1, SCHEMA_NAME, true), superUser1);
+} else {
+verifyAllowed(createTable(FULL_TABLE_NAME), superUser1);
+verifyAllowed(grantPermissions("RX", regularUser1

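The diff above replaces inline `"\"" + … + "\""` concatenation with a `surroundWithDoubleQuotes(...)` call. A minimal sketch of what such a helper plausibly does — the body below is an assumption, since the archive does not include Phoenix's actual implementation:

```java
public class QuoteSketch {
    // Hypothetical stand-in for the surroundWithDoubleQuotes(...) helper in
    // the diff: wraps an identifier in double quotes so it is treated as a
    // case-sensitive SQL identifier in the GRANT statement being built.
    static String surroundWithDoubleQuotes(String identifier) {
        return "\"" + identifier + "\"";
    }

    public static void main(String[] args) {
        // e.g. quoting the default schema name when namespaces are not mapped
        System.out.println(surroundWithDoubleQuotes("DEFAULT")); // "DEFAULT"
    }
}
```

The helper reads better than the repeated escape-heavy concatenation it replaces, which is presumably the point of the change.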
phoenix git commit: PHOENIX-4489 HBase Connection leak in Phoenix MR Jobs

2018-01-23 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/master 4b76b210a -> 65f91a11d


PHOENIX-4489 HBase Connection leak in Phoenix MR Jobs


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/65f91a11
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/65f91a11
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/65f91a11

Branch: refs/heads/master
Commit: 65f91a11d1bc2cc4d798ced121f47200bf0fc36c
Parents: 4b76b21
Author: Karan Mehta 
Authored: Tue Jan 23 16:07:24 2018 -0800
Committer: Karan Mehta 
Committed: Tue Jan 23 16:07:24 2018 -0800

--
 .../org/apache/phoenix/mapreduce/PhoenixInputFormat.java  | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/65f91a11/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
index 2871809..9f16cc1 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
@@ -30,7 +30,6 @@ import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HRegionLocation;
 import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
 import org.apache.hadoop.hbase.client.RegionLocator;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.util.Bytes;
@@ -48,6 +47,7 @@ import org.apache.phoenix.iterate.MapReduceParallelScanGrouper;
 import org.apache.phoenix.jdbc.PhoenixStatement;
 import org.apache.phoenix.mapreduce.util.ConnectionUtil;
 import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
+import org.apache.phoenix.query.HBaseFactoryProvider;
 import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.util.PhoenixRuntime;

@@ -95,13 +95,13 @@ public class PhoenixInputFormat
 psplits = Lists.newArrayListWithExpectedSize(splits.size());
 for (List scans : qplan.getScans()) {
 // Get the region location
@@ -131,8 +131,7 @@ public class PhoenixInputFormat
[remaining hunk content truncated in the archive; generic type parameters were stripped by the archiver]

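The leak described in PHOENIX-4489 came from an HBase connection opened while generating input splits but never closed. The import changes (dropping `ConnectionFactory`, adding `HBaseFactoryProvider`) and the 0.98 hunk later in this thread suggest the fix scopes the connection to a try-with-resources block. A self-contained sketch of that pattern — `RegionSource` is a toy stand-in, not the HBase API:

```java
import java.util.ArrayList;
import java.util.List;

public class SplitGeneration {
    // Toy stand-in for an HBase connection; hypothetical, not the real API.
    static class RegionSource implements AutoCloseable {
        static int openCount = 0;
        RegionSource() { openCount++; }
        String locate(String key) { return "region-" + key; }
        @Override public void close() { openCount--; }
    }

    // Mirrors the shape of the fix: declare the result outside the block,
    // create the connection in try-with-resources so it is always closed,
    // even if a lookup inside the loop throws.
    static List<String> generateSplits(List<String> keys) {
        List<String> psplits;
        try (RegionSource connection = new RegionSource()) {
            psplits = new ArrayList<>(keys.size());
            for (String key : keys) {
                psplits.add(connection.locate(key));
            }
        } // connection closed here on every path
        return psplits;
    }

    public static void main(String[] args) {
        System.out.println(generateSplits(List.of("a", "b"))); // [region-a, region-b]
        System.out.println(RegionSource.openCount);            // 0 -> nothing leaked
    }
}
```

The key design point is that the result list outlives the connection, so the declaration moves outside the `try` while the connection stays inside it.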
phoenix git commit: PHOENIX-4489 HBase Connection leak in Phoenix MR Jobs

2018-01-23 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/5.x-HBase-2.0 844cb123b -> 668c36ca6


PHOENIX-4489 HBase Connection leak in Phoenix MR Jobs


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/668c36ca
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/668c36ca
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/668c36ca

Branch: refs/heads/5.x-HBase-2.0
Commit: 668c36ca6f7b74b793bc97daf0f72af72d68
Parents: 844cb12
Author: Karan Mehta 
Authored: Tue Jan 23 16:07:24 2018 -0800
Committer: Karan Mehta 
Committed: Tue Jan 23 19:48:40 2018 -0800

--
 .../org/apache/phoenix/mapreduce/PhoenixInputFormat.java  | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/668c36ca/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
index ede6ed9..455157e 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
@@ -30,7 +30,6 @@ import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HRegionLocation;
 import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
 import org.apache.hadoop.hbase.client.RegionLocator;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.mapreduce.RegionSizeCalculator;
@@ -48,6 +47,7 @@ import org.apache.phoenix.iterate.MapReduceParallelScanGrouper;
 import org.apache.phoenix.jdbc.PhoenixStatement;
 import org.apache.phoenix.mapreduce.util.ConnectionUtil;
 import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
+import org.apache.phoenix.query.HBaseFactoryProvider;
 import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.util.PhoenixRuntime;

@@ -95,13 +95,13 @@ public class PhoenixInputFormat
 psplits = Lists.newArrayListWithExpectedSize(splits.size());
 for (List scans : qplan.getScans()) {
 // Get the region location
@@ -131,8 +131,7 @@ public class PhoenixInputFormat
[remaining hunk content truncated in the archive; generic type parameters were stripped by the archiver]

phoenix git commit: PHOENIX-4489 HBase Connection leak in Phoenix MR Jobs

2018-01-23 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.3 6fb41c9b6 -> a37a30a6c


PHOENIX-4489 HBase Connection leak in Phoenix MR Jobs


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/a37a30a6
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/a37a30a6
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/a37a30a6

Branch: refs/heads/4.x-HBase-1.3
Commit: a37a30a6cc614c9264c78ad0bd08417e7450e3e4
Parents: 6fb41c9
Author: Karan Mehta 
Authored: Tue Jan 23 16:07:24 2018 -0800
Committer: Karan Mehta 
Committed: Tue Jan 23 23:09:18 2018 -0800

--
 .../org/apache/phoenix/mapreduce/PhoenixInputFormat.java  | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/a37a30a6/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
index 2871809..9f16cc1 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
@@ -30,7 +30,6 @@ import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HRegionLocation;
 import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
 import org.apache.hadoop.hbase.client.RegionLocator;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.util.Bytes;
@@ -48,6 +47,7 @@ import org.apache.phoenix.iterate.MapReduceParallelScanGrouper;
 import org.apache.phoenix.jdbc.PhoenixStatement;
 import org.apache.phoenix.mapreduce.util.ConnectionUtil;
 import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
+import org.apache.phoenix.query.HBaseFactoryProvider;
 import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.util.PhoenixRuntime;

@@ -95,13 +95,13 @@ public class PhoenixInputFormat
 psplits = Lists.newArrayListWithExpectedSize(splits.size());
 for (List scans : qplan.getScans()) {
 // Get the region location
@@ -131,8 +131,7 @@ public class PhoenixInputFormat
[remaining hunk content truncated in the archive; generic type parameters were stripped by the archiver]

phoenix git commit: PHOENIX-4489 HBase Connection leak in Phoenix MR Jobs

2018-01-23 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 24547536b -> 77f7506c3


PHOENIX-4489 HBase Connection leak in Phoenix MR Jobs


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/77f7506c
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/77f7506c
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/77f7506c

Branch: refs/heads/4.x-HBase-1.2
Commit: 77f7506c3905c60e2a71d9a9f776631e4a0bcc7c
Parents: 2454753
Author: Karan Mehta 
Authored: Tue Jan 23 16:07:24 2018 -0800
Committer: Karan Mehta 
Committed: Tue Jan 23 23:14:23 2018 -0800

--
 .../org/apache/phoenix/mapreduce/PhoenixInputFormat.java  | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/77f7506c/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
index 2871809..9f16cc1 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
@@ -30,7 +30,6 @@ import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HRegionLocation;
 import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
 import org.apache.hadoop.hbase.client.RegionLocator;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.util.Bytes;
@@ -48,6 +47,7 @@ import org.apache.phoenix.iterate.MapReduceParallelScanGrouper;
 import org.apache.phoenix.jdbc.PhoenixStatement;
 import org.apache.phoenix.mapreduce.util.ConnectionUtil;
 import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
+import org.apache.phoenix.query.HBaseFactoryProvider;
 import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.util.PhoenixRuntime;

@@ -95,13 +95,13 @@ public class PhoenixInputFormat
 psplits = Lists.newArrayListWithExpectedSize(splits.size());
 for (List scans : qplan.getScans()) {
 // Get the region location
@@ -131,8 +131,7 @@ public class PhoenixInputFormat
[remaining hunk content truncated in the archive; generic type parameters were stripped by the archiver]

phoenix git commit: PHOENIX-4489 HBase Connection leak in Phoenix MR Jobs

2018-01-23 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 64c5cb649 -> aa9c7d6e4


PHOENIX-4489 HBase Connection leak in Phoenix MR Jobs


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/aa9c7d6e
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/aa9c7d6e
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/aa9c7d6e

Branch: refs/heads/4.x-HBase-1.1
Commit: aa9c7d6e4e6306e01b559bd7fc10e5d09a7172c5
Parents: 64c5cb6
Author: Karan Mehta 
Authored: Tue Jan 23 16:07:24 2018 -0800
Committer: Karan Mehta 
Committed: Tue Jan 23 23:19:39 2018 -0800

--
 .../org/apache/phoenix/mapreduce/PhoenixInputFormat.java  | 10 +-
 1 file changed, 5 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/aa9c7d6e/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
index 2871809..9f16cc1 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
@@ -30,7 +30,6 @@ import org.apache.commons.logging.LogFactory;
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HRegionLocation;
 import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.ConnectionFactory;
 import org.apache.hadoop.hbase.client.RegionLocator;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.util.Bytes;
@@ -48,6 +47,7 @@ import org.apache.phoenix.iterate.MapReduceParallelScanGrouper;
 import org.apache.phoenix.jdbc.PhoenixStatement;
 import org.apache.phoenix.mapreduce.util.ConnectionUtil;
 import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
+import org.apache.phoenix.query.HBaseFactoryProvider;
 import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.util.PhoenixRuntime;

@@ -95,13 +95,13 @@ public class PhoenixInputFormat
 psplits = Lists.newArrayListWithExpectedSize(splits.size());
 for (List scans : qplan.getScans()) {
 // Get the region location
@@ -131,8 +131,7 @@ public class PhoenixInputFormat
[remaining hunk content truncated in the archive; generic type parameters were stripped by the archiver]

phoenix git commit: PHOENIX-4489 HBase Connection leak in Phoenix MR Jobs

2018-01-23 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-0.98 47e8262d4 -> 42a21a950


PHOENIX-4489 HBase Connection leak in Phoenix MR Jobs


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/42a21a95
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/42a21a95
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/42a21a95

Branch: refs/heads/4.x-HBase-0.98
Commit: 42a21a95040b54eb63affaf75d12c0436679267b
Parents: 47e8262
Author: Karan Mehta 
Authored: Tue Jan 23 23:35:38 2018 -0800
Committer: Karan Mehta 
Committed: Tue Jan 23 23:35:38 2018 -0800

--
 .../apache/phoenix/mapreduce/PhoenixInputFormat.java| 12 +++-
 1 file changed, 7 insertions(+), 5 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/42a21a95/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
index 4e3b816..ad77f7f 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
@@ -31,7 +31,6 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HRegionLocation;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.HConnection;
-import org.apache.hadoop.hbase.client.HConnectionManager;
 import org.apache.hadoop.hbase.client.HTable;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.util.Bytes;
@@ -49,6 +48,7 @@ import org.apache.phoenix.iterate.MapReduceParallelScanGrouper;
 import org.apache.phoenix.jdbc.PhoenixStatement;
 import org.apache.phoenix.mapreduce.util.ConnectionUtil;
 import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
+import org.apache.phoenix.query.HBaseFactoryProvider;
 import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.util.PhoenixRuntime;

@@ -96,14 +96,16 @@ public class PhoenixInputFormat
+final List psplits;
+try (HConnection connection =
+        HBaseFactoryProvider.getHConnectionFactory().createConnection(config)) {
 String tableName = qplan.getTableRef().getTable().getPhysicalName().toString();
 HTable table = new HTable(config, tableName);
 RegionSizeCalculator sizeCalculator = new RegionSizeCalculator(table);

-final List psplits = Lists.newArrayListWithExpectedSize(splits.size());
+psplits = Lists.newArrayListWithExpectedSize(splits.size());
 for (List scans : qplan.getScans()) {
 // Get the region location
 HRegionLocation location = connection.getRegionLocation(
@@ -133,8 +135,7 @@ public class PhoenixInputFormat
[remaining hunk content truncated in the archive; generic type parameters were stripped by the archiver]

phoenix git commit: PHOENIX-4805 Move Avatica version to 1.12 for PQS

2018-07-12 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/master 288322678 -> da2743027


PHOENIX-4805 Move Avatica version to 1.12 for PQS


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/da274302
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/da274302
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/da274302

Branch: refs/heads/master
Commit: da2743027582de3f4b6001be8ac1eadf4f008174
Parents: 2883226
Author: Karan Mehta 
Authored: Fri Jul 6 13:35:52 2018 -0700
Committer: Karan Mehta 
Committed: Thu Jul 12 09:24:24 2018 -0700

--
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/da274302/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 13f137e..075e736 100644
--- a/pom.xml
+++ b/pom.xml
@@ -98,7 +98,7 @@
 
 1.6
 2.1.2
-1.10.0
+1.12.0
 8.1.7.v20120910
 0.14.0-incubating
 2.0.2


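The one-line pom.xml change above bumps a Maven version property from 1.10.0 to 1.12.0. The archiver stripped the XML element names from the diff, so the property name below is an assumption; this fragment only sketches how such a property is typically declared and consumed in a POM:

```xml
<!-- Sketch under assumptions: the property name avatica.version is a guess,
     since the archive shows only the values. -->
<properties>
  <avatica.version>1.12.0</avatica.version>
</properties>

<dependencies>
  <dependency>
    <groupId>org.apache.calcite.avatica</groupId>
    <artifactId>avatica</artifactId>
    <!-- the dependency picks up the bumped value via interpolation -->
    <version>${avatica.version}</version>
  </dependency>
</dependencies>
```

Centralizing the version in one property is what makes this a one-line change across every module that depends on Avatica.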

phoenix git commit: PHOENIX-4805 Move Avatica version to 1.12 for PQS

2018-07-12 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.3 ad8867522 -> c729db032


PHOENIX-4805 Move Avatica version to 1.12 for PQS


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/c729db03
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/c729db03
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/c729db03

Branch: refs/heads/4.x-HBase-1.3
Commit: c729db032d299f26370e8f9c366fccd3bcfc0973
Parents: ad88675
Author: Karan Mehta 
Authored: Fri Jul 6 13:35:52 2018 -0700
Committer: Karan Mehta 
Committed: Thu Jul 12 09:46:56 2018 -0700

--
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/c729db03/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 859157d..7e74e63 100644
--- a/pom.xml
+++ b/pom.xml
@@ -98,7 +98,7 @@
 
 1.6
 2.1.2
-1.10.0
+1.12.0
 8.1.7.v20120910
 0.14.0-incubating
 2.0.2



phoenix git commit: PHOENIX-4805 Move Avatica version to 1.12 for PQS

2018-07-12 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 2920dfa19 -> 17b24a55c


PHOENIX-4805 Move Avatica version to 1.12 for PQS


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/17b24a55
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/17b24a55
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/17b24a55

Branch: refs/heads/4.x-HBase-1.2
Commit: 17b24a55c7215e83953216f1608d6713db7694c4
Parents: 2920dfa
Author: Karan Mehta 
Authored: Fri Jul 6 13:35:52 2018 -0700
Committer: Karan Mehta 
Committed: Thu Jul 12 09:48:04 2018 -0700

--
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/17b24a55/pom.xml
--
diff --git a/pom.xml b/pom.xml
index 884cb51..e8d0a51 100644
--- a/pom.xml
+++ b/pom.xml
@@ -98,7 +98,7 @@
 
 1.6
 2.1.2
-1.10.0
+1.12.0
 8.1.7.v20120910
 0.14.0-incubating
 2.0.2



phoenix git commit: PHOENIX-4805 Move Avatica version to 1.12 for PQS

2018-07-12 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.1 aa3ee877d -> 0c3aa09fc


PHOENIX-4805 Move Avatica version to 1.12 for PQS


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/0c3aa09f
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/0c3aa09f
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/0c3aa09f

Branch: refs/heads/4.x-HBase-1.1
Commit: 0c3aa09fc7838dc61253c55817a7383e8142eecd
Parents: aa3ee87
Author: Karan Mehta 
Authored: Fri Jul 6 13:35:52 2018 -0700
Committer: Karan Mehta 
Committed: Thu Jul 12 10:26:46 2018 -0700

--
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/0c3aa09f/pom.xml
--
diff --git a/pom.xml b/pom.xml
index bf8b7e3..9252dee 100644
--- a/pom.xml
+++ b/pom.xml
@@ -98,7 +98,7 @@
 
 1.6
 2.1.2
-1.10.0
+1.12.0
 8.1.7.v20120910
 0.14.0-incubating
 2.0.2



phoenix git commit: PHOENIX-4764 Cleanup metadata of child views for a base table that has been dropped

2018-10-26 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/master 4eba144c9 -> 9f224a18b


PHOENIX-4764 Cleanup metadata of child views for a base table that has been dropped


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/9f224a18
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/9f224a18
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/9f224a18

Branch: refs/heads/master
Commit: 9f224a18b59fdcc323d661af3b307411c84c4a4c
Parents: 4eba144
Author: Kadir 
Authored: Wed Sep 26 23:32:31 2018 -0700
Committer: Karan Mehta 
Committed: Fri Oct 26 14:34:23 2018 -0700

--
 .../phoenix/end2end/BasePermissionsIT.java  |   4 +-
 .../phoenix/end2end/DropTableWithViewsIT.java   | 151 ++
 .../end2end/QueryDatabaseMetaDataIT.java|   4 +
 .../end2end/TenantSpecificTablesDDLIT.java  |   4 +-
 .../coprocessor/MetaDataEndpointImpl.java   |  46 ++-
 .../phoenix/coprocessor/TaskRegionObserver.java | 298 +++
 .../phoenix/jdbc/PhoenixDatabaseMetaData.java   |   9 +-
 .../query/ConnectionQueryServicesImpl.java  |  20 +-
 .../query/ConnectionlessQueryServicesImpl.java  |   9 +
 .../apache/phoenix/query/QueryConstants.java|  62 ++--
 .../org/apache/phoenix/query/QueryServices.java |   6 +
 .../phoenix/query/QueryServicesOptions.java |   4 +
 .../java/org/apache/phoenix/schema/PTable.java  |  31 +-
 .../phoenix/schema/stats/StatisticsUtil.java|   2 +
 .../org/apache/phoenix/util/SchemaUtil.java |  10 +
 .../java/org/apache/phoenix/query/BaseTest.java |   1 +
 16 files changed, 620 insertions(+), 41 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/9f224a18/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
index 515de47..81a68b4 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
@@ -429,7 +429,7 @@ public class BasePermissionsIT extends BaseTest {
 @Override
 public Object run() throws Exception {
 try (Connection conn = getConnection(); Statement stmt = conn.createStatement();) {
-assertFalse(stmt.execute("DROP TABLE IF EXISTS " + tableName));
+assertFalse(stmt.execute(String.format("DROP TABLE IF EXISTS %s CASCADE", tableName)));
 }
 return null;
 }
@@ -654,7 +654,7 @@ public class BasePermissionsIT extends BaseTest {
 @Override
 public Object run() throws Exception {
 try (Connection conn = getConnection(); Statement stmt = conn.createStatement();) {
-assertFalse(stmt.execute("DROP VIEW " + viewName));
+assertFalse(stmt.execute(String.format("DROP VIEW %s CASCADE", viewName)));
 }
 return null;
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/9f224a18/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
new file mode 100644
index 000..9502218
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.fail;
+
+import java.sql.Connection;
+import java

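The BasePermissionsIT changes above switch from string concatenation to `String.format` and append `CASCADE`, so dropping a base table also removes its dependent views, which is the point of PHOENIX-4764. A small sketch of just the statement construction (the table and view names are placeholders):

```java
public class DropStatements {
    // Build the DROP statements the way the updated tests do; CASCADE tells
    // the database to also drop objects (views) that depend on the target.
    static String dropTable(String tableName) {
        return String.format("DROP TABLE IF EXISTS %s CASCADE", tableName);
    }

    static String dropView(String viewName) {
        return String.format("DROP VIEW %s CASCADE", viewName);
    }

    public static void main(String[] args) {
        System.out.println(dropTable("S1.T1")); // DROP TABLE IF EXISTS S1.T1 CASCADE
        System.out.println(dropView("S1.V1"));  // DROP VIEW S1.V1 CASCADE
    }
}
```

`String.format` keeps the keyword layout readable in one literal, which matters once a trailing clause like `CASCADE` is added after the interpolated name.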
phoenix git commit: PHOENIX-4764 Cleanup metadata of child views for a base table that has been dropped

2018-10-26 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.4 b6361c4ae -> 90afa2dd9


PHOENIX-4764 Cleanup metadata of child views for a base table that has been dropped


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/90afa2dd
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/90afa2dd
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/90afa2dd

Branch: refs/heads/4.x-HBase-1.4
Commit: 90afa2dd9c2cb0a7cefb154de7f3db1860a58d69
Parents: b6361c4
Author: Kadir 
Authored: Wed Sep 26 23:32:31 2018 -0700
Committer: Karan Mehta 
Committed: Fri Oct 26 14:39:04 2018 -0700

--
 .../phoenix/end2end/BasePermissionsIT.java  |   4 +-
 .../phoenix/end2end/DropTableWithViewsIT.java   | 151 ++
 .../end2end/QueryDatabaseMetaDataIT.java|   4 +
 .../end2end/TenantSpecificTablesDDLIT.java  |   4 +-
 .../coprocessor/MetaDataEndpointImpl.java   |  46 ++-
 .../phoenix/coprocessor/TaskRegionObserver.java | 292 +++
 .../phoenix/jdbc/PhoenixDatabaseMetaData.java   |   9 +-
 .../query/ConnectionQueryServicesImpl.java  |  20 +-
 .../query/ConnectionlessQueryServicesImpl.java  |   9 +
 .../apache/phoenix/query/QueryConstants.java|  17 +-
 .../org/apache/phoenix/query/QueryServices.java |   6 +
 .../phoenix/query/QueryServicesOptions.java |   4 +
 .../java/org/apache/phoenix/schema/PTable.java  |  31 +-
 .../phoenix/schema/stats/StatisticsUtil.java|   2 +
 .../org/apache/phoenix/util/SchemaUtil.java |  10 +
 .../java/org/apache/phoenix/query/BaseTest.java |   1 +
 16 files changed, 589 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/90afa2dd/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
index 88a942e..932ce9f 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
@@ -428,7 +428,7 @@ public class BasePermissionsIT extends BaseTest {
 @Override
 public Object run() throws Exception {
 try (Connection conn = getConnection(); Statement stmt = conn.createStatement();) {
-assertFalse(stmt.execute("DROP TABLE IF EXISTS " + tableName));
+assertFalse(stmt.execute(String.format("DROP TABLE IF EXISTS %s CASCADE", tableName)));
 }
 return null;
 }
@@ -653,7 +653,7 @@ public class BasePermissionsIT extends BaseTest {
 @Override
 public Object run() throws Exception {
 try (Connection conn = getConnection(); Statement stmt = conn.createStatement();) {
-assertFalse(stmt.execute("DROP VIEW " + viewName));
+assertFalse(stmt.execute(String.format("DROP VIEW %s CASCADE", viewName)));
 }
 return null;
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/90afa2dd/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
new file mode 100644
index 000..9502218
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.fail;
+
+import java.sql.Connection;

phoenix git commit: PHOENIX-4764 Cleanup metadata of child views for a base table that has been dropped

2018-10-26 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.3 a48e42e7f -> 3903ad768


PHOENIX-4764 Cleanup metadata of child views for a base table that has been dropped


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/3903ad76
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/3903ad76
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/3903ad76

Branch: refs/heads/4.x-HBase-1.3
Commit: 3903ad768496d0a6517298a8f196863b87290e72
Parents: a48e42e
Author: Kadir 
Authored: Wed Sep 26 23:32:31 2018 -0700
Committer: Karan Mehta 
Committed: Fri Oct 26 14:40:48 2018 -0700

--
 .../phoenix/end2end/BasePermissionsIT.java  |   4 +-
 .../phoenix/end2end/DropTableWithViewsIT.java   | 151 ++
 .../end2end/QueryDatabaseMetaDataIT.java|   4 +
 .../end2end/TenantSpecificTablesDDLIT.java  |   4 +-
 .../coprocessor/MetaDataEndpointImpl.java   |  46 ++-
 .../phoenix/coprocessor/TaskRegionObserver.java | 292 +++
 .../phoenix/jdbc/PhoenixDatabaseMetaData.java   |   9 +-
 .../query/ConnectionQueryServicesImpl.java  |  20 +-
 .../query/ConnectionlessQueryServicesImpl.java  |   9 +
 .../apache/phoenix/query/QueryConstants.java|  17 +-
 .../org/apache/phoenix/query/QueryServices.java |   6 +
 .../phoenix/query/QueryServicesOptions.java |   4 +
 .../java/org/apache/phoenix/schema/PTable.java  |  31 +-
 .../phoenix/schema/stats/StatisticsUtil.java|   2 +
 .../org/apache/phoenix/util/SchemaUtil.java |  10 +
 .../java/org/apache/phoenix/query/BaseTest.java |   1 +
 16 files changed, 589 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/3903ad76/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
index 88a942e..932ce9f 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
@@ -428,7 +428,7 @@ public class BasePermissionsIT extends BaseTest {
 @Override
 public Object run() throws Exception {
 try (Connection conn = getConnection(); Statement stmt = conn.createStatement();) {
-assertFalse(stmt.execute("DROP TABLE IF EXISTS " + tableName));
+assertFalse(stmt.execute(String.format("DROP TABLE IF EXISTS %s CASCADE", tableName)));
 }
 return null;
 }
@@ -653,7 +653,7 @@ public class BasePermissionsIT extends BaseTest {
 @Override
 public Object run() throws Exception {
 try (Connection conn = getConnection(); Statement stmt = conn.createStatement();) {
-assertFalse(stmt.execute("DROP VIEW " + viewName));
+assertFalse(stmt.execute(String.format("DROP VIEW %s CASCADE", viewName)));
 }
 return null;
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/3903ad76/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
new file mode 100644
index 000..9502218
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.fail;
+
+import java.sql.Connection;

phoenix git commit: PHOENIX-4764 Cleanup metadata of child views for a base table that has been dropped

2018-10-26 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 2fbf65a05 -> 49da7216d


PHOENIX-4764 Cleanup metadata of child views for a base table that has been 
dropped


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/49da7216
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/49da7216
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/49da7216

Branch: refs/heads/4.x-HBase-1.2
Commit: 49da7216da923452b91faa23733f6851ad6317f6
Parents: 2fbf65a
Author: Kadir 
Authored: Wed Sep 26 23:32:31 2018 -0700
Committer: Karan Mehta 
Committed: Fri Oct 26 14:41:14 2018 -0700

--
 .../phoenix/end2end/BasePermissionsIT.java  |   4 +-
 .../phoenix/end2end/DropTableWithViewsIT.java   | 151 ++
 .../end2end/QueryDatabaseMetaDataIT.java|   4 +
 .../end2end/TenantSpecificTablesDDLIT.java  |   4 +-
 .../coprocessor/MetaDataEndpointImpl.java   |  46 ++-
 .../phoenix/coprocessor/TaskRegionObserver.java | 292 +++
 .../phoenix/jdbc/PhoenixDatabaseMetaData.java   |   9 +-
 .../query/ConnectionQueryServicesImpl.java  |  20 +-
 .../query/ConnectionlessQueryServicesImpl.java  |   9 +
 .../apache/phoenix/query/QueryConstants.java|  17 +-
 .../org/apache/phoenix/query/QueryServices.java |   6 +
 .../phoenix/query/QueryServicesOptions.java |   4 +
 .../java/org/apache/phoenix/schema/PTable.java  |  31 +-
 .../phoenix/schema/stats/StatisticsUtil.java|   2 +
 .../org/apache/phoenix/util/SchemaUtil.java |  10 +
 .../java/org/apache/phoenix/query/BaseTest.java |   1 +
 16 files changed, 589 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/49da7216/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
index 88a942e..932ce9f 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java
@@ -428,7 +428,7 @@ public class BasePermissionsIT extends BaseTest {
 @Override
 public Object run() throws Exception {
 try (Connection conn = getConnection(); Statement stmt = conn.createStatement();) {
-assertFalse(stmt.execute("DROP TABLE IF EXISTS " + tableName));
+assertFalse(stmt.execute(String.format("DROP TABLE IF EXISTS %s CASCADE", tableName)));
 }
 return null;
 }
@@ -653,7 +653,7 @@ public class BasePermissionsIT extends BaseTest {
 @Override
 public Object run() throws Exception {
 try (Connection conn = getConnection(); Statement stmt = conn.createStatement();) {
-assertFalse(stmt.execute("DROP VIEW " + viewName));
+assertFalse(stmt.execute(String.format("DROP VIEW %s CASCADE", viewName)));
 }
 return null;
 }
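
[Editorial note: the hunks above replace string concatenation with String.format and add a CASCADE clause so that child views are dropped together with the base table. A minimal, self-contained sketch of that SQL-building pattern follows; the helper and table names are hypothetical and only the DDL text mirrors the diff.]

```java
public class CascadeDdl {
    // Build the DROP TABLE DDL with String.format, as in the patched test.
    static String dropTableDdl(String tableName) {
        return String.format("DROP TABLE IF EXISTS %s CASCADE", tableName);
    }

    // Build the DROP VIEW DDL the same way.
    static String dropViewDdl(String viewName) {
        return String.format("DROP VIEW %s CASCADE", viewName);
    }

    public static void main(String[] args) {
        // Prints: DROP TABLE IF EXISTS S.T CASCADE
        System.out.println(dropTableDdl("S.T"));
        // Prints: DROP VIEW S.V CASCADE
        System.out.println(dropViewDdl("S.V"));
    }
}
```

The resulting strings would be passed to `Statement.execute` over a Phoenix JDBC connection, as the diff does.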

http://git-wip-us.apache.org/repos/asf/phoenix/blob/49da7216/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
new file mode 100644
index 000..9502218
--- /dev/null
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
@@ -0,0 +1,151 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.end2end;
+
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.fail;
+
+import java.sql.Connection;

phoenix git commit: PHOENIX-4764 Cleanup metadata of child views for a base table that has been dropped (addendum)

2018-10-30 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/master 76ffd8af1 -> fc550666d


PHOENIX-4764 Cleanup metadata of child views for a base table that has been 
dropped (addendum)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/fc550666
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/fc550666
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/fc550666

Branch: refs/heads/master
Commit: fc550666d871d2d472bdbb0496650373bed4edad
Parents: 76ffd8a
Author: Kadir 
Authored: Mon Oct 29 21:14:33 2018 -0700
Committer: Karan Mehta 
Committed: Tue Oct 30 18:30:55 2018 -0700

--
 .../end2end/MigrateSystemTablesToSystemNamespaceIT.java   | 4 ++--
 .../phoenix/end2end/SystemCatalogCreationOnConnectionIT.java  | 4 ++--
 .../org/apache/phoenix/monitoring/BasePhoenixMetricsIT.java   | 5 -
 .../org/apache/phoenix/coprocessor/TaskRegionObserver.java| 7 +--
 .../org/apache/phoenix/query/ConnectionQueryServicesImpl.java | 2 +-
 .../src/main/java/org/apache/phoenix/query/QueryServices.java | 2 ++
 .../java/org/apache/phoenix/query/QueryServicesOptions.java   | 3 ++-
 7 files changed, 18 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/fc550666/phoenix-core/src/it/java/org/apache/phoenix/end2end/MigrateSystemTablesToSystemNamespaceIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MigrateSystemTablesToSystemNamespaceIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MigrateSystemTablesToSystemNamespaceIT.java
index ffac4d6..b6f061e 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MigrateSystemTablesToSystemNamespaceIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MigrateSystemTablesToSystemNamespaceIT.java
@@ -62,10 +62,10 @@ public class MigrateSystemTablesToSystemNamespaceIT extends BaseTest {
 
 private static final Set<String> PHOENIX_SYSTEM_TABLES = new HashSet<>(Arrays.asList(
 "SYSTEM.CATALOG", "SYSTEM.SEQUENCE", "SYSTEM.STATS", "SYSTEM.FUNCTION",
-"SYSTEM.MUTEX","SYSTEM.LOG", "SYSTEM.CHILD_LINK"));
+"SYSTEM.MUTEX","SYSTEM.LOG", "SYSTEM.CHILD_LINK", "SYSTEM.TASK"));
 private static final Set<String> PHOENIX_NAMESPACE_MAPPED_SYSTEM_TABLES = new HashSet<>(
 Arrays.asList("SYSTEM:CATALOG", "SYSTEM:SEQUENCE", "SYSTEM:STATS", "SYSTEM:FUNCTION",
-"SYSTEM:MUTEX","SYSTEM:LOG", "SYSTEM:CHILD_LINK"));
+"SYSTEM:MUTEX","SYSTEM:LOG", "SYSTEM:CHILD_LINK", "SYSTEM:TASK"));
 private static final String SCHEMA_NAME = "MIGRATETEST";
 private static final String TABLE_NAME =
 SCHEMA_NAME + "." + MigrateSystemTablesToSystemNamespaceIT.class.getSimpleName().toUpperCase();
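
[Editorial note: the addendum registers the new SYSTEM.TASK table under both naming schemes. A sketch under an assumption drawn from the pairs in the hunk above (SYSTEM.CATALOG -> SYSTEM:CATALOG, SYSTEM.TASK -> SYSTEM:TASK): the namespace-mapped name replaces the '.' separator with ':'. The helper name is hypothetical, not part of the Phoenix API.]

```java
public class NamespaceMapping {
    // Derive the namespace-mapped form of a Phoenix system table name by
    // swapping the '.' schema separator for ':' (assumption based on the
    // two table-name sets in the diff above).
    static String toNamespaceMappedName(String tableName) {
        return tableName.replace('.', ':');
    }

    public static void main(String[] args) {
        // Prints: SYSTEM:TASK
        System.out.println(toNamespaceMappedName("SYSTEM.TASK"));
    }
}
```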

http://git-wip-us.apache.org/repos/asf/phoenix/blob/fc550666/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
index aa2d971..a1685c44 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
@@ -72,11 +72,11 @@ public class SystemCatalogCreationOnConnectionIT {
 
 private static final Set<String> PHOENIX_SYSTEM_TABLES = new HashSet<>(Arrays.asList(
   "SYSTEM.CATALOG", "SYSTEM.SEQUENCE", "SYSTEM.STATS", "SYSTEM.FUNCTION",
-  "SYSTEM.MUTEX", "SYSTEM.LOG", "SYSTEM.CHILD_LINK"));
+  "SYSTEM.MUTEX", "SYSTEM.LOG", "SYSTEM.CHILD_LINK", "SYSTEM.TASK"));
 
 private static final Set<String> PHOENIX_NAMESPACE_MAPPED_SYSTEM_TABLES = new HashSet<>(
   Arrays.asList("SYSTEM:CATALOG", "SYSTEM:SEQUENCE", "SYSTEM:STATS", "SYSTEM:FUNCTION",
-"SYSTEM:MUTEX", "SYSTEM:LOG", "SYSTEM:CHILD_LINK"));
+"SYSTEM:MUTEX", "SYSTEM:LOG", "SYSTEM:CHILD_LINK", "SYSTEM:TASK"));
 
 private static class PhoenixSysCatCreationServices extends ConnectionQueryServicesImpl {
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/fc550666/phoenix-core/src/it/java/org/apache/phoenix/monitoring/BasePhoenixMetricsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/monitoring/BasePhoenixMetricsIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/monitoring/BasePhoenixMetricsIT.java
index 5c016f6..07240f1 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/monitoring/BasePhoenixMetricsIT.java
+++ 
b/phoenix-core/src

phoenix git commit: PHOENIX-4764 Cleanup metadata of child views for a base table that has been dropped (addendum)

2018-10-30 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.4 36c21cab1 -> 214f72a56


PHOENIX-4764 Cleanup metadata of child views for a base table that has been 
dropped (addendum)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/214f72a5
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/214f72a5
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/214f72a5

Branch: refs/heads/4.x-HBase-1.4
Commit: 214f72a56a7d0dc87ed7f9442927f37218a98ff5
Parents: 36c21ca
Author: Kadir 
Authored: Tue Oct 30 17:47:26 2018 -0700
Committer: Karan Mehta 
Committed: Tue Oct 30 18:32:45 2018 -0700

--
 .../end2end/MigrateSystemTablesToSystemNamespaceIT.java   | 4 ++--
 .../phoenix/end2end/SystemCatalogCreationOnConnectionIT.java  | 4 ++--
 .../org/apache/phoenix/monitoring/BasePhoenixMetricsIT.java   | 4 +++-
 .../org/apache/phoenix/coprocessor/TaskRegionObserver.java| 7 +--
 .../org/apache/phoenix/query/ConnectionQueryServicesImpl.java | 2 +-
 .../src/main/java/org/apache/phoenix/query/QueryServices.java | 2 ++
 .../java/org/apache/phoenix/query/QueryServicesOptions.java   | 3 ++-
 7 files changed, 17 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/214f72a5/phoenix-core/src/it/java/org/apache/phoenix/end2end/MigrateSystemTablesToSystemNamespaceIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MigrateSystemTablesToSystemNamespaceIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MigrateSystemTablesToSystemNamespaceIT.java
index ffac4d6..b6f061e 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MigrateSystemTablesToSystemNamespaceIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MigrateSystemTablesToSystemNamespaceIT.java
@@ -62,10 +62,10 @@ public class MigrateSystemTablesToSystemNamespaceIT extends BaseTest {
 
 private static final Set<String> PHOENIX_SYSTEM_TABLES = new HashSet<>(Arrays.asList(
 "SYSTEM.CATALOG", "SYSTEM.SEQUENCE", "SYSTEM.STATS", "SYSTEM.FUNCTION",
-"SYSTEM.MUTEX","SYSTEM.LOG", "SYSTEM.CHILD_LINK"));
+"SYSTEM.MUTEX","SYSTEM.LOG", "SYSTEM.CHILD_LINK", "SYSTEM.TASK"));
 private static final Set<String> PHOENIX_NAMESPACE_MAPPED_SYSTEM_TABLES = new HashSet<>(
 Arrays.asList("SYSTEM:CATALOG", "SYSTEM:SEQUENCE", "SYSTEM:STATS", "SYSTEM:FUNCTION",
-"SYSTEM:MUTEX","SYSTEM:LOG", "SYSTEM:CHILD_LINK"));
+"SYSTEM:MUTEX","SYSTEM:LOG", "SYSTEM:CHILD_LINK", "SYSTEM:TASK"));
 private static final String SCHEMA_NAME = "MIGRATETEST";
 private static final String TABLE_NAME =
 SCHEMA_NAME + "." + MigrateSystemTablesToSystemNamespaceIT.class.getSimpleName().toUpperCase();

http://git-wip-us.apache.org/repos/asf/phoenix/blob/214f72a5/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
index aa2d971..a1685c44 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
@@ -72,11 +72,11 @@ public class SystemCatalogCreationOnConnectionIT {
 
 private static final Set<String> PHOENIX_SYSTEM_TABLES = new HashSet<>(Arrays.asList(
   "SYSTEM.CATALOG", "SYSTEM.SEQUENCE", "SYSTEM.STATS", "SYSTEM.FUNCTION",
-  "SYSTEM.MUTEX", "SYSTEM.LOG", "SYSTEM.CHILD_LINK"));
+  "SYSTEM.MUTEX", "SYSTEM.LOG", "SYSTEM.CHILD_LINK", "SYSTEM.TASK"));
 
 private static final Set<String> PHOENIX_NAMESPACE_MAPPED_SYSTEM_TABLES = new HashSet<>(
   Arrays.asList("SYSTEM:CATALOG", "SYSTEM:SEQUENCE", "SYSTEM:STATS", "SYSTEM:FUNCTION",
-"SYSTEM:MUTEX", "SYSTEM:LOG", "SYSTEM:CHILD_LINK"));
+"SYSTEM:MUTEX", "SYSTEM:LOG", "SYSTEM:CHILD_LINK", "SYSTEM:TASK"));
 
 private static class PhoenixSysCatCreationServices extends ConnectionQueryServicesImpl {
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/214f72a5/phoenix-core/src/it/java/org/apache/phoenix/monitoring/BasePhoenixMetricsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/monitoring/BasePhoenixMetricsIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/monitoring/BasePhoenixMetricsIT.java
index 5c016f6..9c2b763 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/monitoring/BasePhoenixMetricsIT.java
+++ 
b/pho

phoenix git commit: PHOENIX-4764 Cleanup metadata of child views for a base table that has been dropped (addendum)

2018-10-30 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.3 647887947 -> c936d8151


PHOENIX-4764 Cleanup metadata of child views for a base table that has been 
dropped (addendum)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/c936d815
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/c936d815
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/c936d815

Branch: refs/heads/4.x-HBase-1.3
Commit: c936d815119b73dc927a7b3bb6a2b577b4a2f91c
Parents: 6478879
Author: Kadir 
Authored: Tue Oct 30 17:47:26 2018 -0700
Committer: Karan Mehta 
Committed: Tue Oct 30 18:33:10 2018 -0700

--
 .../end2end/MigrateSystemTablesToSystemNamespaceIT.java   | 4 ++--
 .../phoenix/end2end/SystemCatalogCreationOnConnectionIT.java  | 4 ++--
 .../org/apache/phoenix/monitoring/BasePhoenixMetricsIT.java   | 4 +++-
 .../org/apache/phoenix/coprocessor/TaskRegionObserver.java| 7 +--
 .../org/apache/phoenix/query/ConnectionQueryServicesImpl.java | 2 +-
 .../src/main/java/org/apache/phoenix/query/QueryServices.java | 2 ++
 .../java/org/apache/phoenix/query/QueryServicesOptions.java   | 3 ++-
 7 files changed, 17 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/c936d815/phoenix-core/src/it/java/org/apache/phoenix/end2end/MigrateSystemTablesToSystemNamespaceIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MigrateSystemTablesToSystemNamespaceIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MigrateSystemTablesToSystemNamespaceIT.java
index ffac4d6..b6f061e 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MigrateSystemTablesToSystemNamespaceIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MigrateSystemTablesToSystemNamespaceIT.java
@@ -62,10 +62,10 @@ public class MigrateSystemTablesToSystemNamespaceIT extends BaseTest {
 
 private static final Set<String> PHOENIX_SYSTEM_TABLES = new HashSet<>(Arrays.asList(
 "SYSTEM.CATALOG", "SYSTEM.SEQUENCE", "SYSTEM.STATS", "SYSTEM.FUNCTION",
-"SYSTEM.MUTEX","SYSTEM.LOG", "SYSTEM.CHILD_LINK"));
+"SYSTEM.MUTEX","SYSTEM.LOG", "SYSTEM.CHILD_LINK", "SYSTEM.TASK"));
 private static final Set<String> PHOENIX_NAMESPACE_MAPPED_SYSTEM_TABLES = new HashSet<>(
 Arrays.asList("SYSTEM:CATALOG", "SYSTEM:SEQUENCE", "SYSTEM:STATS", "SYSTEM:FUNCTION",
-"SYSTEM:MUTEX","SYSTEM:LOG", "SYSTEM:CHILD_LINK"));
+"SYSTEM:MUTEX","SYSTEM:LOG", "SYSTEM:CHILD_LINK", "SYSTEM:TASK"));
 private static final String SCHEMA_NAME = "MIGRATETEST";
 private static final String TABLE_NAME =
 SCHEMA_NAME + "." + MigrateSystemTablesToSystemNamespaceIT.class.getSimpleName().toUpperCase();

http://git-wip-us.apache.org/repos/asf/phoenix/blob/c936d815/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
index aa2d971..a1685c44 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
@@ -72,11 +72,11 @@ public class SystemCatalogCreationOnConnectionIT {
 
 private static final Set<String> PHOENIX_SYSTEM_TABLES = new HashSet<>(Arrays.asList(
   "SYSTEM.CATALOG", "SYSTEM.SEQUENCE", "SYSTEM.STATS", "SYSTEM.FUNCTION",
-  "SYSTEM.MUTEX", "SYSTEM.LOG", "SYSTEM.CHILD_LINK"));
+  "SYSTEM.MUTEX", "SYSTEM.LOG", "SYSTEM.CHILD_LINK", "SYSTEM.TASK"));
 
 private static final Set<String> PHOENIX_NAMESPACE_MAPPED_SYSTEM_TABLES = new HashSet<>(
   Arrays.asList("SYSTEM:CATALOG", "SYSTEM:SEQUENCE", "SYSTEM:STATS", "SYSTEM:FUNCTION",
-"SYSTEM:MUTEX", "SYSTEM:LOG", "SYSTEM:CHILD_LINK"));
+"SYSTEM:MUTEX", "SYSTEM:LOG", "SYSTEM:CHILD_LINK", "SYSTEM:TASK"));
 
 private static class PhoenixSysCatCreationServices extends ConnectionQueryServicesImpl {
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/c936d815/phoenix-core/src/it/java/org/apache/phoenix/monitoring/BasePhoenixMetricsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/monitoring/BasePhoenixMetricsIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/monitoring/BasePhoenixMetricsIT.java
index 5c016f6..9c2b763 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/monitoring/BasePhoenixMetricsIT.java
+++ 
b/pho

phoenix git commit: PHOENIX-4764 Cleanup metadata of child views for a base table that has been dropped (addendum)

2018-10-30 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 705cfbdfc -> ceff81708


PHOENIX-4764 Cleanup metadata of child views for a base table that has been 
dropped (addendum)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/ceff8170
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/ceff8170
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/ceff8170

Branch: refs/heads/4.x-HBase-1.2
Commit: ceff8170860dc4381342e10a0fcbad9ec6bd0f8b
Parents: 705cfbd
Author: Kadir 
Authored: Tue Oct 30 17:47:26 2018 -0700
Committer: Karan Mehta 
Committed: Tue Oct 30 18:33:33 2018 -0700

--
 .../end2end/MigrateSystemTablesToSystemNamespaceIT.java   | 4 ++--
 .../phoenix/end2end/SystemCatalogCreationOnConnectionIT.java  | 4 ++--
 .../org/apache/phoenix/monitoring/BasePhoenixMetricsIT.java   | 4 +++-
 .../org/apache/phoenix/coprocessor/TaskRegionObserver.java| 7 +--
 .../org/apache/phoenix/query/ConnectionQueryServicesImpl.java | 2 +-
 .../src/main/java/org/apache/phoenix/query/QueryServices.java | 2 ++
 .../java/org/apache/phoenix/query/QueryServicesOptions.java   | 3 ++-
 7 files changed, 17 insertions(+), 9 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/ceff8170/phoenix-core/src/it/java/org/apache/phoenix/end2end/MigrateSystemTablesToSystemNamespaceIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MigrateSystemTablesToSystemNamespaceIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MigrateSystemTablesToSystemNamespaceIT.java
index ffac4d6..b6f061e 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MigrateSystemTablesToSystemNamespaceIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MigrateSystemTablesToSystemNamespaceIT.java
@@ -62,10 +62,10 @@ public class MigrateSystemTablesToSystemNamespaceIT extends BaseTest {
 
 private static final Set<String> PHOENIX_SYSTEM_TABLES = new HashSet<>(Arrays.asList(
 "SYSTEM.CATALOG", "SYSTEM.SEQUENCE", "SYSTEM.STATS", "SYSTEM.FUNCTION",
-"SYSTEM.MUTEX","SYSTEM.LOG", "SYSTEM.CHILD_LINK"));
+"SYSTEM.MUTEX","SYSTEM.LOG", "SYSTEM.CHILD_LINK", "SYSTEM.TASK"));
 private static final Set<String> PHOENIX_NAMESPACE_MAPPED_SYSTEM_TABLES = new HashSet<>(
 Arrays.asList("SYSTEM:CATALOG", "SYSTEM:SEQUENCE", "SYSTEM:STATS", "SYSTEM:FUNCTION",
-"SYSTEM:MUTEX","SYSTEM:LOG", "SYSTEM:CHILD_LINK"));
+"SYSTEM:MUTEX","SYSTEM:LOG", "SYSTEM:CHILD_LINK", "SYSTEM:TASK"));
 private static final String SCHEMA_NAME = "MIGRATETEST";
 private static final String TABLE_NAME =
 SCHEMA_NAME + "." + MigrateSystemTablesToSystemNamespaceIT.class.getSimpleName().toUpperCase();

http://git-wip-us.apache.org/repos/asf/phoenix/blob/ceff8170/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
index aa2d971..a1685c44 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
@@ -72,11 +72,11 @@ public class SystemCatalogCreationOnConnectionIT {
 
 private static final Set<String> PHOENIX_SYSTEM_TABLES = new HashSet<>(Arrays.asList(
   "SYSTEM.CATALOG", "SYSTEM.SEQUENCE", "SYSTEM.STATS", "SYSTEM.FUNCTION",
-  "SYSTEM.MUTEX", "SYSTEM.LOG", "SYSTEM.CHILD_LINK"));
+  "SYSTEM.MUTEX", "SYSTEM.LOG", "SYSTEM.CHILD_LINK", "SYSTEM.TASK"));
 
 private static final Set<String> PHOENIX_NAMESPACE_MAPPED_SYSTEM_TABLES = new HashSet<>(
   Arrays.asList("SYSTEM:CATALOG", "SYSTEM:SEQUENCE", "SYSTEM:STATS", "SYSTEM:FUNCTION",
-"SYSTEM:MUTEX", "SYSTEM:LOG", "SYSTEM:CHILD_LINK"));
+"SYSTEM:MUTEX", "SYSTEM:LOG", "SYSTEM:CHILD_LINK", "SYSTEM:TASK"));
 
 private static class PhoenixSysCatCreationServices extends ConnectionQueryServicesImpl {
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/ceff8170/phoenix-core/src/it/java/org/apache/phoenix/monitoring/BasePhoenixMetricsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/monitoring/BasePhoenixMetricsIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/monitoring/BasePhoenixMetricsIT.java
index 5c016f6..9c2b763 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/monitoring/BasePhoenixMetricsIT.java
+++ 
b/pho

phoenix git commit: PHOENIX-4997 Phoenix MR on snapshots can produce duplicate rows

2018-11-01 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.4 bbd31a9e0 -> 8ccf69f00


PHOENIX-4997 Phoenix MR on snapshots can produce duplicate rows


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/8ccf69f0
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/8ccf69f0
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/8ccf69f0

Branch: refs/heads/4.x-HBase-1.4
Commit: 8ccf69f00d318873ddf0e77e73bf8e5045fdb3c4
Parents: bbd31a9
Author: Karan Mehta 
Authored: Thu Nov 1 17:15:26 2018 -0700
Committer: Karan Mehta 
Committed: Thu Nov 1 17:49:55 2018 -0700

--
 .../end2end/TableSnapshotReadsMapReduceIT.java  | 122 +++
 .../iterate/MapReduceParallelScanGrouper.java   |  32 -
 .../iterate/TableSnapshotResultIterator.java|  28 +++--
 .../java/org/apache/phoenix/query/BaseTest.java |  14 +--
 4 files changed, 122 insertions(+), 74 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/8ccf69f0/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
index cae91a3..e35e159 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
@@ -36,6 +36,7 @@ import java.util.UUID;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HRegionInfo;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
@@ -49,12 +50,18 @@ import org.apache.phoenix.mapreduce.index.PhoenixIndexDBWritable;
 import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;
 import org.apache.phoenix.util.ReadOnlyProps;
 import org.junit.Assert;
+import org.junit.Before;
 import org.junit.BeforeClass;
 import org.junit.Test;
 
 import com.google.common.collect.Maps;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 public class TableSnapshotReadsMapReduceIT extends BaseUniqueNamesOwnClusterIT {
+
+  private static final Logger logger = LoggerFactory.getLogger(TableSnapshotReadsMapReduceIT.class);
+
   private final static String SNAPSHOT_NAME = "FOO";
   private static final String FIELD1 = "FIELD1";
   private static final String FIELD2 = "FIELD2";
@@ -66,6 +73,9 @@ public class TableSnapshotReadsMapReduceIT extends BaseUniqueNamesOwnClusterIT {
   private static List<List<Object>> result;
   private long timestamp;
   private String tableName;
+  private Job job;
+  private Path tmpDir;
+  private Configuration conf;
 
   @BeforeClass
   public static void doSetup() throws Exception {
@@ -73,8 +83,8 @@ public class TableSnapshotReadsMapReduceIT extends BaseUniqueNamesOwnClusterIT {
   setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
   }
 
-  @Test
-  public void testMapReduceSnapshots() throws Exception {
+  @Before
+  public void before() throws SQLException, IOException {
 // create table
 Connection conn = DriverManager.getConnection(getUrl());
 tableName = generateUniqueName();
@@ -82,58 +92,43 @@ public class TableSnapshotReadsMapReduceIT extends BaseUniqueNamesOwnClusterIT {
 conn.commit();
 
 // configure Phoenix M/R job to read snapshot
-final Configuration conf = getUtility().getConfiguration();
-Job job = Job.getInstance(conf);
-Path tmpDir = getUtility().getRandomDir();
+conf = getUtility().getConfiguration();
+job = Job.getInstance(conf);
+tmpDir = getUtility().getRandomDir();
+  }
 
-PhoenixMapReduceUtil.setInput(job,PhoenixIndexDBWritable.class,SNAPSHOT_NAME,tableName,tmpDir, null, FIELD1, FIELD2, FIELD3);
+  @Test
+  public void testMapReduceSnapshots() throws Exception {
+PhoenixMapReduceUtil.setInput(job,PhoenixIndexDBWritable.class,
+SNAPSHOT_NAME, tableName, tmpDir, null, FIELD1, FIELD2, FIELD3);
+configureJob(job, tableName, null, null, false);
+  }
 
-// configure and test job
-configureJob(job, tableName, null, null);
+  @Test
+  public void testMapReduceSnapshotsMultiRegion() throws Exception {
+PhoenixMapReduceUtil.setInput(job,PhoenixIndexDBWritable.class,
+SNAPSHOT_NAME, tableName, tmpDir, null, FIELD1, FIELD2, FIELD3);
+configureJob(job, tableName, null, null, true);
   }
 
   @Test
   public void testMapReduceSnapshotsWithCondition() throws Exception {
-// create table
-Connection conn = DriverManager.getConnection(getUrl());
-tableName = generateUnique
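
[Editorial note: the PHOENIX-4997 hunks above move the shared setup (table creation, the MapReduce Job, the temp dir) out of each @Test method into a single @Before hook, so each test only configures what is specific to it. A plain-Java sketch of that lifecycle follows; JUnit and the Phoenix test utilities are stubbed out, and all names and bodies here are illustrative assumptions, not Phoenix code.]

```java
public class BeforeHookSketch {
    private String tableName;
    private String tmpDir;

    // Stands in for the @Before method added by the patch: runs before
    // every test and prepares the shared fixtures.
    void before() {
        tableName = "T_" + Long.toHexString(42L); // stand-in for generateUniqueName()
        tmpDir = "/tmp/snapshot-" + tableName;    // stand-in for getRandomDir()
    }

    // Stands in for one @Test method after the refactor: it relies on the
    // fixtures and only adds its own configuration.
    String testMapReduceSnapshots() {
        before(); // JUnit would invoke the @Before hook automatically
        return "input=" + tableName + " dir=" + tmpDir;
    }

    public static void main(String[] args) {
        // Prints: input=T_2a dir=/tmp/snapshot-T_2a
        System.out.println(new BeforeHookSketch().testMapReduceSnapshots());
    }
}
```

The payoff of the refactor is visible in the diff: three tests share one setup path instead of each repeating the table creation and job wiring.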

phoenix git commit: PHOENIX-4997 Phoenix MR on snapshots can produce duplicate rows

2018-11-01 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.3 c3ed4f844 -> 1b2a3d5c7


PHOENIX-4997 Phoenix MR on snapshots can produce duplicate rows


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/1b2a3d5c
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/1b2a3d5c
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/1b2a3d5c

Branch: refs/heads/4.x-HBase-1.3
Commit: 1b2a3d5c7f25864158cd5062f2cd58d834fb5e1f
Parents: c3ed4f8
Author: Karan Mehta 
Authored: Thu Nov 1 17:15:26 2018 -0700
Committer: Karan Mehta 
Committed: Thu Nov 1 17:54:37 2018 -0700

--
 .../end2end/TableSnapshotReadsMapReduceIT.java  | 122 +++
 .../iterate/MapReduceParallelScanGrouper.java   |  32 -
 .../iterate/TableSnapshotResultIterator.java|  28 +++--
 .../java/org/apache/phoenix/query/BaseTest.java |  14 +--
 4 files changed, 122 insertions(+), 74 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/1b2a3d5c/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
index cae91a3..e35e159 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
@@ -36,6 +36,7 @@ import java.util.UUID;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HRegionInfo;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
@@ -49,12 +50,18 @@ import org.apache.phoenix.mapreduce.index.PhoenixIndexDBWritable;
 import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;
 import org.apache.phoenix.util.ReadOnlyProps;
 import org.junit.Assert;
+import org.junit.Before;
 import org.junit.BeforeClass;
 import org.junit.Test;
 
 import com.google.common.collect.Maps;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 public class TableSnapshotReadsMapReduceIT extends BaseUniqueNamesOwnClusterIT {
+
+  private static final Logger logger = LoggerFactory.getLogger(TableSnapshotReadsMapReduceIT.class);
+
   private final static String SNAPSHOT_NAME = "FOO";
   private static final String FIELD1 = "FIELD1";
   private static final String FIELD2 = "FIELD2";
@@ -66,6 +73,9 @@ public class TableSnapshotReadsMapReduceIT extends 
BaseUniqueNamesOwnClusterIT {
  private static List<List<Object>> result;
   private long timestamp;
   private String tableName;
+  private Job job;
+  private Path tmpDir;
+  private Configuration conf;
 
   @BeforeClass
   public static void doSetup() throws Exception {
@@ -73,8 +83,8 @@ public class TableSnapshotReadsMapReduceIT extends 
BaseUniqueNamesOwnClusterIT {
   setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
   }
 
-  @Test
-  public void testMapReduceSnapshots() throws Exception {
+  @Before
+  public void before() throws SQLException, IOException {
 // create table
 Connection conn = DriverManager.getConnection(getUrl());
 tableName = generateUniqueName();
@@ -82,58 +92,43 @@ public class TableSnapshotReadsMapReduceIT extends 
BaseUniqueNamesOwnClusterIT {
 conn.commit();
 
 // configure Phoenix M/R job to read snapshot
-final Configuration conf = getUtility().getConfiguration();
-Job job = Job.getInstance(conf);
-Path tmpDir = getUtility().getRandomDir();
+conf = getUtility().getConfiguration();
+job = Job.getInstance(conf);
+tmpDir = getUtility().getRandomDir();
+  }
 
-
PhoenixMapReduceUtil.setInput(job,PhoenixIndexDBWritable.class,SNAPSHOT_NAME,tableName,tmpDir,
 null, FIELD1, FIELD2, FIELD3);
+  @Test
+  public void testMapReduceSnapshots() throws Exception {
+PhoenixMapReduceUtil.setInput(job,PhoenixIndexDBWritable.class,
+SNAPSHOT_NAME, tableName, tmpDir, null, FIELD1, FIELD2, FIELD3);
+configureJob(job, tableName, null, null, false);
+  }
 
-// configure and test job
-configureJob(job, tableName, null, null);
+  @Test
+  public void testMapReduceSnapshotsMultiRegion() throws Exception {
+PhoenixMapReduceUtil.setInput(job,PhoenixIndexDBWritable.class,
+SNAPSHOT_NAME, tableName, tmpDir, null, FIELD1, FIELD2, FIELD3);
+configureJob(job, tableName, null, null, true);
   }
 
   @Test
   public void testMapReduceSnapshotsWithCondition() throws Exception {
-// create table
-Connection conn = DriverManager.getConnection(getUrl());
-tableName = generateUnique
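[Editor's sketch] The diff above moves the shared fixture work (table creation, `Job` and `tmpDir` construction) out of each `@Test` and into a single `@Before` method, so every test body keeps only the `setInput`/`configureJob` call that varies. A minimal, self-contained illustration of that JUnit 4 lifecycle pattern follows; it hand-rolls the runner's before-each behavior so it runs without JUnit on the classpath, and all names (`LifecycleSketch`, `runAll`) are illustrative, not Phoenix's.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Hand-rolled sketch of the JUnit 4 lifecycle the refactor relies on:
 * before() runs ahead of every test method, so fixture construction lives
 * in one place and each test keeps only what differs between tests.
 */
public class LifecycleSketch {
    private String tableName;                       // fresh fixture per test
    private final List<String> trace = new ArrayList<>();
    private int counter = 0;

    void before() {                                 // would be @Before in JUnit 4
        tableName = "T" + (counter++);              // unique table name per test
        trace.add("setup:" + tableName);
    }

    void testSingleRegion() { trace.add("single:" + tableName); }
    void testMultiRegion()  { trace.add("multi:" + tableName); }

    List<String> runAll() {                         // what the JUnit runner does
        before(); testSingleRegion();
        before(); testMultiRegion();
        return trace;
    }

    public static void main(String[] args) {
        // prints: [setup:T0, single:T0, setup:T1, multi:T1]
        System.out.println(new LifecycleSketch().runAll());
    }
}
```

Each test still sees an isolated fixture, because the runner rebuilds it before every method rather than once per class.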

phoenix git commit: PHOENIX-4997 Phoenix MR on snapshots can produce duplicate rows

2018-11-01 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 51cfd7d55 -> 0c3f43384


PHOENIX-4997 Phoenix MR on snapshots can produce duplicate rows


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/0c3f4338
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/0c3f4338
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/0c3f4338

Branch: refs/heads/4.x-HBase-1.2
Commit: 0c3f433842291870d9b0afa4745d57caef16ad3d
Parents: 51cfd7d
Author: Karan Mehta 
Authored: Thu Nov 1 17:15:26 2018 -0700
Committer: Karan Mehta 
Committed: Thu Nov 1 17:55:15 2018 -0700

--
 .../end2end/TableSnapshotReadsMapReduceIT.java  | 122 +++
 .../iterate/MapReduceParallelScanGrouper.java   |  32 -
 .../iterate/TableSnapshotResultIterator.java|  28 +++--
 .../java/org/apache/phoenix/query/BaseTest.java |  14 +--
 4 files changed, 122 insertions(+), 74 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/0c3f4338/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
index cae91a3..e35e159 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
@@ -36,6 +36,7 @@ import java.util.UUID;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HRegionInfo;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
@@ -49,12 +50,18 @@ import 
org.apache.phoenix.mapreduce.index.PhoenixIndexDBWritable;
 import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;
 import org.apache.phoenix.util.ReadOnlyProps;
 import org.junit.Assert;
+import org.junit.Before;
 import org.junit.BeforeClass;
 import org.junit.Test;
 
 import com.google.common.collect.Maps;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 public class TableSnapshotReadsMapReduceIT extends BaseUniqueNamesOwnClusterIT 
{
+
+  private static final Logger logger = 
LoggerFactory.getLogger(TableSnapshotReadsMapReduceIT.class);
+
   private final static String SNAPSHOT_NAME = "FOO";
   private static final String FIELD1 = "FIELD1";
   private static final String FIELD2 = "FIELD2";
@@ -66,6 +73,9 @@ public class TableSnapshotReadsMapReduceIT extends 
BaseUniqueNamesOwnClusterIT {
  private static List<List<Object>> result;
   private long timestamp;
   private String tableName;
+  private Job job;
+  private Path tmpDir;
+  private Configuration conf;
 
   @BeforeClass
   public static void doSetup() throws Exception {
@@ -73,8 +83,8 @@ public class TableSnapshotReadsMapReduceIT extends 
BaseUniqueNamesOwnClusterIT {
   setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
   }
 
-  @Test
-  public void testMapReduceSnapshots() throws Exception {
+  @Before
+  public void before() throws SQLException, IOException {
 // create table
 Connection conn = DriverManager.getConnection(getUrl());
 tableName = generateUniqueName();
@@ -82,58 +92,43 @@ public class TableSnapshotReadsMapReduceIT extends 
BaseUniqueNamesOwnClusterIT {
 conn.commit();
 
 // configure Phoenix M/R job to read snapshot
-final Configuration conf = getUtility().getConfiguration();
-Job job = Job.getInstance(conf);
-Path tmpDir = getUtility().getRandomDir();
+conf = getUtility().getConfiguration();
+job = Job.getInstance(conf);
+tmpDir = getUtility().getRandomDir();
+  }
 
-
PhoenixMapReduceUtil.setInput(job,PhoenixIndexDBWritable.class,SNAPSHOT_NAME,tableName,tmpDir,
 null, FIELD1, FIELD2, FIELD3);
+  @Test
+  public void testMapReduceSnapshots() throws Exception {
+PhoenixMapReduceUtil.setInput(job,PhoenixIndexDBWritable.class,
+SNAPSHOT_NAME, tableName, tmpDir, null, FIELD1, FIELD2, FIELD3);
+configureJob(job, tableName, null, null, false);
+  }
 
-// configure and test job
-configureJob(job, tableName, null, null);
+  @Test
+  public void testMapReduceSnapshotsMultiRegion() throws Exception {
+PhoenixMapReduceUtil.setInput(job,PhoenixIndexDBWritable.class,
+SNAPSHOT_NAME, tableName, tmpDir, null, FIELD1, FIELD2, FIELD3);
+configureJob(job, tableName, null, null, true);
   }
 
   @Test
   public void testMapReduceSnapshotsWithCondition() throws Exception {
-// create table
-Connection conn = DriverManager.getConnection(getUrl());
-tableName = generateUnique

phoenix git commit: PHOENIX-4997 Phoenix MR on snapshots can produce duplicate rows

2018-11-01 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/master 59a7dd138 -> c2d33ed38


PHOENIX-4997 Phoenix MR on snapshots can produce duplicate rows


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/c2d33ed3
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/c2d33ed3
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/c2d33ed3

Branch: refs/heads/master
Commit: c2d33ed384467fea4655e6f49ce43834f3886409
Parents: 59a7dd1
Author: Karan Mehta 
Authored: Thu Nov 1 17:15:26 2018 -0700
Committer: Karan Mehta 
Committed: Thu Nov 1 17:56:25 2018 -0700

--
 .../end2end/TableSnapshotReadsMapReduceIT.java  | 123 +++
 .../iterate/MapReduceParallelScanGrouper.java   |  34 -
 .../iterate/TableSnapshotResultIterator.java|  30 +++--
 .../java/org/apache/phoenix/query/BaseTest.java |  12 +-
 4 files changed, 126 insertions(+), 73 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/c2d33ed3/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
index fcf89a0..4aaeef2 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TableSnapshotReadsMapReduceIT.java
@@ -36,6 +36,7 @@ import java.util.UUID;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.HRegionInfo;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.Admin;
 import org.apache.hadoop.hbase.client.SnapshotDescription;
@@ -47,14 +48,21 @@ import 
org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.mapreduce.index.PhoenixIndexDBWritable;
 import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;
+import org.apache.phoenix.query.BaseTest;
 import org.apache.phoenix.util.ReadOnlyProps;
 import org.junit.Assert;
+import org.junit.Before;
 import org.junit.BeforeClass;
 import org.junit.Test;
 
 import com.google.common.collect.Maps;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 public class TableSnapshotReadsMapReduceIT extends BaseUniqueNamesOwnClusterIT 
{
+
+  private static final Logger logger = 
LoggerFactory.getLogger(TableSnapshotReadsMapReduceIT.class);
+
   private final static String SNAPSHOT_NAME = "FOO";
   private static final String FIELD1 = "FIELD1";
   private static final String FIELD2 = "FIELD2";
@@ -66,6 +74,9 @@ public class TableSnapshotReadsMapReduceIT extends 
BaseUniqueNamesOwnClusterIT {
  private static List<List<Object>> result;
   private long timestamp;
   private String tableName;
+  private Job job;
+  private Path tmpDir;
+  private Configuration conf;
 
   @BeforeClass
   public static void doSetup() throws Exception {
@@ -73,8 +84,8 @@ public class TableSnapshotReadsMapReduceIT extends 
BaseUniqueNamesOwnClusterIT {
   setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
   }
 
-  @Test
-  public void testMapReduceSnapshots() throws Exception {
+  @Before
+  public void before() throws SQLException, IOException {
 // create table
 Connection conn = DriverManager.getConnection(getUrl());
 tableName = generateUniqueName();
@@ -82,58 +93,43 @@ public class TableSnapshotReadsMapReduceIT extends 
BaseUniqueNamesOwnClusterIT {
 conn.commit();
 
 // configure Phoenix M/R job to read snapshot
-final Configuration conf = getUtility().getConfiguration();
-Job job = Job.getInstance(conf);
-Path tmpDir = getUtility().getRandomDir();
+conf = getUtility().getConfiguration();
+job = Job.getInstance(conf);
+tmpDir = getUtility().getRandomDir();
+  }
 
-
PhoenixMapReduceUtil.setInput(job,PhoenixIndexDBWritable.class,SNAPSHOT_NAME,tableName,tmpDir,
 null, FIELD1, FIELD2, FIELD3);
+  @Test
+  public void testMapReduceSnapshots() throws Exception {
+PhoenixMapReduceUtil.setInput(job,PhoenixIndexDBWritable.class,
+SNAPSHOT_NAME, tableName, tmpDir, null, FIELD1, FIELD2, FIELD3);
+configureJob(job, tableName, null, null, false);
+  }
 
-// configure and test job
-configureJob(job, tableName, null, null);
+  @Test
+  public void testMapReduceSnapshotsMultiRegion() throws Exception {
+PhoenixMapReduceUtil.setInput(job,PhoenixIndexDBWritable.class,
+SNAPSHOT_NAME, tableName, tmpDir, null, FIELD1, FIELD2, FIELD3);
+configureJob(job, tableName, null, null, true);
   }
 
   @Test
   public void testMapReduceSnapshotsWithCondi
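[Editor's sketch] The hunks that actually fix PHOENIX-4997 (in MapReduceParallelScanGrouper and TableSnapshotResultIterator) are truncated above, so the following is only a generic illustration of the idea the commit title implies: when scan ranges are derived per snapshot region, keeping exactly one entry per region start key prevents the same rows from being emitted twice. This is not Phoenix's implementation; the string-pair encoding of a region/scan is hypothetical.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/**
 * Generic duplicate-split illustration: given (regionStartKey, scanId)
 * pairs, retain the first scan seen for each region start key so no
 * region's rows are read by two scans.
 */
public class DedupSplitsSketch {
    static List<String> dedupByStartKey(List<String[]> scans) {
        Map<String, String> byRegion = new LinkedHashMap<>();
        for (String[] s : scans) {
            byRegion.putIfAbsent(s[0], s[1]);   // first scan per region wins
        }
        return new ArrayList<>(byRegion.values());
    }

    public static void main(String[] args) {
        List<String[]> scans = new ArrayList<>();
        scans.add(new String[] {"a", "scan1"});
        scans.add(new String[] {"a", "scan2"}); // duplicate region -> dropped
        scans.add(new String[] {"m", "scan3"});
        System.out.println(dedupByStartKey(scans)); // prints: [scan1, scan3]
    }
}
```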

phoenix git commit: PHOENIX-5000 Make SecureUserConnectionsTest as Integration test

2018-11-15 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/master 3690c6323 -> cd31ed5e8


PHOENIX-5000 Make SecureUserConnectionsTest as Integration test


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/cd31ed5e
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/cd31ed5e
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/cd31ed5e

Branch: refs/heads/master
Commit: cd31ed5e8cef8ad8b6d20f7417974c47002f4297
Parents: 3690c63
Author: Karan Mehta 
Authored: Tue Oct 30 12:40:00 2018 -0700
Committer: Karan Mehta 
Committed: Thu Nov 15 14:50:05 2018 -0800

--
 .../phoenix/jdbc/SecureUserConnectionsIT.java   | 459 ++
 .../phoenix/jdbc/SecureUserConnectionsTest.java | 460 ---
 2 files changed, 459 insertions(+), 460 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/cd31ed5e/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java
new file mode 100644
index 000..eaf981b
--- /dev/null
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java
@@ -0,0 +1,459 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.jdbc;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.File;
+import java.io.IOException;
+import java.lang.reflect.Field;
+import java.security.PrivilegedExceptionAction;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Properties;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.hbase.security.User;
+import org.apache.hadoop.minikdc.MiniKdc;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.security.authentication.util.KerberosName;
+import org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.ConnectionInfo;
+import org.apache.phoenix.query.ConfigurationFactory;
+import org.apache.phoenix.util.InstanceResolver;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Tests ConnectionQueryServices caching when Kerberos authentication is 
enabled. It's not
+ * trivial to directly test this, so we exploit the knowledge that the caching 
is driven by
+ * a ConcurrentHashMap. We can use a HashSet to determine when instances of 
ConnectionInfo
+ * collide and when they do not.
+ */
+public class SecureUserConnectionsIT {
+private static final Log LOG = 
LogFactory.getLog(SecureUserConnectionsIT.class);
+private static final int KDC_START_ATTEMPTS = 10;
+
+private static final File TEMP_DIR = new File(getClassTempDir());
+private static final File KEYTAB_DIR = new File(TEMP_DIR, "keytabs");
+private static final File KDC_DIR = new File(TEMP_DIR, "kdc");
+private static final List<File> USER_KEYTAB_FILES = new ArrayList<>();
+private static final List<File> SERVICE_KEYTAB_FILES = new ArrayList<>();
+private static final int NUM_USERS = 3;
+private static final Properties EMPTY_PROPERTIES = new Properties();
+private static final String BASE_URL = PhoenixRuntime.JDBC_PROTOCOL + 
":localhost:2181";
+
+private static MiniKdc KDC;
+
+@BeforeClass
+public static void setupKdc() throws Exception {
+ensureIsEmptyDirectory(KDC_DIR);
+ensureIsEmptyDirectory(KEYTAB_DIR);
+// Create and start the KDC. MiniKDC appears to have a race condition 
in how it does
+// port allocation (with ap
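[Editor's sketch] The class javadoc above explains the test's core trick: since ConnectionQueryServices caching is driven by a ConcurrentHashMap, a plain HashSet over ConnectionInfo instances reveals whether two connection descriptors would collide on one cache entry. A minimal standalone version of that technique follows; `ConnInfo` is a hypothetical stand-in for Phoenix's ConnectionInfo, with equality driven by the (principal, keytab) pair.

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Stand-in for a cache key type whose equals/hashCode decide cache collisions.
final class ConnInfo {
    private final String principal;
    private final String keytab;

    ConnInfo(String principal, String keytab) {
        this.principal = principal;
        this.keytab = keytab;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof ConnInfo)) return false;
        ConnInfo other = (ConnInfo) o;
        return Objects.equals(principal, other.principal)
                && Objects.equals(keytab, other.keytab);
    }

    @Override
    public int hashCode() {
        return Objects.hash(principal, keytab);
    }
}

public class CacheCollisionSketch {
    public static void main(String[] args) {
        Set<ConnInfo> seen = new HashSet<>();
        // add() returns false on a collision, i.e. one shared cache entry.
        boolean first = seen.add(new ConnInfo("user1@EXAMPLE.COM", "/tmp/user1.keytab"));
        boolean dup   = seen.add(new ConnInfo("user1@EXAMPLE.COM", "/tmp/user1.keytab"));
        boolean other = seen.add(new ConnInfo("user2@EXAMPLE.COM", "/tmp/user2.keytab"));
        System.out.println(first + " " + dup + " " + other); // prints: true false true
    }
}
```

Asserting on `Set.add`'s return value gives the test a direct, mock-free way to check which credential combinations map to the same ConcurrentHashMap slot.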

phoenix git commit: PHOENIX-5000 Make SecureUserConnectionsTest as Integration test

2018-11-15 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.4 590fec910 -> 3b39feec5


PHOENIX-5000 Make SecureUserConnectionsTest as Integration test


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/3b39feec
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/3b39feec
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/3b39feec

Branch: refs/heads/4.x-HBase-1.4
Commit: 3b39feec5faadbe0f585e3818129f308815b06a1
Parents: 590fec9
Author: Karan Mehta 
Authored: Tue Oct 30 12:40:00 2018 -0700
Committer: Karan Mehta 
Committed: Thu Nov 15 14:53:11 2018 -0800

--
 .../phoenix/jdbc/SecureUserConnectionsIT.java   | 459 +++
 .../phoenix/jdbc/SecureUserConnectionsTest.java | 459 ---
 2 files changed, 459 insertions(+), 459 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/3b39feec/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java
new file mode 100644
index 000..eaf981b
--- /dev/null
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java
@@ -0,0 +1,459 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.jdbc;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.File;
+import java.io.IOException;
+import java.lang.reflect.Field;
+import java.security.PrivilegedExceptionAction;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Properties;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.hbase.security.User;
+import org.apache.hadoop.minikdc.MiniKdc;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.security.authentication.util.KerberosName;
+import org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.ConnectionInfo;
+import org.apache.phoenix.query.ConfigurationFactory;
+import org.apache.phoenix.util.InstanceResolver;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Tests ConnectionQueryServices caching when Kerberos authentication is 
enabled. It's not
+ * trivial to directly test this, so we exploit the knowledge that the caching 
is driven by
+ * a ConcurrentHashMap. We can use a HashSet to determine when instances of 
ConnectionInfo
+ * collide and when they do not.
+ */
+public class SecureUserConnectionsIT {
+private static final Log LOG = 
LogFactory.getLog(SecureUserConnectionsIT.class);
+private static final int KDC_START_ATTEMPTS = 10;
+
+private static final File TEMP_DIR = new File(getClassTempDir());
+private static final File KEYTAB_DIR = new File(TEMP_DIR, "keytabs");
+private static final File KDC_DIR = new File(TEMP_DIR, "kdc");
+private static final List<File> USER_KEYTAB_FILES = new ArrayList<>();
+private static final List<File> SERVICE_KEYTAB_FILES = new ArrayList<>();
+private static final int NUM_USERS = 3;
+private static final Properties EMPTY_PROPERTIES = new Properties();
+private static final String BASE_URL = PhoenixRuntime.JDBC_PROTOCOL + 
":localhost:2181";
+
+private static MiniKdc KDC;
+
+@BeforeClass
+public static void setupKdc() throws Exception {
+ensureIsEmptyDirectory(KDC_DIR);
+ensureIsEmptyDirectory(KEYTAB_DIR);
+// Create and start the KDC. MiniKDC appears to have a race condition 
in how it does
+// port allo

phoenix git commit: PHOENIX-5000 Make SecureUserConnectionsTest as Integration test

2018-11-15 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.3 87e770296 -> 4581516ea


PHOENIX-5000 Make SecureUserConnectionsTest as Integration test


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/4581516e
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/4581516e
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/4581516e

Branch: refs/heads/4.x-HBase-1.3
Commit: 4581516eaea432962c9b61332a5f3f8117792823
Parents: 87e7702
Author: Karan Mehta 
Authored: Tue Oct 30 12:40:00 2018 -0700
Committer: Karan Mehta 
Committed: Thu Nov 15 14:58:46 2018 -0800

--
 .../phoenix/jdbc/SecureUserConnectionsIT.java   | 459 +++
 .../phoenix/jdbc/SecureUserConnectionsTest.java | 459 ---
 2 files changed, 459 insertions(+), 459 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/4581516e/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java
new file mode 100644
index 000..eaf981b
--- /dev/null
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java
@@ -0,0 +1,459 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.jdbc;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.File;
+import java.io.IOException;
+import java.lang.reflect.Field;
+import java.security.PrivilegedExceptionAction;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Properties;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.hbase.security.User;
+import org.apache.hadoop.minikdc.MiniKdc;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.security.authentication.util.KerberosName;
+import org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.ConnectionInfo;
+import org.apache.phoenix.query.ConfigurationFactory;
+import org.apache.phoenix.util.InstanceResolver;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Tests ConnectionQueryServices caching when Kerberos authentication is 
enabled. It's not
+ * trivial to directly test this, so we exploit the knowledge that the caching 
is driven by
+ * a ConcurrentHashMap. We can use a HashSet to determine when instances of 
ConnectionInfo
+ * collide and when they do not.
+ */
+public class SecureUserConnectionsIT {
+private static final Log LOG = 
LogFactory.getLog(SecureUserConnectionsIT.class);
+private static final int KDC_START_ATTEMPTS = 10;
+
+private static final File TEMP_DIR = new File(getClassTempDir());
+private static final File KEYTAB_DIR = new File(TEMP_DIR, "keytabs");
+private static final File KDC_DIR = new File(TEMP_DIR, "kdc");
+private static final List<File> USER_KEYTAB_FILES = new ArrayList<>();
+private static final List<File> SERVICE_KEYTAB_FILES = new ArrayList<>();
+private static final int NUM_USERS = 3;
+private static final Properties EMPTY_PROPERTIES = new Properties();
+private static final String BASE_URL = PhoenixRuntime.JDBC_PROTOCOL + 
":localhost:2181";
+
+private static MiniKdc KDC;
+
+@BeforeClass
+public static void setupKdc() throws Exception {
+ensureIsEmptyDirectory(KDC_DIR);
+ensureIsEmptyDirectory(KEYTAB_DIR);
+// Create and start the KDC. MiniKDC appears to have a race condition 
in how it does
+// port allo

phoenix git commit: PHOENIX-5000 Make SecureUserConnectionsTest as Integration test

2018-11-15 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/4.x-HBase-1.2 d35870043 -> 73bec6fff


PHOENIX-5000 Make SecureUserConnectionsTest as Integration test


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/73bec6ff
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/73bec6ff
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/73bec6ff

Branch: refs/heads/4.x-HBase-1.2
Commit: 73bec6fffdb0e4e806963d015c7a87fca8393abe
Parents: d358700
Author: Karan Mehta 
Authored: Tue Oct 30 12:40:00 2018 -0700
Committer: Karan Mehta 
Committed: Thu Nov 15 14:59:22 2018 -0800

--
 .../phoenix/jdbc/SecureUserConnectionsIT.java   | 459 +++
 .../phoenix/jdbc/SecureUserConnectionsTest.java | 459 ---
 2 files changed, 459 insertions(+), 459 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/73bec6ff/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java
new file mode 100644
index 000..eaf981b
--- /dev/null
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java
@@ -0,0 +1,459 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.jdbc;
+
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertNotNull;
+import static org.junit.Assert.assertTrue;
+
+import java.io.File;
+import java.io.IOException;
+import java.lang.reflect.Field;
+import java.security.PrivilegedExceptionAction;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Properties;
+
+import org.apache.commons.io.FileUtils;
+import org.apache.commons.logging.Log;
+import org.apache.commons.logging.LogFactory;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.CommonConfigurationKeys;
+import org.apache.hadoop.hbase.security.User;
+import org.apache.hadoop.minikdc.MiniKdc;
+import org.apache.hadoop.security.UserGroupInformation;
+import org.apache.hadoop.security.authentication.util.KerberosName;
+import org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.ConnectionInfo;
+import org.apache.phoenix.query.ConfigurationFactory;
+import org.apache.phoenix.util.InstanceResolver;
+import org.apache.phoenix.util.PhoenixRuntime;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.junit.AfterClass;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+/**
+ * Tests ConnectionQueryServices caching when Kerberos authentication is 
enabled. It's not
+ * trivial to directly test this, so we exploit the knowledge that the caching 
is driven by
+ * a ConcurrentHashMap. We can use a HashSet to determine when instances of 
ConnectionInfo
+ * collide and when they do not.
+ */
+public class SecureUserConnectionsIT {
+private static final Log LOG = 
LogFactory.getLog(SecureUserConnectionsIT.class);
+private static final int KDC_START_ATTEMPTS = 10;
+
+private static final File TEMP_DIR = new File(getClassTempDir());
+private static final File KEYTAB_DIR = new File(TEMP_DIR, "keytabs");
+private static final File KDC_DIR = new File(TEMP_DIR, "kdc");
+private static final List<File> USER_KEYTAB_FILES = new ArrayList<>();
+private static final List<File> SERVICE_KEYTAB_FILES = new ArrayList<>();
+private static final int NUM_USERS = 3;
+private static final Properties EMPTY_PROPERTIES = new Properties();
+private static final String BASE_URL = PhoenixRuntime.JDBC_PROTOCOL + 
":localhost:2181";
+
+private static MiniKdc KDC;
+
+@BeforeClass
+public static void setupKdc() throws Exception {
+ensureIsEmptyDirectory(KDC_DIR);
+ensureIsEmptyDirectory(KEYTAB_DIR);
+// Create and start the KDC. MiniKDC appears to have a race condition 
in how it does
+// port allo

phoenix git commit: PHOENIX-5000 Make SecureUserConnectionsTest as Integration test (Addendum)

2018-11-19 Thread karanmehta93
Repository: phoenix
Updated Branches:
  refs/heads/master 956755fab -> cfcf615d9


PHOENIX-5000 Make SecureUserConnectionsTest as Integration test (Addendum)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/cfcf615d
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/cfcf615d
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/cfcf615d

Branch: refs/heads/master
Commit: cfcf615d98c682df3b60aa7bd82c6706082bdac2
Parents: 956755f
Author: Karan Mehta 
Authored: Mon Nov 19 14:48:32 2018 -0800
Committer: Karan Mehta 
Committed: Mon Nov 19 14:48:32 2018 -0800

--
 .../it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java  | 3 +++
 1 file changed, 3 insertions(+)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/cfcf615d/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java
--
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java
index eaf981b..1ab54d2 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/jdbc/SecureUserConnectionsIT.java
@@ -39,6 +39,7 @@ import org.apache.hadoop.hbase.security.User;
 import org.apache.hadoop.minikdc.MiniKdc;
 import org.apache.hadoop.security.UserGroupInformation;
 import org.apache.hadoop.security.authentication.util.KerberosName;
+import org.apache.phoenix.end2end.NeedsOwnMiniClusterTest;
 import org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.ConnectionInfo;
 import org.apache.phoenix.query.ConfigurationFactory;
 import org.apache.phoenix.util.InstanceResolver;
@@ -47,6 +48,7 @@ import org.apache.phoenix.util.ReadOnlyProps;
 import org.junit.AfterClass;
 import org.junit.BeforeClass;
 import org.junit.Test;
+import org.junit.experimental.categories.Category;
 
 /**
  * Tests ConnectionQueryServices caching when Kerberos authentication is 
enabled. It's not
@@ -54,6 +56,7 @@ import org.junit.Test;
  * a ConcurrentHashMap. We can use a HashSet to determine when instances of 
ConnectionInfo
  * collide and when they do not.
  */
+@Category(NeedsOwnMiniClusterTest.class)
 public class SecureUserConnectionsIT {
 private static final Log LOG = 
LogFactory.getLog(SecureUserConnectionsIT.class);
 private static final int KDC_START_ATTEMPTS = 10;


