Apache-Phoenix | EncodeColumns | Build Successful

2016-12-22 Thread Apache Jenkins Server
encodecolumns2 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/encodecolumns2

Compiled Artifacts https://builds.apache.org/job/Phoenix-encode-columns/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-encode-columns/lastCompletedBuild/testReport/

Changes
No changes


Build times for the last couple of runs (latest build time is the right-most). Legend: blue = normal, red = test failure, gray = timeout


[37/42] phoenix git commit: Fix test failures

2016-12-22 Thread tdsilva
Fix test failures


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/4128b563
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/4128b563
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/4128b563

Branch: refs/heads/encodecolumns2
Commit: 4128b5632a6fa5524155362964edf5246b8741da
Parents: 3426262
Author: Samarth 
Authored: Tue Nov 22 22:32:42 2016 -0800
Committer: Thomas D'Silva 
Committed: Thu Dec 22 13:00:44 2016 -0800

--
 .../org/apache/phoenix/compile/WhereCompilerTest.java |  4 ++--
 .../org/apache/phoenix/execute/MutationStateTest.java |  4 ++--
 .../org/apache/phoenix/query/ConnectionlessTest.java  | 14 ++
 3 files changed, 10 insertions(+), 12 deletions(-)
--
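The test changes in this commit flip the last constructor argument of `KeyValueColumnExpression` from `false` to `true`, indicating that the column's HBase qualifier is now the encoded form rather than the column-name bytes. As a rough illustration of why that distinction matters (a hypothetical sketch, not Phoenix code), compare the two qualifier representations:

```java
import java.nio.charset.StandardCharsets;

// Toy illustration: a column qualifier can be the column name's UTF-8 bytes,
// or a compact integer assigned to the column (assumption: 4-byte big-endian).
public class QualifierSketch {
    // Non-encoded: the qualifier is simply the column name, repeated in every cell.
    static byte[] nameQualifier(String columnName) {
        return columnName.getBytes(StandardCharsets.UTF_8);
    }

    // Encoded: the qualifier is a fixed-width integer, independent of name length.
    static byte[] encodedQualifier(int counter) {
        return new byte[] {
            (byte) (counter >>> 24), (byte) (counter >>> 16),
            (byte) (counter >>> 8), (byte) counter
        };
    }

    public static void main(String[] args) {
        byte[] byName = nameQualifier("COMPANY"); // 7 bytes per cell
        byte[] encoded = encodedQualifier(11);    // always 4 bytes
        System.out.println(byName.length + " vs " + encoded.length);
    }
}
```

Since HBase stores the qualifier in every cell, shorter encoded qualifiers shrink storage and I/O for wide names.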


http://git-wip-us.apache.org/repos/asf/phoenix/blob/4128b563/phoenix-core/src/test/java/org/apache/phoenix/compile/WhereCompilerTest.java
--
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/compile/WhereCompilerTest.java b/phoenix-core/src/test/java/org/apache/phoenix/compile/WhereCompilerTest.java
index c65408e..06c20d3 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/compile/WhereCompilerTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/compile/WhereCompilerTest.java
@@ -121,7 +121,7 @@ public class WhereCompilerTest extends BaseConnectionlessQueryTest {
 Filter filter = scan.getFilter();
 Expression idExpression = new ColumnRef(plan.getTableRef(), plan.getTableRef().getTable().getPColumnForColumnName("ID").getPosition()).newColumnExpression();
 Expression id = new RowKeyColumnExpression(idExpression,new RowKeyValueAccessor(plan.getTableRef().getTable().getPKColumns(),0));
-Expression company = new KeyValueColumnExpression(plan.getTableRef().getTable().getPColumnForColumnName("COMPANY"), false);
+Expression company = new KeyValueColumnExpression(plan.getTableRef().getTable().getPColumnForColumnName("COMPANY"), true);
 // FilterList has no equals implementation
 assertTrue(filter instanceof FilterList);
 FilterList filterList = (FilterList)filter;
@@ -153,7 +153,7 @@ public class WhereCompilerTest extends BaseConnectionlessQueryTest {
 assertEquals(
 singleKVFilter(constantComparison(
 CompareOp.EQUAL,
-new KeyValueColumnExpression(column, false),
+new KeyValueColumnExpression(column, true),
 "c3")),
 filter);
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/4128b563/phoenix-core/src/test/java/org/apache/phoenix/execute/MutationStateTest.java
--
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/execute/MutationStateTest.java b/phoenix-core/src/test/java/org/apache/phoenix/execute/MutationStateTest.java
index 276d946..8553b73 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/execute/MutationStateTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/execute/MutationStateTest.java
@@ -127,11 +127,11 @@ public class MutationStateTest {
 private void assertTable(String tableName1,List keyValues1,String tableName2,List keyValues2) {
 assertTrue("MUTATION_TEST1".equals(tableName1));
 assertTrue(Bytes.equals(PUnsignedInt.INSTANCE.toBytes(111),CellUtil.cloneRow(keyValues1.get(0))));
-assertTrue("app1".equals(PVarchar.INSTANCE.toObject(CellUtil.cloneValue(keyValues1.get(0)))));
+assertTrue("app1".equals(PVarchar.INSTANCE.toObject(CellUtil.cloneValue(keyValues1.get(1)))));
 
 assertTrue("MUTATION_TEST2".equals(tableName2));
 assertTrue(Bytes.equals(PUnsignedInt.INSTANCE.toBytes(222),CellUtil.cloneRow(keyValues2.get(0))));
-assertTrue("app2".equals(PVarchar.INSTANCE.toObject(CellUtil.cloneValue(keyValues2.get(0)))));
+assertTrue("app2".equals(PVarchar.INSTANCE.toObject(CellUtil.cloneValue(keyValues2.get(1)))));
 
 }
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/4128b563/phoenix-core/src/test/java/org/apache/phoenix/query/ConnectionlessTest.java
--
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/query/ConnectionlessTest.java b/phoenix-core/src/test/java/org/apache/phoenix/query/ConnectionlessTest.java
index 089c5f1..84cc65c 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/query/ConnectionlessTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/query/ConnectionlessTest.java
@@ -141,31 +141,29 @@ public class ConnectionlessTest {
 assertTrue(iterator.hasNext());
 kv = 

[32/42] phoenix git commit: Introduce the notion of encoding schemes and storage schemes

2016-12-22 Thread tdsilva
Introduce the notion of encoding schemes and storage schemes


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/8a1de1cb
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/8a1de1cb
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/8a1de1cb

Branch: refs/heads/encodecolumns2
Commit: 8a1de1cbc2ec9e0a9d309e05e38fb37e4d78ba08
Parents: 2b3265e
Author: Samarth 
Authored: Tue Nov 22 17:01:15 2016 -0800
Committer: Thomas D'Silva 
Committed: Thu Dec 22 13:00:44 2016 -0800

--
 .../apache/phoenix/compile/FromCompiler.java|   3 +-
 .../apache/phoenix/compile/JoinCompiler.java|   3 +-
 .../compile/TupleProjectionCompiler.java|   6 +-
 .../apache/phoenix/compile/UnionCompiler.java   |   3 +-
 .../coprocessor/MetaDataEndpointImpl.java   |  15 +-
 .../coprocessor/generated/PTableProtos.java | 336 ---
 .../apache/phoenix/index/IndexMaintainer.java   |   2 +-
 .../phoenix/iterate/BaseResultIterators.java|   2 +-
 .../phoenix/jdbc/PhoenixDatabaseMetaData.java   |   2 +
 .../apache/phoenix/query/QueryConstants.java|   4 +-
 .../apache/phoenix/schema/DelegateTable.java|   5 +
 .../apache/phoenix/schema/MetaDataClient.java   |  22 +-
 .../org/apache/phoenix/schema/PColumnImpl.java  |   6 +-
 .../java/org/apache/phoenix/schema/PTable.java  | 176 +-
 .../org/apache/phoenix/schema/PTableImpl.java   |  56 ++--
 .../apache/phoenix/util/EncodedColumnsUtil.java |   1 -
 .../java/org/apache/phoenix/util/ScanUtil.java  |   2 +-
 .../org/apache/phoenix/util/SchemaUtil.java |   4 +-
 .../phoenix/execute/CorrelatePlanTest.java  |   3 +-
 .../execute/LiteralResultIteratorPlanTest.java  |   3 +-
 .../apache/phoenix/util/PhoenixRuntimeTest.java |  12 +-
 phoenix-protocol/src/main/PTable.proto  |   5 +-
 22 files changed, 496 insertions(+), 175 deletions(-)
--
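This commit separates how cells are physically laid out (the storage scheme) from how column qualifiers are named (the qualifier encoding scheme), and threads a per-table qualifier counter through table metadata. A toy model of that separation follows; the enum value `NON_ENCODED_COLUMN_NAMES` and the `EncodedCQCounter` name mirror identifiers visible in the diff, but the class shapes here are invented for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model: a table records its storage scheme, its qualifier encoding scheme,
// and a counter that hands out the next encoded qualifier per column family.
public class SchemeSketch {
    // Hypothetical values; only NON_ENCODED_COLUMN_NAMES appears in the diff.
    enum StorageScheme { ENCODED_COLUMN_NAMES, NON_ENCODED_COLUMN_NAMES }
    enum QualifierEncodingScheme { NON_ENCODED_QUALIFIERS, FOUR_BYTE_QUALIFIERS }

    // Analogous in spirit to PTable.EncodedCQCounter: tracks, per column
    // family, the next free encoded qualifier to assign to a new column.
    static class EncodedCQCounter {
        private final Map<String, Integer> next = new HashMap<>();
        int assign(String family) {
            int q = next.getOrDefault(family, 0);
            next.put(family, q + 1);
            return q;
        }
    }

    public static void main(String[] args) {
        EncodedCQCounter counter = new EncodedCQCounter();
        System.out.println(counter.assign("CF1")); // first qualifier for CF1
        System.out.println(counter.assign("CF1")); // next qualifier for CF1
        System.out.println(counter.assign("CF2")); // counters are per family
    }
}
```

Keeping the two schemes as separate table properties, as the `FromCompiler` hunk below shows with its added `QualifierEncodingScheme.NON_ENCODED_QUALIFIERS` argument, lets a table opt into encoded qualifiers independently of its cell layout.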


http://git-wip-us.apache.org/repos/asf/phoenix/blob/8a1de1cb/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
index f5df980..d4ccfff 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
@@ -71,6 +71,7 @@ import org.apache.phoenix.schema.PName;
 import org.apache.phoenix.schema.PNameFactory;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTable.IndexType;
+import org.apache.phoenix.schema.PTable.QualifierEncodingScheme;
 import org.apache.phoenix.schema.PTable.StorageScheme;
 import org.apache.phoenix.schema.PTableImpl;
 import org.apache.phoenix.schema.PTableKey;
@@ -787,7 +788,7 @@ public class FromCompiler {
 MetaDataProtocol.MIN_TABLE_TIMESTAMP, PTable.INITIAL_SEQ_NUM, null, null, columns, null, null,
 Collections. emptyList(), false, Collections. emptyList(), null, null, false, false,
 false, null, null, null, false, false, 0, 0L, SchemaUtil
-.isNamespaceMappingEnabled(PTableType.SUBQUERY, connection.getQueryServices().getProps()), null, false, StorageScheme.NON_ENCODED_COLUMN_NAMES, PTable.EncodedCQCounter.NULL_COUNTER);
+.isNamespaceMappingEnabled(PTableType.SUBQUERY, connection.getQueryServices().getProps()), null, false, StorageScheme.NON_ENCODED_COLUMN_NAMES, QualifierEncodingScheme.NON_ENCODED_QUALIFIERS, PTable.EncodedCQCounter.NULL_COUNTER);
 
 String alias = subselectNode.getAlias();
 TableRef tableRef = new TableRef(alias, t, MetaDataProtocol.MIN_TABLE_TIMESTAMP, false);

http://git-wip-us.apache.org/repos/asf/phoenix/blob/8a1de1cb/phoenix-core/src/main/java/org/apache/phoenix/compile/JoinCompiler.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/JoinCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/JoinCompiler.java
index 489b993..bc2c7df 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/JoinCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/JoinCompiler.java
@@ -76,6 +76,7 @@ import org.apache.phoenix.schema.PName;
 import org.apache.phoenix.schema.PNameFactory;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTable.IndexType;
+import org.apache.phoenix.schema.PTable.QualifierEncodingScheme;
 import org.apache.phoenix.schema.PTable.StorageScheme;
 import org.apache.phoenix.schema.PTableImpl;
 import 

[16/42] phoenix git commit: PHOENIX-3516 Performance Issues with queries that have compound filters and specify phoenix.query.force.rowkeyorder=true

2016-12-22 Thread tdsilva
PHOENIX-3516 Performance Issues with queries that have compound filters and 
specify phoenix.query.force.rowkeyorder=true


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/54b7c218
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/54b7c218
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/54b7c218

Branch: refs/heads/encodecolumns2
Commit: 54b7c218d01ee26e5cbf16230ea2fc85d6ffa57f
Parents: 5706f51
Author: Thomas D'Silva 
Authored: Tue Dec 20 17:56:37 2016 -0800
Committer: Thomas D'Silva 
Committed: Thu Dec 22 10:50:43 2016 -0800

--
 .../end2end/TenantSpecificViewIndexIT.java  | 47 
 .../apache/phoenix/compile/WhereCompiler.java   |  3 +-
 .../org/apache/phoenix/util/ExpressionUtil.java | 10 +
 .../phoenix/query/KeyRangeIntersectTest.java|  9 +++-
 4 files changed, 67 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/54b7c218/phoenix-core/src/it/java/org/apache/phoenix/end2end/TenantSpecificViewIndexIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TenantSpecificViewIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TenantSpecificViewIndexIT.java
index 3519cf7..cc2e46a 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TenantSpecificViewIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TenantSpecificViewIndexIT.java
@@ -19,6 +19,7 @@ package org.apache.phoenix.end2end;
 
 import static org.apache.phoenix.util.MetaDataUtil.getViewIndexSequenceName;
 import static org.apache.phoenix.util.MetaDataUtil.getViewIndexSequenceSchemaName;
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
@@ -28,6 +29,7 @@ import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
+import java.sql.SQLException;
 import java.util.Properties;
 
 import org.apache.hadoop.hbase.util.Bytes;
@@ -284,4 +286,49 @@ public class TenantSpecificViewIndexIT extends BaseTenantSpecificViewIndexIT {
 assertEquals("value1", rs.getString(1));
 assertFalse("No other rows should have been returned for the tenant", rs.next()); // should have just returned one record since for org1 we have only one row.
 }
+
+@Test
+public void testOverlappingDatesFilter() throws SQLException {
+String tenantUrl = getUrl() + ';' + TENANT_ID_ATTRIB + "=tenant1" + ";" + QueryServices.FORCE_ROW_KEY_ORDER_ATTRIB + "=true";
+String tableName = generateUniqueName();
+String viewName = generateUniqueName();
+String ddl = "CREATE TABLE " + tableName
++ "(ORGANIZATION_ID CHAR(15) NOT NULL, "
++ "PARENT_TYPE CHAR(3) NOT NULL, "
++ "PARENT_ID CHAR(15) NOT NULL,"
++ "CREATED_DATE DATE NOT NULL "
++ "CONSTRAINT PK PRIMARY KEY (ORGANIZATION_ID, PARENT_TYPE, PARENT_ID, CREATED_DATE DESC)"
++ ") VERSIONS=1,MULTI_TENANT=true,REPLICATION_SCOPE=1"; 
+
+try (Connection conn = DriverManager.getConnection(getUrl());
+Connection viewConn = DriverManager.getConnection(tenantUrl) ) {
+// create table
+conn.createStatement().execute(ddl);
+// create index
+conn.createStatement().execute("CREATE INDEX IF NOT EXISTS IDX ON " + tableName + "(PARENT_TYPE, CREATED_DATE, PARENT_ID)");
+// create view
+viewConn.createStatement().execute("CREATE VIEW IF NOT EXISTS " + viewName + " AS SELECT * FROM "+ tableName );
+
+String query ="EXPLAIN SELECT PARENT_ID FROM " + viewName
++ " WHERE PARENT_TYPE='001' "
++ "AND (CREATED_DATE > to_date('2011-01-01') AND CREATED_DATE < to_date('2016-10-31'))"
++ "ORDER BY PARENT_TYPE,CREATED_DATE LIMIT 501";
+
+ResultSet rs = viewConn.createStatement().executeQuery(query);
+String expectedPlanFormat = "CLIENT SERIAL 1-WAY RANGE SCAN OVER IDX ['tenant1','001','%s 00:00:00.001'] - ['tenant1','001','%s 00:00:00.000']" + "\n" +
+"SERVER FILTER BY FIRST KEY ONLY" + "\n" +
+"SERVER 501 ROW LIMIT" + "\n" +
+"CLIENT 501 ROW LIMIT";
+assertEquals(String.format(expectedPlanFormat, "2011-01-01", "2016-10-31"), QueryUtil.getExplainPlan(rs));
+
+query 

[28/42] phoenix git commit: Fix test failure

2016-12-22 Thread tdsilva
Fix test failure


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/9eb690f6
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/9eb690f6
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/9eb690f6

Branch: refs/heads/encodecolumns2
Commit: 9eb690f6bcdc4f3810b59d9d666a972cc0040e9e
Parents: 2881091
Author: Samarth 
Authored: Tue Nov 8 12:26:45 2016 -0800
Committer: Thomas D'Silva 
Committed: Thu Dec 22 13:00:44 2016 -0800

--
 .../phoenix/compile/CreateTableCompiler.java|  2 +-
 .../expression/ArrayColumnExpression.java   | 27 +++-
 .../expression/KeyValueColumnExpression.java|  2 +-
 3 files changed, 17 insertions(+), 14 deletions(-)
--
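The renames in this commit (`encodedCQ` to `positionInArray`) suggest that, under the array storage scheme, a column's encoded qualifier doubles as its element position inside a single array-valued cell, which `positionAtArrayElement` then seeks to without deserializing the whole array. A self-contained sketch of that idea using a simple length-prefixed layout (an assumption for illustration; Phoenix's `PArrayDataType` format is more involved):

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Toy layout: each element is written as [4-byte length][bytes]. To read element i,
// skip i length-prefixed entries instead of materializing the whole array.
public class ArrayCellSketch {
    static byte[] pack(byte[][] elements) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        for (byte[] e : elements) {
            bos.write(ByteBuffer.allocate(4).putInt(e.length).array(), 0, 4);
            bos.write(e, 0, e.length);
        }
        return bos.toByteArray();
    }

    // Position at element `positionInArray` within the packed cell and copy it out.
    static byte[] elementAt(byte[] packed, int positionInArray) {
        ByteBuffer buf = ByteBuffer.wrap(packed);
        for (int i = 0; i < positionInArray; i++) {
            buf.position(buf.position() + buf.getInt()); // skip one entry
        }
        byte[] out = new byte[buf.getInt()];
        buf.get(out);
        return out;
    }

    public static void main(String[] args) {
        byte[][] cols = {
            "v0".getBytes(StandardCharsets.UTF_8),
            "value1".getBytes(StandardCharsets.UTF_8),
            "v2".getBytes(StandardCharsets.UTF_8)
        };
        byte[] cell = pack(cols);
        System.out.println(new String(elementAt(cell, 1), StandardCharsets.UTF_8)); // value1
    }
}
```

Storing many columns in one cell this way trades per-column cells for a single large value, so addressing a column by position rather than by qualifier bytes becomes the natural lookup.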


http://git-wip-us.apache.org/repos/asf/phoenix/blob/9eb690f6/phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java
index c986c28..fae53e2 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java
@@ -329,7 +329,7 @@ public class CreateTableCompiler {
 @Override
 public Boolean visit(ArrayColumnExpression node) {
 try {
-this.position = table.getColumnFamily(node.getColumnFamily()).getPColumnForColumnQualifier(node.getEncodedColumnQualifier()).getPosition();
+this.position = table.getColumnFamily(node.getColumnFamily()).getPColumnForColumnQualifier(node.getPositionInArray()).getPosition();
 } catch (SQLException e) {
 throw new RuntimeException(e);
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/9eb690f6/phoenix-core/src/main/java/org/apache/phoenix/expression/ArrayColumnExpression.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/expression/ArrayColumnExpression.java b/phoenix-core/src/main/java/org/apache/phoenix/expression/ArrayColumnExpression.java
index f4616da..0b5e5d7 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/expression/ArrayColumnExpression.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/expression/ArrayColumnExpression.java
@@ -43,21 +43,21 @@ import org.apache.phoenix.util.SchemaUtil;
  */
 public class ArrayColumnExpression extends KeyValueColumnExpression {
 
-private int encodedCQ;
-private String displayName;
+private int positionInArray;
+private String arrayColDisplayName;
 
 public ArrayColumnExpression() {
 }
 
 public ArrayColumnExpression(PDatum column, byte[] cf, int encodedCQ) {
 super(column, cf, cf);
-this.encodedCQ = encodedCQ;
+this.positionInArray = encodedCQ;
 }
 
 public ArrayColumnExpression(PColumn column, String displayName, boolean encodedColumnName) {
 super(column, column.getFamilyName().getBytes(), column.getFamilyName().getBytes());
-this.displayName = SchemaUtil.getColumnDisplayName(column.getFamilyName().getString(), column.getName().getString());
-this.encodedCQ = column.getEncodedColumnQualifier();
+this.arrayColDisplayName = displayName;
+this.positionInArray = column.getEncodedColumnQualifier();
 }
 
 @Override
@@ -70,20 +70,20 @@ public class ArrayColumnExpression extends KeyValueColumnExpression {
 
 // Given a ptr to the entire array, set ptr to point to a particular element within that array
 // given the type of an array element (see comments in PDataTypeForArray)
-   PArrayDataType.positionAtArrayElement(ptr, encodedCQ, PVarbinary.INSTANCE, null);
+   PArrayDataType.positionAtArrayElement(ptr, positionInArray, PVarbinary.INSTANCE, null);
 return true;
 }
 
 @Override
 public void readFields(DataInput input) throws IOException {
 super.readFields(input);
-encodedCQ = WritableUtils.readVInt(input);
+positionInArray = WritableUtils.readVInt(input);
 }
 
 @Override
 public void write(DataOutput output) throws IOException {
 super.write(output);
-WritableUtils.writeVInt(output, encodedCQ);
+WritableUtils.writeVInt(output, positionInArray);
 }
 
 public KeyValueColumnExpression getKeyValueExpression() {
@@ -118,16 +118,19 @@ public class ArrayColumnExpression extends KeyValueColumnExpression {
public PDataType getDataType() {
   

[21/42] phoenix git commit: Fail-fast iterators for EncodedColumnQualifierCellsList. Use list iterators instead of get(index) for navigating lists. Use HBase bytes utility for encoded column names. Fi

2016-12-22 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/670b53a9/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/CloneExpressionVisitor.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/CloneExpressionVisitor.java b/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/CloneExpressionVisitor.java
index 00ece40..15a9f74 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/CloneExpressionVisitor.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/CloneExpressionVisitor.java
@@ -26,6 +26,7 @@ import org.apache.phoenix.expression.ArrayConstructorExpression;
 import org.apache.phoenix.expression.CaseExpression;
 import org.apache.phoenix.expression.CoerceExpression;
 import org.apache.phoenix.expression.ComparisonExpression;
+import org.apache.phoenix.expression.ArrayColumnExpression;
 import org.apache.phoenix.expression.CorrelateVariableFieldAccessExpression;
 import org.apache.phoenix.expression.DivideExpression;
 import org.apache.phoenix.expression.Expression;
@@ -80,6 +81,11 @@ public abstract class CloneExpressionVisitor extends TraverseAllExpressionVisito
 public Expression visit(KeyValueColumnExpression node) {
 return node;
 }
+
+@Override
+public Expression visit(ArrayColumnExpression node) {
+return node;
+}
 
 @Override
 public Expression visit(ProjectedColumnExpression node) {

http://git-wip-us.apache.org/repos/asf/phoenix/blob/670b53a9/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/ExpressionVisitor.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/ExpressionVisitor.java b/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/ExpressionVisitor.java
index 31f340d..100f099 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/ExpressionVisitor.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/ExpressionVisitor.java
@@ -27,6 +27,7 @@ import org.apache.phoenix.expression.ArrayConstructorExpression;
 import org.apache.phoenix.expression.CaseExpression;
 import org.apache.phoenix.expression.CoerceExpression;
 import org.apache.phoenix.expression.ComparisonExpression;
+import org.apache.phoenix.expression.ArrayColumnExpression;
 import org.apache.phoenix.expression.CorrelateVariableFieldAccessExpression;
 import org.apache.phoenix.expression.DivideExpression;
 import org.apache.phoenix.expression.Expression;
@@ -113,6 +114,7 @@ public interface ExpressionVisitor {
 public E visit(LiteralExpression node);
 public E visit(RowKeyColumnExpression node);
 public E visit(KeyValueColumnExpression node);
+public E visit(ArrayColumnExpression node);
 public E visit(ProjectedColumnExpression node);
 public E visit(SequenceValueExpression node);
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/670b53a9/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/StatelessTraverseAllExpressionVisitor.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/StatelessTraverseAllExpressionVisitor.java b/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/StatelessTraverseAllExpressionVisitor.java
index 3b7067a..9e50bc4 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/StatelessTraverseAllExpressionVisitor.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/StatelessTraverseAllExpressionVisitor.java
@@ -26,9 +26,9 @@ import org.apache.phoenix.expression.ArrayConstructorExpression;
 import org.apache.phoenix.expression.CaseExpression;
 import org.apache.phoenix.expression.CoerceExpression;
 import org.apache.phoenix.expression.ComparisonExpression;
+import org.apache.phoenix.expression.ArrayColumnExpression;
 import org.apache.phoenix.expression.CorrelateVariableFieldAccessExpression;
 import org.apache.phoenix.expression.DivideExpression;
-import org.apache.phoenix.expression.Expression;
 import org.apache.phoenix.expression.InListExpression;
 import org.apache.phoenix.expression.IsNullExpression;
 import org.apache.phoenix.expression.KeyValueColumnExpression;
@@ -121,6 +121,11 @@ public class StatelessTraverseAllExpressionVisitor extends TraverseAllExpress
 }
 
 @Override
+public E visit(ArrayColumnExpression node) {
+return null;
+}
+
+@Override
 public E visit(ProjectedColumnExpression node) {
 return null;
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/670b53a9/phoenix-core/src/main/java/org/apache/phoenix/expression/visitor/StatelessTraverseNoExpressionVisitor.java
--
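Each visitor above gains a `visit(ArrayColumnExpression)` overload because, with a classic visitor interface, introducing a new expression node type forces a new method on every visitor implementation. A minimal sketch of the pattern with hypothetical node names (not Phoenix's actual classes):

```java
// Minimal expression-visitor sketch: every concrete node dispatches to its own
// visit(...) overload, so a new node type forces a new method on the interface
// and on every implementation of it.
public class VisitorSketch {
    interface Expression { <E> E accept(ExpressionVisitor<E> v); }

    interface ExpressionVisitor<E> {
        E visit(KeyValueColumn node);
        E visit(ArrayColumn node); // the overload each visitor in the diff had to add
    }

    static class KeyValueColumn implements Expression {
        public <E> E accept(ExpressionVisitor<E> v) { return v.visit(this); }
    }

    static class ArrayColumn implements Expression {
        public <E> E accept(ExpressionVisitor<E> v) { return v.visit(this); }
    }

    // Analogous to CloneExpressionVisitor: leaf column nodes "clone" to themselves.
    static class CloneVisitor implements ExpressionVisitor<Expression> {
        public Expression visit(KeyValueColumn node) { return node; }
        public Expression visit(ArrayColumn node) { return node; }
    }

    public static void main(String[] args) {
        Expression e = new ArrayColumn();
        System.out.println(e.accept(new CloneVisitor()) == e); // identity-preserving clone
    }
}
```

The trade-off is deliberate: adding a node type touches many files (as this commit does), but adding a new operation over the tree needs only one new visitor class.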

[18/42] phoenix git commit: Fail-fast iterators for EncodedColumnQualifierCellsList. Use list iterators instead of get(index) for navigating lists. Use HBase bytes utility for encoded column names. Fi

2016-12-22 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/670b53a9/phoenix-core/src/main/java/org/apache/phoenix/schema/tuple/ResultTuple.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/tuple/ResultTuple.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/tuple/ResultTuple.java
index c28a2bf..3774837 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/tuple/ResultTuple.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/tuple/ResultTuple.java
@@ -17,6 +17,8 @@
  */
 package org.apache.phoenix.schema.tuple;
 
+import java.util.Collections;
+
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.KeyValue;
 import org.apache.hadoop.hbase.client.Result;
@@ -25,25 +27,23 @@ import org.apache.hadoop.hbase.util.Bytes;
 import org.apache.phoenix.hbase.index.util.GenericKeyValueBuilder;
 import org.apache.phoenix.util.KeyValueUtil;
 
-
+/**
+ * 
+ * Wrapper around {@link Result} that implements Phoenix's {@link Tuple} interface.
+ *
+ */
 public class ResultTuple extends BaseTuple {
-private Result result;
+private final Result result;
+public static final ResultTuple EMPTY_TUPLE = new ResultTuple(Result.create(Collections.emptyList()));
 
 public ResultTuple(Result result) {
 this.result = result;
 }
 
-public ResultTuple() {
-}
-
 public Result getResult() {
 return this.result;
 }
 
-public void setResult(Result result) {
-this.result = result;
-}
-
 @Override
 public void getKey(ImmutableBytesWritable ptr) {
 ptr.set(result.getRow());
@@ -104,4 +104,4 @@ public class ResultTuple extends BaseTuple {
 ptr.set(kv.getValueArray(), kv.getValueOffset(), kv.getValueLength());
 return true;
 }
-}
+}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/phoenix/blob/670b53a9/phoenix-core/src/main/java/org/apache/phoenix/schema/tuple/Tuple.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/tuple/Tuple.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/tuple/Tuple.java
index 61b2a4f..e4a887b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/tuple/Tuple.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/tuple/Tuple.java
@@ -17,6 +17,8 @@
  */
 package org.apache.phoenix.schema.tuple;
 
+import java.util.List;
+
 import org.apache.hadoop.hbase.Cell;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 
@@ -87,4 +89,6 @@ public interface Tuple {
  * @return the current or next sequence value
  */
 public long getSequenceValue(int index);
+
+public void setKeyValues(List values);
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/670b53a9/phoenix-core/src/main/java/org/apache/phoenix/util/EncodedColumnsUtil.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/util/EncodedColumnsUtil.java b/phoenix-core/src/main/java/org/apache/phoenix/util/EncodedColumnsUtil.java
new file mode 100644
index 000..aeb4e46
--- /dev/null
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/EncodedColumnsUtil.java
@@ -0,0 +1,108 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.util;
+
+import static com.google.common.base.Preconditions.checkArgument;
+
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.PColumn;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTable.StorageScheme;
+import org.apache.phoenix.schema.types.PInteger;
+
+public class EncodedColumnsUtil {
+
+public static boolean usesEncodedColumnNames(PTable table) {
+return usesEncodedColumnNames(table.getStorageScheme());
+}
+
+public static boolean usesEncodedColumnNames(StorageScheme storageScheme) {
+return storageScheme != null && storageScheme != 

[15/42] phoenix git commit: PHOENIX-3540 Fix Time data type in Phoenix Spark integration

2016-12-22 Thread tdsilva
PHOENIX-3540 Fix Time data type in Phoenix Spark integration


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/5706f514
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/5706f514
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/5706f514

Branch: refs/heads/encodecolumns2
Commit: 5706f514416079e9afda7c59474492a74c704473
Parents: f11d501
Author: Ankit Singhal 
Authored: Thu Dec 22 13:18:35 2016 +0530
Committer: Ankit Singhal 
Committed: Thu Dec 22 13:18:35 2016 +0530

--
 phoenix-spark/src/it/resources/globalSetup.sql   |  2 ++
 .../org/apache/phoenix/spark/PhoenixSparkIT.scala| 12 
 .../scala/org/apache/phoenix/spark/PhoenixRDD.scala  | 15 ---
 3 files changed, 22 insertions(+), 7 deletions(-)
--
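The `PhoenixRDD` change in this commit adds a branch for JDBC type 92 (`java.sql.Types.TIME`), widening `java.sql.Time` to `java.sql.Timestamp` exactly as the existing branch does for `Date` (type 91). The conversion on its own, in plain Java (a standalone restatement of the patch's logic, not the Scala code itself):

```java
import java.sql.Time;
import java.sql.Timestamp;
import java.sql.Types;

// Mirrors the patch's idea: both java.sql.Date (Types.DATE == 91) and
// java.sql.Time (Types.TIME == 92) carry an epoch-millis value, so either
// can be widened to a Timestamp via getTime(). Other types pass through.
public class TimeMappingSketch {
    static Object widen(Object res, int sqlType) {
        if (sqlType == Types.DATE) {
            return new Timestamp(((java.sql.Date) res).getTime());
        } else if (sqlType == Types.TIME) {
            return new Timestamp(((Time) res).getTime());
        }
        return res; // all other types unchanged
    }

    public static void main(String[] args) {
        Time t = new Time(1234567L);
        Timestamp ts = (Timestamp) widen(t, Types.TIME);
        System.out.println(ts.getTime()); // same millis as the original Time
    }
}
```

Spark SQL has no dedicated time-of-day type, which is presumably why both Date and Time columns end up surfaced as Timestamps here.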


http://git-wip-us.apache.org/repos/asf/phoenix/blob/5706f514/phoenix-spark/src/it/resources/globalSetup.sql
--
diff --git a/phoenix-spark/src/it/resources/globalSetup.sql b/phoenix-spark/src/it/resources/globalSetup.sql
index 852687e..72f8620 100644
--- a/phoenix-spark/src/it/resources/globalSetup.sql
+++ b/phoenix-spark/src/it/resources/globalSetup.sql
@@ -48,6 +48,8 @@ CREATE TABLE TEST_SMALL_TINY (ID BIGINT NOT NULL PRIMARY KEY, COL1 SMALLINT, COL
 UPSERT INTO TEST_SMALL_TINY VALUES (1, 32767, 127)
 CREATE TABLE DATE_TEST(ID BIGINT NOT NULL PRIMARY KEY, COL1 DATE)
 UPSERT INTO DATE_TEST VALUES(1, CURRENT_DATE())
+CREATE TABLE TIME_TEST(ID BIGINT NOT NULL PRIMARY KEY, COL1 TIME)
+UPSERT INTO TIME_TEST VALUES(1, CURRENT_TIME())
 CREATE TABLE "space" ("key" VARCHAR PRIMARY KEY, "first name" VARCHAR)
 UPSERT INTO "space" VALUES ('key1', 'xyz')
 CREATE TABLE "small" ("key" VARCHAR PRIMARY KEY, "first name" VARCHAR, "salary" INTEGER )

http://git-wip-us.apache.org/repos/asf/phoenix/blob/5706f514/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala
--
diff --git a/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala b/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala
index 90822d4..fc8483d 100644
--- a/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala
+++ b/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala
@@ -620,4 +620,16 @@ class PhoenixSparkIT extends AbstractPhoenixSparkIT {
 
 assert(Math.abs(epoch - ts) < 30)
   }
+
+  test("Can load Phoenix Time columns through DataFrame API") {
+val sqlContext = new SQLContext(sc)
+val df = sqlContext.read
+  .format("org.apache.phoenix.spark")
+  .options(Map("table" -> "TIME_TEST", "zkUrl" -> quorumAddress))
+  .load
+val time = df.select("COL1").first().getTimestamp(0).getTime
+val epoch = new Date().getTime
+assert(Math.abs(epoch - time) < 8640)
+  }
+
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/5706f514/phoenix-spark/src/main/scala/org/apache/phoenix/spark/PhoenixRDD.scala
--
diff --git a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/PhoenixRDD.scala b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/PhoenixRDD.scala
index 505de1b..204a7ef 100644
--- a/phoenix-spark/src/main/scala/org/apache/phoenix/spark/PhoenixRDD.scala
+++ b/phoenix-spark/src/main/scala/org/apache/phoenix/spark/PhoenixRDD.scala
@@ -130,13 +130,14 @@ class PhoenixRDD(sc: SparkContext, table: String, columns: Seq[String],
   val rowSeq = columns.map { case (name, sqlType) =>
 val res = pr.resultMap(name)
 
-// Special handling for data types
-if(dateAsTimestamp && sqlType == 91) { // 91 is the defined type for Date
-  new java.sql.Timestamp(res.asInstanceOf[java.sql.Date].getTime)
-}
-else {
-  res
-}
+  // Special handling for data types
+  if (dateAsTimestamp && sqlType == 91) { // 91 is the defined type for Date
+new java.sql.Timestamp(res.asInstanceOf[java.sql.Date].getTime)
+  } else if (sqlType == 92) { // 92 is the defined type for Time
+new java.sql.Timestamp(res.asInstanceOf[java.sql.Time].getTime)
+  } else {
+res
+  }
   }
 
   // Create a Spark Row from the sequence



[02/42] phoenix git commit: Set version to 4.10.0-HBase-0.98-SNAPSHOT after release

2016-12-22 Thread tdsilva
Set version to 4.10.0-HBase-0.98-SNAPSHOT after release


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/af202f2e
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/af202f2e
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/af202f2e

Branch: refs/heads/encodecolumns2
Commit: af202f2efe001a6167f85692cfd865e69064d083
Parents: f18e780
Author: Mujtaba 
Authored: Mon Nov 28 15:56:17 2016 -0800
Committer: Mujtaba 
Committed: Mon Nov 28 15:56:17 2016 -0800

--
 phoenix-assembly/pom.xml   | 2 +-
 phoenix-client/pom.xml | 2 +-
 phoenix-core/pom.xml   | 2 +-
 phoenix-flume/pom.xml  | 2 +-
 phoenix-hive/pom.xml   | 2 +-
 phoenix-pherf/pom.xml  | 2 +-
 phoenix-pig/pom.xml| 2 +-
 phoenix-queryserver-client/pom.xml | 2 +-
 phoenix-queryserver/pom.xml| 2 +-
 phoenix-server/pom.xml | 2 +-
 phoenix-spark/pom.xml  | 2 +-
 phoenix-tracing-webapp/pom.xml | 2 +-
 pom.xml| 2 +-
 13 files changed, 13 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/af202f2e/phoenix-assembly/pom.xml
--
diff --git a/phoenix-assembly/pom.xml b/phoenix-assembly/pom.xml
index e8a1900..a221dca 100644
--- a/phoenix-assembly/pom.xml
+++ b/phoenix-assembly/pom.xml
@@ -27,7 +27,7 @@
   
 org.apache.phoenix
 phoenix
-5.0.0-HBase-0.98-SNAPSHOT
+4.10.0-HBase-0.98-SNAPSHOT
   
   phoenix-assembly
   Phoenix Assembly

http://git-wip-us.apache.org/repos/asf/phoenix/blob/af202f2e/phoenix-client/pom.xml
--
diff --git a/phoenix-client/pom.xml b/phoenix-client/pom.xml
index a21303c..f5ce71c 100644
--- a/phoenix-client/pom.xml
+++ b/phoenix-client/pom.xml
@@ -27,7 +27,7 @@
   
 org.apache.phoenix
 phoenix
-5.0.0-HBase-0.98-SNAPSHOT
+4.10.0-HBase-0.98-SNAPSHOT
   
   phoenix-client
   Phoenix Client

http://git-wip-us.apache.org/repos/asf/phoenix/blob/af202f2e/phoenix-core/pom.xml
--
diff --git a/phoenix-core/pom.xml b/phoenix-core/pom.xml
index 9396882..5984dec 100644
--- a/phoenix-core/pom.xml
+++ b/phoenix-core/pom.xml
@@ -4,7 +4,7 @@
   
 org.apache.phoenix
 phoenix
-5.0.0-HBase-0.98-SNAPSHOT
+4.10.0-HBase-0.98-SNAPSHOT
   
   phoenix-core
   Phoenix Core

http://git-wip-us.apache.org/repos/asf/phoenix/blob/af202f2e/phoenix-flume/pom.xml
--
diff --git a/phoenix-flume/pom.xml b/phoenix-flume/pom.xml
index b013518..0b21357 100644
--- a/phoenix-flume/pom.xml
+++ b/phoenix-flume/pom.xml
@@ -26,7 +26,7 @@
   
 org.apache.phoenix
 phoenix
-5.0.0-HBase-0.98-SNAPSHOT
+4.10.0-HBase-0.98-SNAPSHOT
   
   phoenix-flume
   Phoenix - Flume

http://git-wip-us.apache.org/repos/asf/phoenix/blob/af202f2e/phoenix-hive/pom.xml
--
diff --git a/phoenix-hive/pom.xml b/phoenix-hive/pom.xml
index 6aa2f61..c0cc6fd 100644
--- a/phoenix-hive/pom.xml
+++ b/phoenix-hive/pom.xml
@@ -27,7 +27,7 @@
   
 org.apache.phoenix
 phoenix
-5.0.0-HBase-0.98-SNAPSHOT
+4.10.0-HBase-0.98-SNAPSHOT
   
   phoenix-hive
   Phoenix - Hive

http://git-wip-us.apache.org/repos/asf/phoenix/blob/af202f2e/phoenix-pherf/pom.xml
--
diff --git a/phoenix-pherf/pom.xml b/phoenix-pherf/pom.xml
index 20fc1cc..d7b66bf 100644
--- a/phoenix-pherf/pom.xml
+++ b/phoenix-pherf/pom.xml
@@ -15,7 +15,7 @@

org.apache.phoenix
phoenix
-   5.0.0-HBase-0.98-SNAPSHOT
+   4.10.0-HBase-0.98-SNAPSHOT

 
phoenix-pherf

http://git-wip-us.apache.org/repos/asf/phoenix/blob/af202f2e/phoenix-pig/pom.xml
--
diff --git a/phoenix-pig/pom.xml b/phoenix-pig/pom.xml
index 1e7ca4e..945aa2d 100644
--- a/phoenix-pig/pom.xml
+++ b/phoenix-pig/pom.xml
@@ -26,7 +26,7 @@
   
 org.apache.phoenix
 phoenix
-5.0.0-HBase-0.98-SNAPSHOT
+4.10.0-HBase-0.98-SNAPSHOT
   
   phoenix-pig
   Phoenix - Pig

http://git-wip-us.apache.org/repos/asf/phoenix/blob/af202f2e/phoenix-queryserver-client/pom.xml
--
diff --git a/phoenix-queryserver-client/pom.xml b/phoenix-queryserver-client/pom.xml
index 88e6eff..e2621ef 100644
--- a/phoenix-queryserver-client/pom.xml
+++ b/phoenix-queryserver-client/pom.xml
@@ -27,7 +27,7 @@
   
  

[17/42] phoenix git commit: Fail-fast iterators for EncodedColumnQualifierCellsList. Use list iterators instead of get(index) for navigating lists. Use HBase bytes utility for encoded column names. Fi

2016-12-22 Thread tdsilva
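The subject line above names two techniques worth unpacking: fail-fast iterators, and preferring iterators over `get(index)` when walking lists. A small self-contained Java demonstration (not Phoenix code; class and method names are made up for illustration):

```java
import java.util.Arrays;
import java.util.ConcurrentModificationException;
import java.util.LinkedList;
import java.util.List;

public class FailFastIterationDemo {

    // An iterator walks a LinkedList's nodes once, O(n) in total;
    // repeated get(i) restarts from the head on every call, O(n^2) in
    // total. That is the motivation for replacing get(index) navigation
    // with list iterators.
    static long sum(List<Integer> list) {
        long total = 0;
        for (int v : list) { // for-each uses the list's iterator
            total += v;
        }
        return total;
    }

    // A fail-fast iterator notices structural modification mid-iteration
    // and throws ConcurrentModificationException rather than silently
    // returning stale or skipped elements.
    static boolean failsFast(List<Integer> list) {
        try {
            for (Integer ignored : list) {
                list.add(99); // structural modification during iteration
            }
        } catch (ConcurrentModificationException e) {
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(sum(new LinkedList<>(Arrays.asList(0, 1, 2, 3, 4)))); // prints "10"
        System.out.println(failsFast(new LinkedList<>(Arrays.asList(1, 2))));    // prints "true"
    }
}
```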
http://git-wip-us.apache.org/repos/asf/phoenix/blob/670b53a9/phoenix-core/src/test/java/org/apache/phoenix/util/TestUtil.java
--
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/util/TestUtil.java b/phoenix-core/src/test/java/org/apache/phoenix/util/TestUtil.java
index 5feedb1..5409554 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/util/TestUtil.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/util/TestUtil.java
@@ -304,11 +304,11 @@ public class TestUtil {
 }
 
 public static Expression constantComparison(CompareOp op, PColumn c, Object o) {
-return  new ComparisonExpression(Arrays.asList(new KeyValueColumnExpression(c), LiteralExpression.newConstant(o)), op);
+return  new ComparisonExpression(Arrays.asList(new KeyValueColumnExpression(c, true), LiteralExpression.newConstant(o)), op);
 }
 
 public static Expression kvColumn(PColumn c) {
-return new KeyValueColumnExpression(c);
+return new KeyValueColumnExpression(c, true);
 }
 
 public static Expression pkColumn(PColumn c, List columns) {
@@ -610,7 +610,7 @@ public class TestUtil {
 }
 
 public static void analyzeTable(Connection conn, String tableName) throws IOException, SQLException {
-   analyzeTable(conn, tableName, false);
+analyzeTable(conn, tableName, false);
 }
 
 public static void analyzeTable(Connection conn, String tableName, boolean transactional) throws IOException, SQLException {
@@ -652,17 +652,17 @@ public class TestUtil {
 Date date = new Date(DateUtil.parseDate("2015-01-01 00:00:00").getTime() + (i - 1) * MILLIS_IN_DAY);
 stmt.setDate(6, date);
 }
-   
+
 public static void validateRowKeyColumns(ResultSet rs, int i) throws SQLException {
-   assertTrue(rs.next());
-   assertEquals(rs.getString(1), "varchar" + String.valueOf(i));
-   assertEquals(rs.getString(2), "char" + String.valueOf(i));
-   assertEquals(rs.getInt(3), i);
-   assertEquals(rs.getInt(4), i);
-   assertEquals(rs.getBigDecimal(5), new BigDecimal(i*0.5d));
-   Date date = new Date(DateUtil.parseDate("2015-01-01 00:00:00").getTime() + (i - 1) * MILLIS_IN_DAY);
-   assertEquals(rs.getDate(6), date);
-   }
+assertTrue(rs.next());
+assertEquals(rs.getString(1), "varchar" + String.valueOf(i));
+assertEquals(rs.getString(2), "char" + String.valueOf(i));
+assertEquals(rs.getInt(3), i);
+assertEquals(rs.getInt(4), i);
+assertEquals(rs.getBigDecimal(5), new BigDecimal(i*0.5d));
+Date date = new Date(DateUtil.parseDate("2015-01-01 00:00:00").getTime() + (i - 1) * MILLIS_IN_DAY);
+assertEquals(rs.getDate(6), date);
+}
 
 public static String getTableName(Boolean mutable, Boolean transactional) {
 StringBuilder tableNameBuilder = new StringBuilder(DEFAULT_DATA_TABLE_NAME);
@@ -694,7 +694,7 @@ public class TestUtil {
 
 @Override
 public SortOrder getSortOrder() {
-   return SortOrder.getDefault();
+return SortOrder.getDefault();
 }
 
 @Override
@@ -720,11 +720,15 @@ public class TestUtil {
 public boolean isRowTimestamp() {
 return false;
 }
-   @Override
-   public boolean isDynamic() {
-   return false;
-   }
-})), null);
+@Override
+public boolean isDynamic() {
+return false;
+}
+@Override
+public Integer getEncodedColumnQualifier() {
+return null;
+}
+}, false)), null);
 aggregationManager.setAggregators(new ClientAggregators(Collections.singletonList(func), 1));
 ClientAggregators aggregators = aggregationManager.getAggregators();
 return aggregators;
@@ -821,4 +825,3 @@ public class TestUtil {
 }
 
 }
-

http://git-wip-us.apache.org/repos/asf/phoenix/blob/670b53a9/phoenix-protocol/src/main/PTable.proto
--
diff --git a/phoenix-protocol/src/main/PTable.proto b/phoenix-protocol/src/main/PTable.proto
index a16263f..d5df2f3 100644
--- a/phoenix-protocol/src/main/PTable.proto
+++ b/phoenix-protocol/src/main/PTable.proto
@@ -47,6 +47,7 @@ message PColumn {
   optional string expression = 12;
   optional bool isRowTimestamp = 13;
   optional bool isDynamic = 14;
+  optional int32 columnQualifier = 15;
 }
 
 message PTableStats {
@@ -95,4 +96,11 @@ message PTable {
   optional string autoParititonSeqName = 31;
   optional bool 

[06/42] phoenix git commit: PHOENIX-2084 Support loading JSON data using phoenix-flume (Kalyan Hadoop)

2016-12-22 Thread tdsilva
PHOENIX-2084 Support loading JSON data using phoenix-flume (Kalyan Hadoop)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/9b06c60e
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/9b06c60e
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/9b06c60e

Branch: refs/heads/encodecolumns2
Commit: 9b06c60e6f1a5010bbe8d123accf75f890283e6b
Parents: 8a231a9
Author: Josh Mahonin 
Authored: Thu Dec 15 14:38:15 2016 -0500
Committer: Josh Mahonin 
Committed: Thu Dec 15 14:39:58 2016 -0500

--
 phoenix-flume/pom.xml   |  12 +
 .../phoenix/flume/JsonEventSerializerIT.java| 541 +++
 .../apache/phoenix/flume/FlumeConstants.java|   7 +-
 .../flume/serializer/BaseEventSerializer.java   |   4 +-
 .../flume/serializer/EventSerializers.java  |   2 +-
 .../flume/serializer/JsonEventSerializer.java   | 226 
 6 files changed, 789 insertions(+), 3 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/9b06c60e/phoenix-flume/pom.xml
--
diff --git a/phoenix-flume/pom.xml b/phoenix-flume/pom.xml
index 0b21357..8046ff2 100644
--- a/phoenix-flume/pom.xml
+++ b/phoenix-flume/pom.xml
@@ -176,6 +176,18 @@
   test
 
 
+
+    <dependency>
+      <groupId>org.json</groupId>
+      <artifactId>json</artifactId>
+      <version>20160212</version>
+    </dependency>
+    <dependency>
+      <groupId>com.jayway.jsonpath</groupId>
+      <artifactId>json-path</artifactId>
+      <version>2.2.0</version>
+    </dependency>
+
 
 
   org.apache.flume

http://git-wip-us.apache.org/repos/asf/phoenix/blob/9b06c60e/phoenix-flume/src/it/java/org/apache/phoenix/flume/JsonEventSerializerIT.java
--
diff --git a/phoenix-flume/src/it/java/org/apache/phoenix/flume/JsonEventSerializerIT.java b/phoenix-flume/src/it/java/org/apache/phoenix/flume/JsonEventSerializerIT.java
new file mode 100644
index 000..0210bad
--- /dev/null
+++ b/phoenix-flume/src/it/java/org/apache/phoenix/flume/JsonEventSerializerIT.java
@@ -0,0 +1,541 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.flume;
+
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+
+import org.apache.flume.Channel;
+import org.apache.flume.Context;
+import org.apache.flume.Event;
+import org.apache.flume.EventDeliveryException;
+import org.apache.flume.Transaction;
+import org.apache.flume.channel.MemoryChannel;
+import org.apache.flume.conf.Configurables;
+import org.apache.flume.event.EventBuilder;
+import org.apache.flume.lifecycle.LifecycleState;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.end2end.BaseHBaseManagedTimeIT;
+import org.apache.phoenix.flume.serializer.EventSerializers;
+import org.apache.phoenix.flume.sink.PhoenixSink;
+import org.apache.phoenix.util.PropertiesUtil;
+import org.junit.Test;
+
+import com.google.common.base.Preconditions;
+import com.google.common.collect.Lists;
+import com.google.common.collect.Maps;
+
+public class JsonEventSerializerIT extends BaseHBaseManagedTimeIT {
+
+   private Context sinkContext;
+   private PhoenixSink sink;
+
+   @Test
+   public void testWithOutColumnsMapping() throws EventDeliveryException, SQLException {
+
+   final String fullTableName = "FLUME_JSON_TEST";
+
+   String ddl = "CREATE TABLE IF NOT EXISTS " + fullTableName
+   + "  (flume_time timestamp not null, col1 varchar , col2 double, col3 varchar[], col4 integer[]"
+   + "  CONSTRAINT pk PRIMARY KEY (flume_time))\n";
+   String columns = "col1,col2,col3,col4";
+   String rowkeyType = 

[36/42] phoenix git commit: PHOENIX-3295 Remove ReplaceArrayColumnWithKeyValueColumnExpressionVisitor

2016-12-22 Thread tdsilva
PHOENIX-3295 Remove ReplaceArrayColumnWithKeyValueColumnExpressionVisitor


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/aa7450fe
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/aa7450fe
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/aa7450fe

Branch: refs/heads/encodecolumns2
Commit: aa7450fe356c9ad63436fbb4a21e36f5682cd693
Parents: 37836a0
Author: Thomas D'Silva 
Authored: Tue Nov 22 12:32:34 2016 -0800
Committer: Thomas D'Silva 
Committed: Thu Dec 22 13:00:44 2016 -0800

--
 .../apache/phoenix/compile/WhereCompiler.java   |  3 ++-
 .../expression/ArrayColumnExpression.java   | 28 +---
 .../apache/phoenix/index/IndexMaintainer.java   | 11 
 .../apache/phoenix/query/QueryConstants.java|  7 +++--
 .../org/apache/phoenix/schema/PTableImpl.java   |  4 +--
 5 files changed, 32 insertions(+), 21 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/aa7450fe/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereCompiler.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereCompiler.java
index 6bb3563..598b433 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereCompiler.java
@@ -46,6 +46,7 @@ import org.apache.phoenix.parse.ParseNodeFactory;
 import org.apache.phoenix.parse.SelectStatement;
 import org.apache.phoenix.parse.StatelessTraverseAllParseNodeVisitor;
 import org.apache.phoenix.parse.SubqueryParseNode;
+import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.schema.AmbiguousColumnException;
 import org.apache.phoenix.schema.ColumnNotFoundException;
 import org.apache.phoenix.schema.ColumnRef;
@@ -174,7 +175,7 @@ public class WhereCompiler {
 Expression newColumnExpression = ref.newColumnExpression(node.isTableNameCaseSensitive(), node.isCaseSensitive());
 if (tableRef.equals(context.getCurrentTable()) && !SchemaUtil.isPKColumn(ref.getColumn())) {
 byte[] cq = tableRef.getTable().getStorageScheme() == StorageScheme.ONE_CELL_PER_COLUMN_FAMILY 
-   ? ref.getColumn().getFamilyName().getBytes() : EncodedColumnsUtil.getColumnQualifier(ref.getColumn(), tableRef.getTable());
+   ? QueryConstants.SINGLE_KEYVALUE_COLUMN_QUALIFIER_BYTES : EncodedColumnsUtil.getColumnQualifier(ref.getColumn(), tableRef.getTable());
 // track the where condition columns. Later we need to ensure the Scan in HRS scans these column CFs
 context.addWhereCoditionColumn(ref.getColumn().getFamilyName().getBytes(), cq);
 }

http://git-wip-us.apache.org/repos/asf/phoenix/blob/aa7450fe/phoenix-core/src/main/java/org/apache/phoenix/expression/ArrayColumnExpression.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/expression/ArrayColumnExpression.java b/phoenix-core/src/main/java/org/apache/phoenix/expression/ArrayColumnExpression.java
index 0b5e5d7..f09fb62 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/expression/ArrayColumnExpression.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/expression/ArrayColumnExpression.java
@@ -25,6 +25,7 @@ import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.io.WritableUtils;
 import 
org.apache.phoenix.compile.CreateTableCompiler.ViewWhereExpressionVisitor;
 import org.apache.phoenix.expression.visitor.ExpressionVisitor;
+import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.schema.PColumn;
 import org.apache.phoenix.schema.PDatum;
 import org.apache.phoenix.schema.SortOrder;
@@ -45,19 +46,22 @@ public class ArrayColumnExpression extends 
KeyValueColumnExpression {
 
 private int positionInArray;
 private String arrayColDisplayName;
+private KeyValueColumnExpression keyValueColumnExpression;
 
 public ArrayColumnExpression() {
 }
 
 public ArrayColumnExpression(PDatum column, byte[] cf, int encodedCQ) {
-super(column, cf, cf);
+super(column, cf, QueryConstants.SINGLE_KEYVALUE_COLUMN_QUALIFIER_BYTES);
 this.positionInArray = encodedCQ;
+setKeyValueExpression();
 }
 
 public ArrayColumnExpression(PColumn column, String displayName, boolean encodedColumnName) {
-super(column, column.getFamilyName().getBytes(), column.getFamilyName().getBytes());
+super(column, column.getFamilyName().getBytes(), 

[14/42] phoenix git commit: PHOENIX-3535 Addendum to fix test failures

2016-12-22 Thread tdsilva
PHOENIX-3535 Addendum to fix test failures


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/f11d501b
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/f11d501b
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/f11d501b

Branch: refs/heads/encodecolumns2
Commit: f11d501b29483a3e543768f3f5e267bf2dd413b5
Parents: 90d27be
Author: Samarth 
Authored: Wed Dec 21 17:44:49 2016 -0800
Committer: Samarth 
Committed: Wed Dec 21 17:44:49 2016 -0800

--
 .../java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/f11d501b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index 9d7a3d2..f1de0bd 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -2352,7 +2352,6 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
 hConnectionEstablished = true;
 boolean isDoNotUpgradePropSet = UpgradeUtil.isNoUpgradeSet(props);
 try (HBaseAdmin admin = getAdmin()) {
-createSysMutexTable(admin);
 boolean mappedSystemCatalogExists = admin
 .tableExists(SchemaUtil.getPhysicalTableName(SYSTEM_CATALOG_NAME_BYTES, true));
 if (SchemaUtil.isNamespaceMappingEnabled(PTableType.SYSTEM,
@@ -2370,6 +2369,7 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
 + " is found but client does not have "
 + IS_NAMESPACE_MAPPING_ENABLED + " enabled")
 .build().buildException(); }
+createSysMutexTable(admin);
 }
 Properties scnProps = PropertiesUtil.deepCopy(props);
 scnProps.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB,



[30/42] phoenix git commit: Make connectionless tests use encoded column names. Add test cases around dynamic columns and qualifier ranges

2016-12-22 Thread tdsilva
Make connectionless tests use encoded column names. Add test cases around dynamic columns and qualifier ranges


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/37836a01
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/37836a01
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/37836a01

Branch: refs/heads/encodecolumns2
Commit: 37836a01f612df885f8b5bfdf2ba7bbfa38d9150
Parents: 1bddaa0
Author: Samarth 
Authored: Wed Nov 23 16:09:58 2016 -0800
Committer: Thomas D'Silva 
Committed: Thu Dec 22 13:00:44 2016 -0800

--
 .../apache/phoenix/end2end/DynamicColumnIT.java | 63 
 .../apache/phoenix/schema/MetaDataClient.java   |  5 +-
 .../java/org/apache/phoenix/schema/PTable.java  |  1 -
 .../apache/phoenix/util/EncodedColumnsUtil.java |  3 +-
 .../phoenix/compile/QueryOptimizerTest.java | 51 
 .../phoenix/compile/WhereCompilerTest.java  |  4 +-
 .../phoenix/execute/MutationStateTest.java  |  4 +-
 .../phoenix/query/ConnectionlessTest.java   | 16 +++--
 8 files changed, 130 insertions(+), 17 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/37836a01/phoenix-core/src/it/java/org/apache/phoenix/end2end/DynamicColumnIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DynamicColumnIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DynamicColumnIT.java
index 25e7230..3f02113 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DynamicColumnIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DynamicColumnIT.java
@@ -213,5 +213,68 @@ public class DynamicColumnIT extends ParallelStatsDisabledIT {
 conn.close();
 }
 }
+
+@Test
+public void testDynamicColumnOnNewTable() throws Exception {
+String tableName = generateUniqueName();
+String ddl = "create table " + tableName + 
+"   (entry varchar not null," +
+"F varchar," +
+"A.F1v1 varchar," +
+"A.F1v2 varchar," +
+"B.F2v1 varchar " +
+"CONSTRAINT pk PRIMARY KEY (entry))";
+String dml = "UPSERT INTO " + tableName + " values (?, ?, ?, ?, ?)";
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+conn.createStatement().execute(ddl);
+try (PreparedStatement stmt = conn.prepareStatement(dml)) {
+stmt.setString(1, "entry");
+stmt.setString(2, "a");
+stmt.setString(3, "b");
+stmt.setString(4, "c");
+stmt.setString(5, "d");
+stmt.executeUpdate();
+conn.commit();
+}
+dml = "UPSERT INTO " + tableName + "(entry, F, A.F1V1, A.F1v2, B.F2V1, DYNCOL1 VARCHAR, DYNCOL2 VARCHAR) VALUES (?, ?, ?, ?, ?, ?, ?)";
+try (PreparedStatement stmt = conn.prepareStatement(dml)) {
+stmt.setString(1, "dynentry");
+stmt.setString(2, "a");
+stmt.setString(3, "b");
+stmt.setString(4, "c");
+stmt.setString(5, "d");
+stmt.setString(6, "e");
+stmt.setString(7, "f");
+stmt.executeUpdate();
+conn.commit();
+}
+
+// test dynamic column in where clause
+String query = "SELECT entry, F from " + tableName + " (DYNCOL1 VARCHAR, DYNCOL2 VARCHAR) " + " WHERE DYNCOL1 = ?";
+try (PreparedStatement stmt = conn.prepareStatement(query)) {
+stmt.setString(1, "e");
+ResultSet rs = stmt.executeQuery();
+assertTrue(rs.next());
+assertEquals("dynentry", rs.getString(1));
+assertEquals("a", rs.getString(2));
+assertFalse(rs.next());
+}
+
+// test dynamic column with projection
+query = "SELECT entry, dyncol1, dyncol2 from " + tableName + " (DYNCOL1 VARCHAR, DYNCOL2 VARCHAR) ";
+try (PreparedStatement stmt = conn.prepareStatement(query)) {
+ResultSet rs = stmt.executeQuery();
+assertTrue(rs.next());
+assertEquals("dynentry", rs.getString(1));
+assertEquals("e", rs.getString(2));
+assertEquals("f", rs.getString(3));
+assertTrue(rs.next());
+assertEquals("entry", rs.getString(1));
+assertEquals(null, rs.getString(2));
+assertEquals(null, rs.getString(3));
+assertFalse(rs.next());
+}
+}
+ 

[10/42] phoenix git commit: PHOENIX-3532 Pass tenantId parameter to PhoenixRDD when reading (Nico Pappagianis)

2016-12-22 Thread tdsilva
PHOENIX-3532 Pass tenantId parameter to PhoenixRDD when reading (Nico Pappagianis)


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/e2020209
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/e2020209
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/e2020209

Branch: refs/heads/encodecolumns2
Commit: e2020209316572b1575abda5057196e4678439ee
Parents: 42a84b3
Author: Josh Mahonin 
Authored: Mon Dec 19 11:15:15 2016 -0500
Committer: Josh Mahonin 
Committed: Mon Dec 19 11:16:15 2016 -0500

--
 phoenix-spark/src/it/resources/tenantSetup.sql  |  1 +
 .../phoenix/spark/AbstractPhoenixSparkIT.scala  |  5 -
 .../spark/PhoenixSparkITTenantSpecific.scala| 99 +---
 .../org/apache/phoenix/spark/PhoenixRDD.scala   | 16 +++-
 .../phoenix/spark/SparkContextFunctions.scala   |  4 +-
 .../spark/SparkSqlContextFunctions.scala|  6 +-
 6 files changed, 85 insertions(+), 46 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/e2020209/phoenix-spark/src/it/resources/tenantSetup.sql
--
diff --git a/phoenix-spark/src/it/resources/tenantSetup.sql b/phoenix-spark/src/it/resources/tenantSetup.sql
index 4a866dc..f62d843 100644
--- a/phoenix-spark/src/it/resources/tenantSetup.sql
+++ b/phoenix-spark/src/it/resources/tenantSetup.sql
@@ -15,3 +15,4 @@
 -- limitations under the License.
 
 CREATE VIEW IF NOT EXISTS TENANT_VIEW(TENANT_ONLY_COL VARCHAR) AS SELECT * 
FROM MULTITENANT_TEST_TABLE
+UPSERT INTO TENANT_VIEW (ORGANIZATION_ID, TENANT_ONLY_COL) VALUES ('defaultOrg', 'defaultData')
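The upsert above seeds default data into a tenant view. For context, Phoenix resolves tenant views against the tenant id carried on the JDBC connection; a hedged sketch of how such a connection is typically configured (`TenantId` is the Phoenix connection property; the class name and JDBC URL below are placeholders):

```java
import java.util.Properties;

public class TenantPropsSketch {

    // Phoenix scopes a connection to a single tenant via the "TenantId"
    // connection property; tenant views such as TENANT_VIEW are only
    // visible on a connection carrying that property.
    static Properties tenantProps(String tenantId) {
        Properties props = new Properties();
        props.setProperty("TenantId", tenantId);
        return props;
    }

    public static void main(String[] args) {
        Properties props = tenantProps("theTenant");
        System.out.println(props.getProperty("TenantId")); // prints "theTenant"
        // Against a live cluster one would then open (URL is a placeholder):
        // Connection conn = DriverManager.getConnection("jdbc:phoenix:zkhost:2181", props);
    }
}
```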

http://git-wip-us.apache.org/repos/asf/phoenix/blob/e2020209/phoenix-spark/src/it/scala/org/apache/phoenix/spark/AbstractPhoenixSparkIT.scala
--
diff --git a/phoenix-spark/src/it/scala/org/apache/phoenix/spark/AbstractPhoenixSparkIT.scala b/phoenix-spark/src/it/scala/org/apache/phoenix/spark/AbstractPhoenixSparkIT.scala
index f81438f..ecaedc7 100644
--- a/phoenix-spark/src/it/scala/org/apache/phoenix/spark/AbstractPhoenixSparkIT.scala
+++ b/phoenix-spark/src/it/scala/org/apache/phoenix/spark/AbstractPhoenixSparkIT.scala
@@ -46,11 +46,6 @@ class AbstractPhoenixSparkIT extends FunSuite with Matchers with BeforeAndAfterA
   // A global tenantId we can use across tests
   final val TenantId = "theTenant"
 
-  // TENANT_VIEW schema
-  val OrgId = "ORGANIZATION_ID"
-  val TenantCol = "TENANT_ONLY_COL"
-  val ViewName = "TENANT_VIEW"
-
   var conn: Connection = _
   var sc: SparkContext = _
 

http://git-wip-us.apache.org/repos/asf/phoenix/blob/e2020209/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkITTenantSpecific.scala
--
diff --git a/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkITTenantSpecific.scala b/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkITTenantSpecific.scala
index a1c1e22..77b41af 100644
--- a/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkITTenantSpecific.scala
+++ b/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkITTenantSpecific.scala
@@ -19,7 +19,10 @@ import org.apache.spark.sql.SQLContext
 import scala.collection.mutable.ListBuffer
 
 /**
-  * Sub-class of PhoenixSparkIT used for tenant-specific test
+  * Sub-class of PhoenixSparkIT used for tenant-specific tests
+  *
+  * Note: All schema related variables (table name, column names, default data, etc) are coupled with
+  * phoenix-spark/src/it/resources/tenantSetup.sql
   *
  * Note: If running directly from an IDE, these are the recommended VM parameters:
   * -Xmx1536m -XX:MaxPermSize=512m -XX:ReservedCodeCacheSize=512m
@@ -27,10 +30,24 @@ import scala.collection.mutable.ListBuffer
   */
 class PhoenixSparkITTenantSpecific extends AbstractPhoenixSparkIT {
 
-  val SelectStatement = "SELECT " + OrgId + "," + TenantCol + " FROM " + ViewName
-  val DataSet = List(("testOrg1", "data1"), ("testOrg2", "data2"), ("testOrg3", "data3"))
+  // Tenant-specific schema info
+  val OrgIdCol = "ORGANIZATION_ID"
+  val TenantOnlyCol = "TENANT_ONLY_COL"
+  val TenantTable = "TENANT_VIEW"
+
+  // Data set for tests that write to Phoenix
+  val TestDataSet = List(("testOrg1", "data1"), ("testOrg2", "data2"), ("testOrg3", "data3"))
 
+  /**
+* Helper method used by write tests to verify content written.
+* Assumes the caller has written the TestDataSet (defined above) to Phoenix
+* and that 1 row of default data exists (upserted after table creation in tenantSetup.sql)
+*/
   def verifyResults(): Unit = {
+// Contains the default data upserted into the tenant-specific table 

[42/42] phoenix git commit: Fix test failure

2016-12-22 Thread tdsilva
Fix test failure


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/11c0c3b7
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/11c0c3b7
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/11c0c3b7

Branch: refs/heads/encodecolumns2
Commit: 11c0c3b7b40f34f130554363a85ef0de7edd31a3
Parents: a41074a
Author: Thomas D'Silva 
Authored: Thu Dec 22 17:23:04 2016 -0800
Committer: Thomas D'Silva 
Committed: Thu Dec 22 17:23:04 2016 -0800

--
 .../org/apache/phoenix/schema/PTableImpl.java | 18 --
 1 file changed, 12 insertions(+), 6 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/11c0c3b7/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
index 7f11faf..8ae1988 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
@@ -64,6 +64,7 @@ import org.apache.phoenix.parse.ParseNode;
 import org.apache.phoenix.parse.SQLParser;
 import org.apache.phoenix.protobuf.ProtobufUtil;
 import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.schema.PTable.EncodedCQCounter;
 import org.apache.phoenix.schema.RowKeySchema.RowKeySchemaBuilder;
 import org.apache.phoenix.schema.tuple.Tuple;
 import org.apache.phoenix.schema.types.PArrayDataType;
@@ -1335,12 +1336,17 @@ public class PTableImpl implements PTable {
 if (table.hasEncodingScheme()) {
 qualifierEncodingScheme = QualifierEncodingScheme.fromSerializedValue(table.getEncodingScheme().toByteArray()[0]);
 }
-EncodedCQCounter encodedColumnQualifierCounter = EncodedColumnsUtil.usesEncodedColumnNames(qualifierEncodingScheme) ? new EncodedCQCounter() : EncodedCQCounter.NULL_COUNTER;
-if (table.getEncodedCQCountersList() != null) {
-encodedColumnQualifierCounter = new EncodedCQCounter();
-for (org.apache.phoenix.coprocessor.generated.PTableProtos.EncodedCQCounter cqCounterFromProto : table.getEncodedCQCountersList()) {
-encodedColumnQualifierCounter.setValue(cqCounterFromProto.getColFamily(), cqCounterFromProto.getCounter());
-}
+EncodedCQCounter encodedColumnQualifierCounter = null;
+if ((!EncodedColumnsUtil.usesEncodedColumnNames(qualifierEncodingScheme) || tableType == PTableType.VIEW)) {
+    encodedColumnQualifierCounter = PTable.EncodedCQCounter.NULL_COUNTER;
+}
+else {
+    encodedColumnQualifierCounter = new EncodedCQCounter();
+    if (table.getEncodedCQCountersList() != null) {
+        for (org.apache.phoenix.coprocessor.generated.PTableProtos.EncodedCQCounter cqCounterFromProto : table.getEncodedCQCountersList()) {
+            encodedColumnQualifierCounter.setValue(cqCounterFromProto.getColFamily(), cqCounterFromProto.getCounter());
+        }
+    }
 }
 
 try {



[41/42] phoenix git commit: Refactor code to store and use column qualifiers in SYSTEM.CATALOG

2016-12-22 Thread tdsilva
Refactor code to store and use column qualifiers in SYSTEM.CATALOG


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/a41074a9
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/a41074a9
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/a41074a9

Branch: refs/heads/encodecolumns2
Commit: a41074a9601901c80ac4c37a69ef606bffe51171
Parents: be6861c
Author: Samarth 
Authored: Wed Dec 21 13:15:02 2016 -0800
Committer: Thomas D'Silva 
Committed: Thu Dec 22 13:03:52 2016 -0800

--
 .../apache/phoenix/end2end/AlterTableIT.java|   7 +-
 .../apache/phoenix/end2end/StoreNullsIT.java|  10 +-
 .../phoenix/end2end/index/DropMetadataIT.java   |   4 +-
 .../apache/phoenix/compile/FromCompiler.java|  19 ++-
 .../apache/phoenix/compile/JoinCompiler.java|   9 +-
 .../phoenix/compile/ListJarsQueryPlan.java  |   6 +-
 .../apache/phoenix/compile/PostDDLCompiler.java |   2 +-
 .../phoenix/compile/ProjectionCompiler.java |  14 +-
 .../apache/phoenix/compile/TraceQueryPlan.java  |   4 +-
 .../compile/TupleProjectionCompiler.java|   8 +-
 .../apache/phoenix/compile/UnionCompiler.java   |   3 +-
 .../apache/phoenix/compile/WhereCompiler.java   |   2 +-
 .../coprocessor/BaseScannerRegionObserver.java  |   1 +
 .../GroupedAggregateRegionObserver.java |   6 +-
 .../coprocessor/MetaDataEndpointImpl.java   |  26 +--
 .../phoenix/coprocessor/ScanRegionObserver.java |  11 +-
 .../UngroupedAggregateRegionObserver.java   |   3 +-
 .../coprocessor/generated/PTableProtos.java | 157 ++-
 .../apache/phoenix/execute/BaseQueryPlan.java   |   2 +-
 .../expression/ArrayColumnExpression.java   |  30 +++-
 .../expression/KeyValueColumnExpression.java|  30 ++--
 .../apache/phoenix/index/IndexMaintainer.java   |  29 ++--
 .../phoenix/iterate/BaseResultIterators.java|  23 +--
 .../iterate/RegionScannerResultIterator.java|   7 +-
 .../phoenix/jdbc/PhoenixDatabaseMetaData.java   |   4 +-
 .../mapreduce/FormatToBytesWritableMapper.java  |   2 +-
 .../mapreduce/FormatToKeyValueReducer.java  |   2 +-
 .../apache/phoenix/query/QueryConstants.java|  18 ++-
 .../org/apache/phoenix/schema/ColumnRef.java|   6 +-
 .../apache/phoenix/schema/DelegateColumn.java   |   4 +-
 .../apache/phoenix/schema/MetaDataClient.java   |  34 ++--
 .../java/org/apache/phoenix/schema/PColumn.java |   2 +-
 .../phoenix/schema/PColumnFamilyImpl.java   |  41 +++--
 .../org/apache/phoenix/schema/PColumnImpl.java  |  28 ++--
 .../apache/phoenix/schema/PMetaDataImpl.java|   2 +-
 .../java/org/apache/phoenix/schema/PTable.java  | 135 +++-
 .../org/apache/phoenix/schema/PTableImpl.java   | 118 ++
 .../apache/phoenix/schema/ProjectedColumn.java  |  11 +-
 .../tuple/EncodedColumnQualiferCellsList.java   |  19 ++-
 .../tuple/PositionBasedMultiKeyValueTuple.java  |   3 +-
 .../schema/tuple/PositionBasedResultTuple.java  |   3 +-
 .../apache/phoenix/util/EncodedColumnsUtil.java |  77 +
 .../java/org/apache/phoenix/util/IndexUtil.java |   9 +-
 .../phoenix/compile/WhereCompilerTest.java  |   4 +-
 .../phoenix/execute/CorrelatePlanTest.java  |   3 +-
 .../execute/LiteralResultIteratorPlanTest.java  |   3 +-
 .../phoenix/execute/UnnestArrayPlanTest.java|   7 +-
 .../expression/ColumnExpressionTest.java|  35 +++--
 .../EncodedColumnQualifierCellsListTest.java|  98 ++--
 .../java/org/apache/phoenix/util/TestUtil.java  |  10 +-
 phoenix-protocol/src/main/PTable.proto  |   2 +-
 51 files changed, 601 insertions(+), 492 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/a41074a9/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
index 3084a92..91e9964 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
@@ -20,8 +20,8 @@ package org.apache.phoenix.end2end;
 import static org.apache.hadoop.hbase.HColumnDescriptor.DEFAULT_REPLICATION_SCOPE;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.COLUMN_FAMILY;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.COLUMN_NAME;
+import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.COLUMN_QUALIFIER;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.COLUMN_QUALIFIER_COUNTER;
-import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.ENCODED_COLUMN_QUALIFIER;
 import static 
[05/42] phoenix git commit: TTL on UPGRADE_LOCK cell causes upgrade to fail

2016-12-22 Thread tdsilva
TTL on UPGRADE_LOCK cell causes upgrade to fail


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/8a231a94
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/8a231a94
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/8a231a94

Branch: refs/heads/encodecolumns2
Commit: 8a231a94ceb07f8b3967a15bde07b694a92ae58e
Parents: 96b6554
Author: Samarth 
Authored: Mon Dec 12 12:01:38 2016 -0800
Committer: Samarth 
Committed: Mon Dec 12 12:01:38 2016 -0800

--
 .../phoenix/query/ConnectionQueryServicesImpl.java  | 16 ++--
 1 file changed, 14 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/8a231a94/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index 9bc088d..c8d42d9 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -2997,8 +2997,20 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
             Put put = new Put(rowToLock);
             put.add(family, qualifier, newValue);
             boolean acquired =  sysMutexTable.checkAndPut(rowToLock, family, qualifier, oldValue, put);
-            if (!acquired) { throw new UpgradeInProgressException(getVersion(currentServerSideTableTimestamp),
-                    getVersion(MIN_SYSTEM_TABLE_TIMESTAMP)); }
+            if (!acquired) {
+                /*
+                 * Because of TTL on the SYSTEM_MUTEX_FAMILY, it is very much possible that the cell
+                 * has gone away. So we need to retry with an old value of null. Note there is a small
+                 * race condition here that between the two checkAndPut calls, it is possible that another
+                 * request would have set the value back to UPGRADE_MUTEX_UNLOCKED. In that scenario this
+                 * following checkAndPut would still return false even though the lock was available.
+                 */
+                acquired =  sysMutexTable.checkAndPut(rowToLock, family, qualifier, null, put);
+                if (!acquired) {
+                    throw new UpgradeInProgressException(getVersion(currentServerSideTableTimestamp),
+                            getVersion(MIN_SYSTEM_TABLE_TIMESTAMP));
+                }
+            }
             return true;
         }
     }
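The two-step acquisition above exists because the mutex cell can be deleted by the column family's TTL between upgrades. As a rough, self-contained analogy (this is not the HBase API; an `AtomicReference` stands in for the SYSTEM.MUTEX cell and `compareAndSet` for `checkAndPut`), the pattern looks like:

```java
import java.util.concurrent.atomic.AtomicReference;

class MutexRetrySketch {
    // The mutex "cell": null models a value that the column family's TTL expired.
    static final AtomicReference<String> CELL = new AtomicReference<>(null);

    /** Try to flip UNLOCKED -> LOCKED; on failure, retry assuming the TTL removed the cell. */
    static boolean acquire() {
        boolean acquired = CELL.compareAndSet("UNLOCKED", "LOCKED");
        if (!acquired) {
            // The cell may simply have expired, so retry expecting an absent (null) value.
            // As the Phoenix comment notes, a concurrent unlock between the two attempts
            // can still make this second attempt fail even though the lock was free.
            acquired = CELL.compareAndSet(null, "LOCKED");
        }
        return acquired;
    }

    public static void main(String[] args) {
        System.out.println(acquire()); // cell absent: second attempt succeeds
        System.out.println(acquire()); // already LOCKED: both attempts fail
    }
}
```

The same small race window exists in both versions: the fallback can lose to a competing writer that restores the "unlocked" value between the two attempts.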



[27/42] phoenix git commit: Fix compilation failure

2016-12-22 Thread tdsilva
Fix compilation failure


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/28810916
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/28810916
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/28810916

Branch: refs/heads/encodecolumns2
Commit: 288109165a7bd03f1e949dbed0da0bdd33cbed4f
Parents: 670b53a
Author: Samarth 
Authored: Mon Nov 7 13:27:06 2016 -0800
Committer: Thomas D'Silva 
Committed: Thu Dec 22 13:00:44 2016 -0800

--
 .../src/main/java/org/apache/phoenix/util/PhoenixRuntime.java| 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/28810916/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java b/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java
index a946575..563ccab 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java
@@ -1148,9 +1148,9 @@ public class PhoenixRuntime {
 PColumn pColumn = null;
 if (familyName != null) {
 PColumnFamily family = table.getColumnFamily(familyName);
-pColumn = family.getColumn(columnName);
+pColumn = family.getPColumnForColumnName(columnName);
 } else {
-pColumn = table.getColumn(columnName);
+pColumn = table.getPColumnForColumnName(columnName);
 }
 return pColumn;
 }
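The rename from `getColumn` to `getPColumnForColumnName` makes explicit that a column can now be resolved two ways: by its display name or by its encoded qualifier. A minimal, hypothetical sketch of such a dual index (class, field, and column names here are illustrative, not the actual `PColumnFamily` API):

```java
import java.util.HashMap;
import java.util.Map;

class DualLookupSketch {
    // One index per lookup style, mirroring name-based resolution
    // (getPColumnForColumnName) vs. lookup keyed by the encoded qualifier.
    static final Map<String, Integer> NAME_TO_QUALIFIER = new HashMap<>();
    static final Map<Integer, String> QUALIFIER_TO_NAME = new HashMap<>();

    static void addColumn(String name, int encodedQualifier) {
        NAME_TO_QUALIFIER.put(name, encodedQualifier);
        QUALIFIER_TO_NAME.put(encodedQualifier, name);
    }

    public static void main(String[] args) {
        addColumn("V1", 11);
        addColumn("V2", 12);
        System.out.println(NAME_TO_QUALIFIER.get("V2")); // resolve by name
        System.out.println(QUALIFIER_TO_NAME.get(12));   // resolve by encoded qualifier
    }
}
```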



[40/42] phoenix git commit: Refactor code to store and use column qualifiers in SYSTEM.CATALOG

2016-12-22 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/a41074a9/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java b/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
index 6357e52..3cca2de 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
@@ -26,7 +26,6 @@ import static org.apache.phoenix.monitoring.GlobalClientMetrics.GLOBAL_QUERY_TIM
 import static org.apache.phoenix.schema.PTable.IndexType.LOCAL;
 import static org.apache.phoenix.schema.PTableType.INDEX;
 import static org.apache.phoenix.util.ByteUtil.EMPTY_BYTE_ARRAY;
-import static org.apache.phoenix.util.EncodedColumnsUtil.getEncodedColumnQualifier;
 
 import java.io.ByteArrayInputStream;
 import java.io.DataInput;
@@ -87,6 +86,7 @@ import org.apache.phoenix.query.QueryServicesOptions;
 import org.apache.phoenix.schema.PColumnFamily;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.schema.PTable.IndexType;
+import org.apache.phoenix.schema.PTable.QualifierEncodingScheme;
 import org.apache.phoenix.schema.PTable.StorageScheme;
 import org.apache.phoenix.schema.PTable.ViewType;
 import org.apache.phoenix.schema.StaleRegionBoundaryCacheException;
@@ -248,13 +248,14 @@ public abstract class BaseResultIterators extends ExplainTable implements Result
                 ScanUtil.andFilterAtEnd(scan, new PageFilter(plan.getLimit()));
             }
         }
+        scan.setAttribute(BaseScannerRegionObserver.QUALIFIER_ENCODING_SCHEME, new byte[]{table.getEncodingScheme().getSerializedMetadataValue()});
         // When analyzing the table, there is no look up for key values being done.
         // So there is no point setting the range.
         if (EncodedColumnsUtil.setQualifierRanges(table) && !ScanUtil.isAnalyzeTable(scan)) {
             Pair range = getEncodedQualifierRange(scan, context);
             if (range != null) {
-                scan.setAttribute(BaseScannerRegionObserver.MIN_QUALIFIER, getEncodedColumnQualifier(range.getFirst()));
-                scan.setAttribute(BaseScannerRegionObserver.MAX_QUALIFIER, getEncodedColumnQualifier(range.getSecond()));
+                scan.setAttribute(BaseScannerRegionObserver.MIN_QUALIFIER, Bytes.toBytes(range.getFirst()));
+                scan.setAttribute(BaseScannerRegionObserver.MAX_QUALIFIER, Bytes.toBytes(range.getSecond()));
             }
         }
         if (optimizeProjection) {
@@ -266,25 +267,25 @@ public abstract class BaseResultIterators extends ExplainTable implements Result
     private static Pair getEncodedQualifierRange(Scan scan, StatementContext context)
             throws SQLException {
         PTable table = context.getCurrentTable().getTable();
-        StorageScheme storageScheme = table.getStorageScheme();
-        checkArgument(storageScheme == StorageScheme.ONE_CELL_PER_KEYVALUE_COLUMN,
+        QualifierEncodingScheme encodingScheme = table.getEncodingScheme();
+        checkArgument(encodingScheme != QualifierEncodingScheme.NON_ENCODED_QUALIFIERS,
             "Method should only be used for tables using encoded column names");
         Pair minMaxQualifiers = new Pair<>();
         for (Pair whereCol : context.getWhereConditionColumns()) {
             byte[] cq = whereCol.getSecond();
             if (cq != null) {
-                int qualifier = getEncodedColumnQualifier(cq);
+                int qualifier = table.getEncodingScheme().getDecodedValue(cq);
                 determineQualifierRange(qualifier, minMaxQualifiers);
             }
         }
         Map> familyMap = scan.getFamilyMap();
 
-        Map> qualifierRanges = EncodedColumnsUtil.getQualifierRanges(table);
+        Map> qualifierRanges = EncodedColumnsUtil.getFamilyQualifierRanges(table);
         for (Entry> entry : familyMap.entrySet()) {
             if (entry.getValue() != null) {
                 for (byte[] cq : entry.getValue()) {
                     if (cq != null) {
-                        int qualifier = getEncodedColumnQualifier(cq);
+                        int qualifier = table.getEncodingScheme().getDecodedValue(cq);
                         determineQualifierRange(qualifier, minMaxQualifiers);
                     }
                 }
@@ -299,8 +300,10 @@ public abstract class BaseResultIterators extends ExplainTable implements Result
             family = IndexUtil.getLocalIndexColumnFamily(family);
         }
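After this change the MIN_QUALIFIER/MAX_QUALIFIER scan attributes carry plain ints serialized with `Bytes.toBytes`. A self-contained sketch of the bookkeeping (the min/max tracking reimplements what `determineQualifierRange` does; `toBytes`/`toInt` stand in for HBase's `Bytes` utility, which also uses big-endian four-byte ints):

```java
import java.nio.ByteBuffer;

class QualifierRangeSketch {
    // Track the smallest and largest encoded qualifier seen so far.
    static int min = Integer.MAX_VALUE;
    static int max = Integer.MIN_VALUE;

    static void determineQualifierRange(int qualifier) {
        if (qualifier < min) min = qualifier;
        if (qualifier > max) max = qualifier;
    }

    // Stand-ins for Bytes.toBytes(int) / Bytes.toInt(byte[]): big-endian ints.
    static byte[] toBytes(int v) { return ByteBuffer.allocate(4).putInt(v).array(); }
    static int toInt(byte[] b) { return ByteBuffer.wrap(b).getInt(); }

    public static void main(String[] args) {
        for (int q : new int[] {14, 11, 27}) {
            determineQualifierRange(q);
        }
        // Round-trip the range the way the scan attributes would carry it.
        System.out.println(toInt(toBytes(min)));
        System.out.println(toInt(toBytes(max)));
    }
}
```

The server-side scanner can then skip cells whose decoded qualifier falls outside [min, max].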
 

[01/42] phoenix git commit: Set version to 5.0.0-HBase-0.98-SNAPSHOT after release [Forced Update!]

2016-12-22 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/encodecolumns2 f955c025c -> 11c0c3b7b (forced update)


Set version to 5.0.0-HBase-0.98-SNAPSHOT after release


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/f18e780c
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/f18e780c
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/f18e780c

Branch: refs/heads/encodecolumns2
Commit: f18e780c3b11141aaf438d595bc7f541a3fb21e3
Parents: 30bb79b
Author: Mujtaba 
Authored: Wed Nov 23 09:26:01 2016 -0800
Committer: Mujtaba 
Committed: Wed Nov 23 09:26:01 2016 -0800

--
 phoenix-assembly/pom.xml   | 2 +-
 phoenix-client/pom.xml | 2 +-
 phoenix-core/pom.xml   | 2 +-
 phoenix-flume/pom.xml  | 2 +-
 phoenix-hive/pom.xml   | 2 +-
 phoenix-pherf/pom.xml  | 2 +-
 phoenix-pig/pom.xml| 2 +-
 phoenix-queryserver-client/pom.xml | 2 +-
 phoenix-queryserver/pom.xml| 2 +-
 phoenix-server/pom.xml | 2 +-
 phoenix-spark/pom.xml  | 2 +-
 phoenix-tracing-webapp/pom.xml | 2 +-
 pom.xml| 2 +-
 13 files changed, 13 insertions(+), 13 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/f18e780c/phoenix-assembly/pom.xml
--
diff --git a/phoenix-assembly/pom.xml b/phoenix-assembly/pom.xml
index 8ff1618..e8a1900 100644
--- a/phoenix-assembly/pom.xml
+++ b/phoenix-assembly/pom.xml
@@ -27,7 +27,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.9.0-HBase-0.98</version>
+    <version>5.0.0-HBase-0.98-SNAPSHOT</version>
   </parent>
   <artifactId>phoenix-assembly</artifactId>
   <name>Phoenix Assembly</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/f18e780c/phoenix-client/pom.xml
--
diff --git a/phoenix-client/pom.xml b/phoenix-client/pom.xml
index 2c30342..a21303c 100644
--- a/phoenix-client/pom.xml
+++ b/phoenix-client/pom.xml
@@ -27,7 +27,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.9.0-HBase-0.98</version>
+    <version>5.0.0-HBase-0.98-SNAPSHOT</version>
   </parent>
   <artifactId>phoenix-client</artifactId>
   <name>Phoenix Client</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/f18e780c/phoenix-core/pom.xml
--
diff --git a/phoenix-core/pom.xml b/phoenix-core/pom.xml
index ea3f316..9396882 100644
--- a/phoenix-core/pom.xml
+++ b/phoenix-core/pom.xml
@@ -4,7 +4,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.9.0-HBase-0.98</version>
+    <version>5.0.0-HBase-0.98-SNAPSHOT</version>
   </parent>
   <artifactId>phoenix-core</artifactId>
   <name>Phoenix Core</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/f18e780c/phoenix-flume/pom.xml
--
diff --git a/phoenix-flume/pom.xml b/phoenix-flume/pom.xml
index 236e06a..b013518 100644
--- a/phoenix-flume/pom.xml
+++ b/phoenix-flume/pom.xml
@@ -26,7 +26,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.9.0-HBase-0.98</version>
+    <version>5.0.0-HBase-0.98-SNAPSHOT</version>
   </parent>
   <artifactId>phoenix-flume</artifactId>
   <name>Phoenix - Flume</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/f18e780c/phoenix-hive/pom.xml
--
diff --git a/phoenix-hive/pom.xml b/phoenix-hive/pom.xml
index c36e737..6aa2f61 100644
--- a/phoenix-hive/pom.xml
+++ b/phoenix-hive/pom.xml
@@ -27,7 +27,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.9.0-HBase-0.98</version>
+    <version>5.0.0-HBase-0.98-SNAPSHOT</version>
  </parent>
   <artifactId>phoenix-hive</artifactId>
   <name>Phoenix - Hive</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/f18e780c/phoenix-pherf/pom.xml
--
diff --git a/phoenix-pherf/pom.xml b/phoenix-pherf/pom.xml
index bf74445..20fc1cc 100644
--- a/phoenix-pherf/pom.xml
+++ b/phoenix-pherf/pom.xml
@@ -15,7 +15,7 @@
	<parent>
		<groupId>org.apache.phoenix</groupId>
		<artifactId>phoenix</artifactId>
-		<version>4.9.0-HBase-0.98</version>
+		<version>5.0.0-HBase-0.98-SNAPSHOT</version>
	</parent>

	<artifactId>phoenix-pherf</artifactId>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/f18e780c/phoenix-pig/pom.xml
--
diff --git a/phoenix-pig/pom.xml b/phoenix-pig/pom.xml
index 6292d81..1e7ca4e 100644
--- a/phoenix-pig/pom.xml
+++ b/phoenix-pig/pom.xml
@@ -26,7 +26,7 @@
   <parent>
     <groupId>org.apache.phoenix</groupId>
     <artifactId>phoenix</artifactId>
-    <version>4.9.0-HBase-0.98</version>
+    <version>5.0.0-HBase-0.98-SNAPSHOT</version>
   </parent>
   <artifactId>phoenix-pig</artifactId>
   <name>Phoenix - Pig</name>

http://git-wip-us.apache.org/repos/asf/phoenix/blob/f18e780c/phoenix-queryserver-client/pom.xml
--
diff --git a/phoenix-queryserver-client/pom.xml b/phoenix-queryserver-client/pom.xml
index d0e321e..88e6eff 100644
--- a/phoenix-queryserver-client/pom.xml
+++ 

[34/42] phoenix git commit: Fix some more test failures

2016-12-22 Thread tdsilva
Fix some more test failures


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/ffb984f4
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/ffb984f4
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/ffb984f4

Branch: refs/heads/encodecolumns2
Commit: ffb984f44f4841314ad2ea2328a9d5b284174929
Parents: 4128b56
Author: Samarth 
Authored: Wed Nov 23 00:52:04 2016 -0800
Committer: Thomas D'Silva 
Committed: Thu Dec 22 13:00:44 2016 -0800

--
 .../phoenix/iterate/BaseResultIterators.java|  2 +-
 .../apache/phoenix/schema/MetaDataClient.java   |  1 +
 .../java/org/apache/phoenix/schema/PTable.java  | 27 -
 .../org/apache/phoenix/schema/PTableImpl.java   |  3 ++
 .../apache/phoenix/util/EncodedColumnsUtil.java | 31 
 .../org/apache/phoenix/util/SchemaUtil.java | 27 +
 6 files changed, 63 insertions(+), 28 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/ffb984f4/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java b/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
index 96797a9..6357e52 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/iterate/BaseResultIterators.java
@@ -279,7 +279,7 @@ public abstract class BaseResultIterators extends ExplainTable implements Result
         }
         Map> familyMap = scan.getFamilyMap();
 
-        Map> qualifierRanges = SchemaUtil.getQualifierRanges(table);
+        Map> qualifierRanges = EncodedColumnsUtil.getQualifierRanges(table);
         for (Entry> entry : familyMap.entrySet()) {
             if (entry.getValue() != null) {
                 for (byte[] cq : entry.getValue()) {

http://git-wip-us.apache.org/repos/asf/phoenix/blob/ffb984f4/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
index d3b9596..8936b9b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
@@ -2076,6 +2076,7 @@ public class MetaDataClient {
              * then we rely on the PTable, with appropriate storage scheme, returned in the MetadataMutationResult to be updated
              * in the client cache. If the phoenix table already doesn't exist then the non-encoded column qualifier scheme works
              * because we cannot control the column qualifiers that were used when populating the hbase table.
+             * TODO: samarth add a test case for this
              */
             if (parent != null) {
                 storageScheme = parent.getStorageScheme();
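The comment above describes the scheme choice at create time: inherit the parent's storage scheme when one exists, otherwise fall back to non-encoded column names when mapping an existing HBase table whose qualifiers Phoenix did not create. A simplified, illustrative sketch of that decision (the enum values exist in PTable, but the method and flag names here are assumptions, not the actual MetaDataClient code):

```java
class SchemeChoiceSketch {
    enum StorageScheme { ENCODED_COLUMN_NAMES, NON_ENCODED_COLUMN_NAMES }

    /**
     * Pick a storage scheme: inherit from the parent when present; otherwise use
     * non-encoded names for a pre-existing HBase table whose qualifiers we don't control.
     */
    static StorageScheme chooseScheme(StorageScheme parentScheme, boolean hbaseTableExists) {
        if (parentScheme != null) {
            return parentScheme; // views and indexes must match their parent
        }
        return hbaseTableExists ? StorageScheme.NON_ENCODED_COLUMN_NAMES
                                : StorageScheme.ENCODED_COLUMN_NAMES;
    }

    public static void main(String[] args) {
        System.out.println(chooseScheme(StorageScheme.ENCODED_COLUMN_NAMES, true));
        System.out.println(chooseScheme(null, true));
        System.out.println(chooseScheme(null, false));
    }
}
```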

http://git-wip-us.apache.org/repos/asf/phoenix/blob/ffb984f4/phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java
index b9565a1..1ee2320 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java
@@ -213,6 +213,11 @@ public interface PTable extends PMetaDataEntity {
         public boolean isEncodeable(String value) {
             return true;
         }
+
+        @Override
+        public String toString() {
+            return "NON_ENCODED_QUALIFIERS";
+        }
     };
     public static final QualifierEncodingScheme ONE_BYTE_QUALIFIERS = new QualifierEncodingScheme((byte)1, "ONE_BYTE_QUALIFIERS", 255l) {
         @Override
@@ -234,6 +239,11 @@ public interface PTable extends PMetaDataEntity {
         public boolean isEncodeable(Long value) {
             return true;
         }
+
+        @Override
+        public String toString() {
+            return "ONE_BYTE_QUALIFIERS";
+        }
     };
     public static final QualifierEncodingScheme TWO_BYTE_QUALIFIERS = new QualifierEncodingScheme((byte)2, "TWO_BYTE_QUALIFIERS", 65535l) {
   

[22/42] phoenix git commit: Fail-fast iterators for EncodedColumnQualifierCellsList. Use list iterators instead of get(index) for navigating lists. Use HBase bytes utility for encoded column names. Fi

2016-12-22 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/670b53a9/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/PTableProtos.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/PTableProtos.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/PTableProtos.java
index b8b8b2f..2f0c00b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/PTableProtos.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/generated/PTableProtos.java
@@ -269,6 +269,16 @@ public final class PTableProtos {
  * optional bool isDynamic = 14;
  */
 boolean getIsDynamic();
+
+// optional int32 columnQualifier = 15;
+/**
+ * optional int32 columnQualifier = 15;
+ */
+boolean hasColumnQualifier();
+/**
+ * optional int32 columnQualifier = 15;
+ */
+int getColumnQualifier();
   }
   /**
* Protobuf type {@code PColumn}
@@ -391,6 +401,11 @@ public final class PTableProtos {
   isDynamic_ = input.readBool();
   break;
 }
+case 120: {
+  bitField0_ |= 0x4000;
+  columnQualifier_ = input.readInt32();
+  break;
+}
   }
 }
   } catch (com.google.protobuf.InvalidProtocolBufferException e) {
@@ -709,6 +724,22 @@ public final class PTableProtos {
   return isDynamic_;
 }
 
+// optional int32 columnQualifier = 15;
+public static final int COLUMNQUALIFIER_FIELD_NUMBER = 15;
+private int columnQualifier_;
+/**
+ * optional int32 columnQualifier = 15;
+ */
+public boolean hasColumnQualifier() {
+  return ((bitField0_ & 0x4000) == 0x4000);
+}
+/**
+ * optional int32 columnQualifier = 15;
+ */
+public int getColumnQualifier() {
+  return columnQualifier_;
+}
+
 private void initFields() {
   columnNameBytes_ = com.google.protobuf.ByteString.EMPTY;
   familyNameBytes_ = com.google.protobuf.ByteString.EMPTY;
@@ -724,6 +755,7 @@ public final class PTableProtos {
   expression_ = "";
   isRowTimestamp_ = false;
   isDynamic_ = false;
+  columnQualifier_ = 0;
 }
 private byte memoizedIsInitialized = -1;
 public final boolean isInitialized() {
@@ -799,6 +831,9 @@ public final class PTableProtos {
   if (((bitField0_ & 0x2000) == 0x2000)) {
 output.writeBool(14, isDynamic_);
   }
+  if (((bitField0_ & 0x4000) == 0x4000)) {
+output.writeInt32(15, columnQualifier_);
+  }
   getUnknownFields().writeTo(output);
 }
 
@@ -864,6 +899,10 @@ public final class PTableProtos {
 size += com.google.protobuf.CodedOutputStream
   .computeBoolSize(14, isDynamic_);
   }
+  if (((bitField0_ & 0x4000) == 0x4000)) {
+size += com.google.protobuf.CodedOutputStream
+  .computeInt32Size(15, columnQualifier_);
+  }
   size += getUnknownFields().getSerializedSize();
   memoizedSerializedSize = size;
   return size;
@@ -957,6 +996,11 @@ public final class PTableProtos {
 result = result && (getIsDynamic()
 == other.getIsDynamic());
   }
+  result = result && (hasColumnQualifier() == other.hasColumnQualifier());
+  if (hasColumnQualifier()) {
+result = result && (getColumnQualifier()
+== other.getColumnQualifier());
+  }
   result = result &&
   getUnknownFields().equals(other.getUnknownFields());
   return result;
@@ -1026,6 +1070,10 @@ public final class PTableProtos {
 hash = (37 * hash) + ISDYNAMIC_FIELD_NUMBER;
 hash = (53 * hash) + hashBoolean(getIsDynamic());
   }
+  if (hasColumnQualifier()) {
+hash = (37 * hash) + COLUMNQUALIFIER_FIELD_NUMBER;
+hash = (53 * hash) + getColumnQualifier();
+  }
   hash = (29 * hash) + getUnknownFields().hashCode();
   memoizedHashCode = hash;
   return hash;
@@ -1163,6 +1211,8 @@ public final class PTableProtos {
 bitField0_ = (bitField0_ & ~0x1000);
 isDynamic_ = false;
 bitField0_ = (bitField0_ & ~0x2000);
+columnQualifier_ = 0;
+bitField0_ = (bitField0_ & ~0x4000);
 return this;
   }
 
@@ -1247,6 +1297,10 @@ public final class PTableProtos {
   to_bitField0_ |= 0x2000;
 }
 result.isDynamic_ = isDynamic_;
+if (((from_bitField0_ & 0x4000) == 0x4000)) {
+  to_bitField0_ |= 0x4000;
+}
+result.columnQualifier_ = columnQualifier_;
 result.bitField0_ = to_bitField0_;
 onBuilt();
 return result;
@@ -1309,6 +1363,9 @@ public final class PTableProtos {
 if (other.hasIsDynamic()) {
   setIsDynamic(other.getIsDynamic());
 }
+if 

[33/42] phoenix git commit: Replace StorageScheme.NON_ENCODED_QUALIFIER with QualifierEncodingScheme.NON_ENCODED_QUALIFIERS

2016-12-22 Thread tdsilva
Replace StorageScheme.NON_ENCODED_QUALIFIER with 
QualifierEncodingScheme.NON_ENCODED_QUALIFIERS


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/3426262a
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/3426262a
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/3426262a

Branch: refs/heads/encodecolumns2
Commit: 3426262afb720ab0a42b0db35c501665c7cfd9c4
Parents: 8a1de1c
Author: Samarth 
Authored: Tue Nov 22 19:11:36 2016 -0800
Committer: Thomas D'Silva 
Committed: Thu Dec 22 13:00:44 2016 -0800

--
 .../apache/phoenix/compile/FromCompiler.java|  2 +-
 .../apache/phoenix/compile/JoinCompiler.java|  2 +-
 .../compile/PostLocalIndexDDLCompiler.java  |  2 +-
 .../compile/TupleProjectionCompiler.java|  7 +--
 .../apache/phoenix/compile/UnionCompiler.java   |  2 +-
 .../apache/phoenix/compile/WhereCompiler.java   |  2 +-
 .../GroupedAggregateRegionObserver.java | 12 ++---
 .../coprocessor/MetaDataEndpointImpl.java   |  6 ++-
 .../phoenix/coprocessor/ScanRegionObserver.java |  7 +--
 .../UngroupedAggregateRegionObserver.java   |  6 +--
 .../apache/phoenix/execute/BaseQueryPlan.java   |  2 +-
 .../apache/phoenix/index/IndexMaintainer.java   | 10 +---
 .../phoenix/iterate/BaseResultIterators.java| 11 ++---
 .../iterate/RegionScannerResultIterator.java|  4 +-
 .../org/apache/phoenix/schema/ColumnRef.java|  6 +--
 .../apache/phoenix/schema/MetaDataClient.java   | 51 +---
 .../java/org/apache/phoenix/schema/PTable.java  |  8 ++-
 .../org/apache/phoenix/schema/PTableImpl.java   | 12 ++---
 .../apache/phoenix/util/EncodedColumnsUtil.java | 41 ++--
 .../java/org/apache/phoenix/util/IndexUtil.java |  3 +-
 .../java/org/apache/phoenix/util/ScanUtil.java  | 29 ---
 .../org/apache/phoenix/util/SchemaUtil.java |  4 +-
 .../phoenix/execute/CorrelatePlanTest.java  |  2 +-
 .../execute/LiteralResultIteratorPlanTest.java  |  2 +-
 24 files changed, 109 insertions(+), 124 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/3426262a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
index d4ccfff..8d00996 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
@@ -788,7 +788,7 @@ public class FromCompiler {
                 MetaDataProtocol.MIN_TABLE_TIMESTAMP, PTable.INITIAL_SEQ_NUM, null, null, columns, null, null,
                 Collections. emptyList(), false, Collections. emptyList(), null, null, false, false,
                 false, null, null, null, false, false, 0, 0L, SchemaUtil
-                .isNamespaceMappingEnabled(PTableType.SUBQUERY, connection.getQueryServices().getProps()), null, false, StorageScheme.NON_ENCODED_COLUMN_NAMES, QualifierEncodingScheme.NON_ENCODED_QUALIFIERS, PTable.EncodedCQCounter.NULL_COUNTER);
+                .isNamespaceMappingEnabled(PTableType.SUBQUERY, connection.getQueryServices().getProps()), null, false, StorageScheme.ONE_CELL_PER_KEYVALUE_COLUMN, QualifierEncodingScheme.NON_ENCODED_QUALIFIERS, PTable.EncodedCQCounter.NULL_COUNTER);
 
             String alias = subselectNode.getAlias();
             TableRef tableRef = new TableRef(alias, t, MetaDataProtocol.MIN_TABLE_TIMESTAMP, false);

http://git-wip-us.apache.org/repos/asf/phoenix/blob/3426262a/phoenix-core/src/main/java/org/apache/phoenix/compile/JoinCompiler.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/compile/JoinCompiler.java b/phoenix-core/src/main/java/org/apache/phoenix/compile/JoinCompiler.java
index bc2c7df..b72c550 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/JoinCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/JoinCompiler.java
@@ -1313,7 +1313,7 @@ public class JoinCompiler {
                 left.isMultiTenant(), left.getStoreNulls(), left.getViewType(), left.getViewIndexId(),
                 left.getIndexType(), left.rowKeyOrderOptimizable(), left.isTransactional(),
                 left.getUpdateCacheFrequency(), left.getIndexDisableTimestamp(), left.isNamespaceMapped(), 
-                left.getAutoPartitionSeqName(), left.isAppendOnlySchema(), StorageScheme.NON_ENCODED_COLUMN_NAMES, QualifierEncodingScheme.NON_ENCODED_QUALIFIERS, PTable.EncodedCQCounter.NULL_COUNTER);
+

[29/42] phoenix git commit: Ignore DefaultColumnValueIT#testDefaultImmutableRows till PHOENIX-3442 is fixed

2016-12-22 Thread tdsilva
Ignore DefaultColumnValueIT#testDefaultImmutableRows till PHOENIX-3442 is fixed


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/2b3265e0
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/2b3265e0
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/2b3265e0

Branch: refs/heads/encodecolumns2
Commit: 2b3265e058a1d3b9176795e62285c8e96ef2ccfe
Parents: 9eb690f
Author: Samarth 
Authored: Mon Nov 21 19:14:34 2016 -0800
Committer: Thomas D'Silva 
Committed: Thu Dec 22 13:00:44 2016 -0800

--
 .../it/java/org/apache/phoenix/end2end/DefaultColumnValueIT.java  | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/2b3265e0/phoenix-core/src/it/java/org/apache/phoenix/end2end/DefaultColumnValueIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DefaultColumnValueIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DefaultColumnValueIT.java
index 62d79bc..8302604 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DefaultColumnValueIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DefaultColumnValueIT.java
@@ -37,6 +37,7 @@ import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.util.ByteUtil;
 import org.apache.phoenix.util.DateUtil;
 import org.junit.Before;
+import org.junit.Ignore;
 import org.junit.Test;
 
 
@@ -257,7 +258,7 @@ public class DefaultColumnValueIT extends ParallelStatsDisabledIT {
 assertFalse(rs.next());
 }
 
-@Test
+@Ignore //FIXME: PHOENIX-3442
 public void testDefaultImmutableRows() throws Exception {
 String table = generateUniqueName();
 String ddl = "CREATE TABLE IF NOT EXISTS " + table + " (" +



[19/42] phoenix git commit: Fail-fast iterators for EncodedColumnQualifierCellsList. Use list iterators instead of get(index) for navigating lists. Use HBase bytes utility for encoded column names. Fi

2016-12-22 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/670b53a9/phoenix-core/src/main/java/org/apache/phoenix/schema/PName.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/PName.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/PName.java
index 0e1337c..8df6a95 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/PName.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/PName.java
@@ -83,6 +83,32 @@ public interface PName {
 return 0;
 }
 };
+public static PName ENCODED_EMPTY_COLUMN_NAME = new PName() {
+@Override
+public String getString() {
+return String.valueOf(QueryConstants.ENCODED_EMPTY_COLUMN_NAME);
+}
+
+@Override
+public byte[] getBytes() {
+return QueryConstants.ENCODED_EMPTY_COLUMN_BYTES;
+}
+
+@Override
+public String toString() {
+return getString();
+}
+
+@Override
+public ImmutableBytesPtr getBytesPtr() {
+return QueryConstants.ENCODED_EMPTY_COLUMN_BYTES_PTR;
+}
+
+@Override
+public int getEstimatedSize() {
+return 0;
+}
+};
 /**
  * Get the client-side, normalized name as referenced
  * in a SQL statement.

http://git-wip-us.apache.org/repos/asf/phoenix/blob/670b53a9/phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java
index 01e8afe..d3b11b2 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTable.java
@@ -17,7 +17,15 @@
  */
 package org.apache.phoenix.schema;
 
+import static org.apache.phoenix.query.QueryConstants.ENCODED_CQ_COUNTER_INITIAL_VALUE;
+
+import java.util.Collections;
+import java.util.HashMap;
 import java.util.List;
+import java.util.Map;
+import java.util.Map.Entry;
+
+import javax.annotation.Nullable;
 
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.util.Bytes;
@@ -129,7 +137,7 @@ public interface PTable extends PMetaDataEntity {
  * Link from a view to its parent table
  */
 PARENT_TABLE((byte)3);
-
+
 private final byte[] byteValue;
 private final byte serializedValue;
 
@@ -153,6 +161,35 @@ public interface PTable extends PMetaDataEntity {
 return LinkType.values()[serializedValue-1];
 }
 }
+
+public enum StorageScheme {
+ENCODED_COLUMN_NAMES((byte)1),
+NON_ENCODED_COLUMN_NAMES((byte)2),
+COLUMNS_STORED_IN_SINGLE_CELL((byte)3);
+
+private final byte[] byteValue;
+private final byte serializedValue;
+
+StorageScheme(byte serializedValue) {
+this.serializedValue = serializedValue;
+this.byteValue = Bytes.toBytes(this.name());
+}
+
+public byte[] getBytes() {
+return byteValue;
+}
+
+public byte getSerializedValue() {
+return this.serializedValue;
+}
+
+public static StorageScheme fromSerializedValue(byte serializedValue) {
+if (serializedValue < 1 || serializedValue > StorageScheme.values().length) {
+return null;
+}
+return StorageScheme.values()[serializedValue-1];
+}
+}
 
 long getTimeStamp();
 long getSequenceNumber();
@@ -208,7 +245,16 @@ public interface PTable extends PMetaDataEntity {
  * can be found
  * @throws AmbiguousColumnException if multiple columns are found with the given name
  */
-PColumn getColumn(String name) throws ColumnNotFoundException, AmbiguousColumnException;
+PColumn getPColumnForColumnName(String name) throws ColumnNotFoundException, AmbiguousColumnException;
+
+/**
+ * Get the column with the given column qualifier.
+ * @param cq column qualifier bytes
+ * @return the PColumn with the given column qualifier
+ * @throws ColumnNotFoundException if no column with the given column qualifier can be found
+ * @throws AmbiguousColumnException if multiple columns are found with the given column qualifier
+ */
+PColumn getPColumnForColumnQualifier(byte[] cf, byte[] cq) throws ColumnNotFoundException, AmbiguousColumnException;
 
 /**
  * Get the PK column with the given name.
@@ -345,7 +391,6 @@ public interface PTable extends PMetaDataEntity {
  */
 int getRowTimestampColPos();
 long getUpdateCacheFrequency();
-
 boolean isNamespaceMapped();
 
 /**
@@ -359,4 +404,92 @@ public interface PTable extends PMetaDataEntity {
  * you are also not allowed to 
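The StorageScheme enum added to PTable above persists each scheme as a single serialized byte and recovers it via fromSerializedValue, returning null for out-of-range bytes. The round-trip can be sketched in Python; the member names mirror the diff, but this is an illustrative translation, not the Phoenix Java class.

```python
from enum import Enum

class StorageScheme(Enum):
    ENCODED_COLUMN_NAMES = 1
    NON_ENCODED_COLUMN_NAMES = 2
    COLUMNS_STORED_IN_SINGLE_CELL = 3

    def serialized_value(self) -> int:
        # the single byte that would be persisted in catalog metadata
        return self.value

    @staticmethod
    def from_serialized_value(serialized: int):
        # mirror the Java guard: out-of-range bytes yield no scheme
        if serialized < 1 or serialized > len(StorageScheme):
            return None
        return StorageScheme(serialized)

scheme = StorageScheme.from_serialized_value(2)
bad = StorageScheme.from_serialized_value(7)  # unknown byte -> None
```

Guarding the deserialization path this way lets newer clients tolerate bytes written by versions that define additional schemes.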

[20/42] phoenix git commit: Fail-fast iterators for EncodedColumnQualifierCellsList. Use list iterators instead of get(index) for navigating lists. Use HBase bytes utility for encoded column names. Fi

2016-12-22 Thread tdsilva
http://git-wip-us.apache.org/repos/asf/phoenix/blob/670b53a9/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java
index 15d6d2f..c5f690b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/FormatToKeyValueReducer.java
@@ -44,6 +44,7 @@ import org.apache.phoenix.query.QueryConstants;
 import org.apache.phoenix.schema.PColumn;
 import org.apache.phoenix.schema.PTable;
 import org.apache.phoenix.util.Closeables;
+import org.apache.phoenix.util.EncodedColumnsUtil;
 import org.apache.phoenix.util.PhoenixRuntime;
 import org.apache.phoenix.util.QueryUtil;
 import org.apache.phoenix.util.SchemaUtil;
@@ -89,7 +90,7 @@ public class FormatToKeyValueReducer
 }
 
 private void initColumnsMap(PhoenixConnection conn) throws SQLException {
-Map indexMap = new TreeMap(Bytes.BYTES_COMPARATOR);
+Map indexMap = new TreeMap<>(Bytes.BYTES_COMPARATOR);
 columnIndexes = new HashMap<>();
 int columnIndex = 0;
 for (int index = 0; index < logicalNames.size(); index++) {
@@ -98,12 +99,16 @@ public class FormatToKeyValueReducer
 for (int i = 0; i < cls.size(); i++) {
 PColumn c = cls.get(i);
 byte[] family = new byte[0];
-if (c.getFamilyName() != null) {
+byte[] cq;
+if (!SchemaUtil.isPKColumn(c)) {
 family = c.getFamilyName().getBytes();
+cq = EncodedColumnsUtil.getColumnQualifier(c, table);
+} else {
+// TODO: samarth verify if this is the right thing to do here.
+cq = c.getName().getBytes();
 }
-byte[] name = c.getName().getBytes();
-byte[] cfn = Bytes.add(family, QueryConstants.NAMESPACE_SEPARATOR_BYTES, name);
-Pair pair = new Pair(family, name);
+byte[] cfn = Bytes.add(family, QueryConstants.NAMESPACE_SEPARATOR_BYTES, cq);
+Pair pair = new Pair<>(family, cq);
 if (!indexMap.containsKey(cfn)) {
 indexMap.put(cfn, new Integer(columnIndex));
 columnIndexes.put(new Integer(columnIndex), pair);
@@ -111,8 +116,8 @@ public class FormatToKeyValueReducer
 }
 }
 byte[] emptyColumnFamily = SchemaUtil.getEmptyColumnFamily(table);
-Pair pair = new Pair(emptyColumnFamily, QueryConstants
-.EMPTY_COLUMN_BYTES);
+byte[] emptyKeyValue = EncodedColumnsUtil.getEmptyKeyValueInfo(table).getFirst();
+Pair pair = new Pair<>(emptyColumnFamily, emptyKeyValue);
 columnIndexes.put(new Integer(columnIndex), pair);
 columnIndex++;
 }
@@ -123,18 +128,17 @@ public class FormatToKeyValueReducer
   Reducer.Context context)
 throws IOException, InterruptedException {
 TreeSet map = new TreeSet(KeyValue.COMPARATOR);
-ImmutableBytesWritable rowKey = key.getRowkey();
 for (ImmutableBytesWritable aggregatedArray : values) {
 DataInputStream input = new DataInputStream(new ByteArrayInputStream(aggregatedArray.get()));
 while (input.available() != 0) {
 byte type = input.readByte();
 int index = WritableUtils.readVInt(input);
 ImmutableBytesWritable family;
-ImmutableBytesWritable name;
+ImmutableBytesWritable cq;
 ImmutableBytesWritable value = QueryConstants.EMPTY_COLUMN_VALUE_BYTES_PTR;
 Pair pair = columnIndexes.get(index);
 family = new ImmutableBytesWritable(pair.getFirst());
-name = new ImmutableBytesWritable(pair.getSecond());
+cq = new ImmutableBytesWritable(pair.getSecond());
 int len = WritableUtils.readVInt(input);
 if (len > 0) {
 byte[] array = new byte[len];
@@ -145,10 +149,10 @@ public class FormatToKeyValueReducer
 KeyValue.Type kvType = KeyValue.Type.codeToType(type);
 switch (kvType) {
 case Put: // not null value
-kv = builder.buildPut(key.getRowkey(), family, name, value);
+kv = builder.buildPut(key.getRowkey(), family, cq, value);
 break;
  

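The initColumnsMap logic in the hunk above assigns one integer index per distinct (family, qualifier) pair, so the mapper can ship a compact index with each value instead of the full column name, and the reducer can map it back. The core bookkeeping can be sketched as follows; the byte-string data here is invented for illustration.

```python
def build_column_index_map(columns):
    """Assign a stable integer index to each distinct (family, qualifier) pair.

    `columns` is an iterable of (family, qualifier) byte-string pairs,
    e.g. a table's non-PK columns plus its empty key-value column.
    """
    index_of = {}  # (family, qualifier) -> index, for the mapper side
    pair_of = {}   # index -> (family, qualifier), the reducer's reverse map
    for pair in columns:
        if pair not in index_of:  # duplicates keep their first index
            idx = len(index_of)
            index_of[pair] = idx
            pair_of[idx] = pair
    return index_of, pair_of

# made-up family "0" with an encoded qualifier and an empty key-value column
cols = [(b"0", b"\x0b"), (b"0", b"\x00\x00"), (b"0", b"\x0b")]
index_of, pair_of = build_column_index_map(cols)
```

Keeping both maps consistent is what lets the serialized per-cell payload carry only a varint index plus the value bytes.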
[09/42] phoenix git commit: PHOENIX-3517 Switch sqlline-thin.py to use argparse

2016-12-22 Thread tdsilva
PHOENIX-3517 Switch sqlline-thin.py to use argparse


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/42a84b33
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/42a84b33
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/42a84b33

Branch: refs/heads/encodecolumns2
Commit: 42a84b338d73ef43005df8933e812545bb6be845
Parents: d2a2241
Author: Josh Elser 
Authored: Sat Dec 3 17:33:46 2016 -0500
Committer: Josh Elser 
Committed: Mon Dec 19 10:23:20 2016 -0500

--
 bin/sqlline-thin.py | 49 +---
 1 file changed, 30 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/42a84b33/bin/sqlline-thin.py
--
diff --git a/bin/sqlline-thin.py b/bin/sqlline-thin.py
index 12501dc..e4cb540 100755
--- a/bin/sqlline-thin.py
+++ b/bin/sqlline-thin.py
@@ -25,6 +25,7 @@ import sys
 import phoenix_utils
 import atexit
 import urlparse
+import argparse
 
 global childProc
 childProc = None
@@ -36,10 +37,25 @@ def kill_child():
 os.system("reset")
 atexit.register(kill_child)
 
+parser = argparse.ArgumentParser(description='Launches the Apache Phoenix Thin Client.')
+# Positional argument "url" is optional
+parser.add_argument('url', nargs='?', help='The URL to the Phoenix Query Server.', default='http://localhost:8765')
+# Positional argument "sqlfile" is optional
+parser.add_argument('sqlfile', nargs='?', help='A file of SQL commands to execute.', default='')
+parser.add_argument('-u', '--user', help='Username for database authentication (unsupported).', default='none')
+parser.add_argument('-p', '--password', help='Password for database authentication (unsupported).', default='none')
+parser.add_argument('-a', '--authentication', help='Mechanism for HTTP authentication.', choices=('SPNEGO', 'BASIC', 'DIGEST', 'NONE'), default='')
+parser.add_argument('-s', '--serialization', help='Serialization type for HTTP API.', choices=('PROTOBUF', 'JSON'), default=None)
+parser.add_argument('-au', '--auth-user', help='Username for HTTP authentication.')
+parser.add_argument('-ap', '--auth-password', help='Password for HTTP authentication.')
+parser.add_argument('-v', '--verbose', help='Verbosity on sqlline.', default='true')
+parser.add_argument('-c', '--color', help='Color setting for sqlline.', default='true')
+args=parser.parse_args()
+
 phoenix_utils.setPath()
 
-url = "localhost:8765"
-sqlfile = ""
+url = args.url
+sqlfile = args.sqlfile
 serialization_key = 'phoenix.queryserver.serialization'
 
 def usage_and_exit():
@@ -86,25 +102,12 @@ def get_serialization():
 return default_serialization
 return stdout
 
-if len(sys.argv) == 1:
-pass
-elif len(sys.argv) == 2:
-if os.path.isfile(sys.argv[1]):
-sqlfile = sys.argv[1]
-else:
-url = sys.argv[1]
-elif len(sys.argv) == 3:
-url = sys.argv[1]
-sqlfile = sys.argv[2]
-else:
-usage_and_exit()
-
 url = cleanup_url(url)
 
 if sqlfile != "":
 sqlfile = "--run=" + sqlfile
 
-colorSetting = "true"
+colorSetting = args.color
 # disable color setting for windows OS
 if os.name == 'nt':
 colorSetting = "false"
@@ -113,7 +116,7 @@ if os.name == 'nt':
 # HBase/Phoenix client side property override
 hbase_config_path = os.getenv('HBASE_CONF_DIR', phoenix_utils.current_dir)
 
-serialization = get_serialization()
+serialization = args.serialization if args.serialization else get_serialization()
 
 java_home = os.getenv('JAVA_HOME')
 
@@ -145,13 +148,21 @@ if java_home:
 else:
 java = 'java'
 
+jdbc_url = 'jdbc:phoenix:thin:url=' + url + ';serialization=' + serialization
+if args.authentication:
+jdbc_url += ';authentication=' + args.authentication
+if args.auth_user:
+jdbc_url += ';avatica_user=' + args.auth_user
+if args.auth_password:
+jdbc_url += ';avatica_password=' + args.auth_password
+
 java_cmd = java + ' $PHOENIX_OPTS ' + \
 ' -cp "' + phoenix_utils.hbase_conf_dir + os.pathsep + phoenix_utils.phoenix_thin_client_jar + \
 os.pathsep + phoenix_utils.hadoop_conf + os.pathsep + phoenix_utils.hadoop_classpath + '" -Dlog4j.configuration=file:' + \
 os.path.join(phoenix_utils.current_dir, "log4j.properties") + \
 " org.apache.phoenix.queryserver.client.SqllineWrapper -d org.apache.phoenix.queryserver.client.Driver " + \
-" -u \"jdbc:phoenix:thin:url=" + url + ";serialization=" + serialization + "\"" + \
-" -n none -p none --color=" + colorSetting + " --fastConnect=false --verbose=true " + \
+' -u "' + jdbc_url + '"' + " -n " + args.user + " -p " + args.password + \
+" --color=" + colorSetting + " --fastConnect=false --verbose=" + args.verbose + \
 " 

[07/42] phoenix git commit: PHOENIX-3537 Clients not able to resolve sequences when the SYSTEM.SEQUENCE table is salted

2016-12-22 Thread tdsilva
PHOENIX-3537 Clients not able to resolve sequences when the SYSTEM.SEQUENCE table is salted


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/e966e26a
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/e966e26a
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/e966e26a

Branch: refs/heads/encodecolumns2
Commit: e966e26ad90ef80ee2c9a7f074d5389baa1520a7
Parents: 9b06c60
Author: Samarth 
Authored: Thu Dec 15 15:41:20 2016 -0800
Committer: Samarth 
Committed: Thu Dec 15 15:41:20 2016 -0800

--
 .../org/apache/phoenix/query/ConnectionQueryServicesImpl.java| 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/e966e26a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index c8d42d9..c8cd04d 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -2461,7 +2461,9 @@ public class ConnectionQueryServicesImpl extends DelegateQueryServices implement
 private void createOtherSystemTables(PhoenixConnection metaConnection) throws SQLException {
 try {
 metaConnection.createStatement().execute(QueryConstants.CREATE_SEQUENCE_METADATA);
-} catch (TableAlreadyExistsException ignore) {}
+} catch (TableAlreadyExistsException e) {
+nSequenceSaltBuckets = getSaltBuckets(e);
+}
 try {
 metaConnection.createStatement().execute(QueryConstants.CREATE_STATS_TABLE_METADATA);
 } catch (TableAlreadyExistsException ignore) {}
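The fix above stops swallowing TableAlreadyExistsException for SYSTEM.SEQUENCE and instead reads the existing table's salt-bucket count from it, so clients agree with the server on how sequence row keys are salted. The control-flow pattern — recover configuration from an "already exists" error rather than ignoring it — can be sketched as follows; the exception class and helper functions here are hypothetical stand-ins, not the Phoenix API.

```python
class TableAlreadyExistsException(Exception):
    """Hypothetical stand-in: carries the pre-existing table's metadata."""
    def __init__(self, salt_buckets):
        super().__init__("table already exists")
        self.salt_buckets = salt_buckets

def create_sequence_table(existing_salt_buckets=None):
    # simulate the DDL: raise if the table already exists on the cluster
    if existing_salt_buckets is not None:
        raise TableAlreadyExistsException(existing_salt_buckets)
    return 0  # fresh table, default (unsalted) bucket count

def ensure_sequence_table(existing_salt_buckets=None):
    try:
        return create_sequence_table(existing_salt_buckets)
    except TableAlreadyExistsException as e:
        # Recover the salted-table config instead of discarding the error,
        # so later sequence lookups compute the same salted row keys.
        return e.salt_buckets

fresh = ensure_sequence_table()
salted = ensure_sequence_table(existing_salt_buckets=8)
```

The design point is that an "already exists" error is not merely benign noise: it is the only moment the client learns how the existing table was created.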



[08/42] phoenix git commit: PHOENIX-3445 Add a CREATE IMMUTABLE TABLE construct to make immutable tables more explicit

2016-12-22 Thread tdsilva
PHOENIX-3445 Add a CREATE IMMUTABLE TABLE construct to make immutable tables more explicit


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/d2a22418
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/d2a22418
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/d2a22418

Branch: refs/heads/encodecolumns2
Commit: d2a224186ffbd605a028ac99d14bf3ac9436a5c5
Parents: e966e26
Author: Thomas D'Silva 
Authored: Mon Nov 28 17:08:27 2016 -0800
Committer: Thomas D'Silva 
Committed: Fri Dec 16 11:27:26 2016 -0800

--
 .../apache/phoenix/end2end/AlterTableIT.java| 16 ++--
 .../phoenix/end2end/AlterTableWithViewsIT.java  | 41 +
 .../phoenix/end2end/AppendOnlySchemaIT.java | 21 ++---
 .../phoenix/end2end/ImmutableTablePropIT.java   | 92 
 .../end2end/QueryDatabaseMetaDataIT.java|  3 +-
 phoenix-core/src/main/antlr3/PhoenixSQL.g   |  7 +-
 .../phoenix/exception/SQLExceptionCode.java |  4 +-
 .../apache/phoenix/jdbc/PhoenixStatement.java   |  8 +-
 .../phoenix/parse/CreateTableStatement.java | 14 ++-
 .../apache/phoenix/parse/ParseNodeFactory.java  |  4 +-
 .../org/apache/phoenix/query/QueryServices.java |  1 +
 .../apache/phoenix/schema/MetaDataClient.java   | 23 ++---
 .../apache/phoenix/schema/TableProperty.java| 50 ++-
 13 files changed, 219 insertions(+), 65 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/d2a22418/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
index 5da0ee7..155b6c2 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
@@ -1168,9 +1168,8 @@ public class AlterTableIT extends ParallelStatsDisabledIT {
 assertImmutableRows(conn, dataTableFullName, true);
 ddl = "ALTER TABLE " + dataTableFullName + " SET COMPACTION_ENABLED = FALSE, VERSIONS = 10";
 conn.createStatement().execute(ddl);
-ddl = "ALTER TABLE " + dataTableFullName + " SET COMPACTION_ENABLED = FALSE, CF1.MIN_VERSIONS = 1, CF2.MIN_VERSIONS = 3, MIN_VERSIONS = 8, IMMUTABLE_ROWS=false, CF1.KEEP_DELETED_CELLS = true, KEEP_DELETED_CELLS = false";
+ddl = "ALTER TABLE " + dataTableFullName + " SET COMPACTION_ENABLED = FALSE, CF1.MIN_VERSIONS = 1, CF2.MIN_VERSIONS = 3, MIN_VERSIONS = 8, CF1.KEEP_DELETED_CELLS = true, KEEP_DELETED_CELLS = false";
 conn.createStatement().execute(ddl);
-assertImmutableRows(conn, dataTableFullName, false);
 
 try (HBaseAdmin admin = conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin()) {
 HTableDescriptor tableDesc = admin.getTableDescriptor(Bytes.toBytes(dataTableFullName));
@@ -1346,11 +1345,16 @@ public class AlterTableIT extends ParallelStatsDisabledIT {
 String viewFullName = SchemaUtil.getTableName(schemaName, generateUniqueName());
 ddl = "CREATE VIEW " + viewFullName + " AS SELECT * FROM " + dataTableFullName + " WHERE CREATION_TIME = 1";
 conn1.createStatement().execute(ddl);
-ddl = "ALTER VIEW " + viewFullName + " SET IMMUTABLE_ROWS = TRUE";
+ddl = "ALTER VIEW " + viewFullName + " SET UPDATE_CACHE_FREQUENCY = 10";
 conn1.createStatement().execute(ddl);
-assertImmutableRows(conn1, viewFullName, true);
-ddl = "ALTER VIEW " + viewFullName + " SET IMMUTABLE_ROWS = FALSE";
+conn1.createStatement().execute("SELECT * FROM " + viewFullName);
+PhoenixConnection pconn = conn1.unwrap(PhoenixConnection.class);
+assertEquals(10, pconn.getTable(new PTableKey(pconn.getTenantId(), viewFullName)).getUpdateCacheFrequency());
+ddl = "ALTER VIEW " + viewFullName + " SET UPDATE_CACHE_FREQUENCY = 20";
 conn1.createStatement().execute(ddl);
+conn1.createStatement().execute("SELECT * FROM " + viewFullName);
+pconn = conn1.unwrap(PhoenixConnection.class);
+assertEquals(20, pconn.getTable(new PTableKey(pconn.getTenantId(), viewFullName)).getUpdateCacheFrequency());
 assertImmutableRows(conn1, viewFullName, false);
 ddl = "ALTER VIEW " + viewFullName + " SET DISABLE_WAL = TRUE";
 try {
@@ -2230,6 +2234,6 @@ public class AlterTableIT extends ParallelStatsDisabledIT {
 }
 }
 }
-   
+
 }
  

http://git-wip-us.apache.org/repos/asf/phoenix/blob/d2a22418/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java

[26/42] phoenix git commit: Bring back check for existing HBase table

2016-12-22 Thread tdsilva
Bring back check for existing HBase table


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/1bddaa0b
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/1bddaa0b
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/1bddaa0b

Branch: refs/heads/encodecolumns2
Commit: 1bddaa0b45d96c9cf974b48b0d2ae1fc988d8882
Parents: ffb984f
Author: Samarth 
Authored: Wed Nov 23 10:10:01 2016 -0800
Committer: Thomas D'Silva 
Committed: Thu Dec 22 13:00:44 2016 -0800

--
 .../org/apache/phoenix/schema/MetaDataClient.java | 18 +-
 .../apache/phoenix/compile/WhereCompilerTest.java |  4 ++--
 .../apache/phoenix/execute/MutationStateTest.java |  4 ++--
 .../apache/phoenix/query/ConnectionlessTest.java  | 14 --
 4 files changed, 25 insertions(+), 15 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/1bddaa0b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
--
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
index 8936b9b..53c1c86 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
@@ -2041,10 +2041,7 @@ public class MetaDataClient {
  * We can't control what column qualifiers are used in HTable mapped to Phoenix views. So we are not
  * able to encode column names.
  */
-if (viewType == MAPPED) {
-storageScheme = ONE_CELL_PER_KEYVALUE_COLUMN;
-encodingScheme = FOUR_BYTE_QUALIFIERS;
-} else {
+if (viewType != MAPPED) {
 /*
  * For regular phoenix views, use the storage scheme of the physical table since they all share the same HTable.
  * Views always use the base table's column qualifier counter for doling out
@@ -2078,7 +2075,18 @@ public class MetaDataClient {
  * because we cannot control the column qualifiers that were used when populating the hbase table.
  * TODO: samarth add a test case for this
  */
-if (parent != null) {
+
+byte[] tableNameBytes = SchemaUtil.getTableNameAsBytes(schemaName, tableName);
+boolean tableExists = true;
+try {
+connection.getQueryServices().getTableDescriptor(tableNameBytes);
+} catch (org.apache.phoenix.schema.TableNotFoundException e) {
+tableExists = false;
+}
+if (tableExists) {
+storageScheme = ONE_CELL_PER_KEYVALUE_COLUMN;
+encodingScheme = NON_ENCODED_QUALIFIERS;
+} else if (parent != null) {
 storageScheme = parent.getStorageScheme();
 encodingScheme = parent.getEncodingScheme();
 } else if (isImmutableRows) {

http://git-wip-us.apache.org/repos/asf/phoenix/blob/1bddaa0b/phoenix-core/src/test/java/org/apache/phoenix/compile/WhereCompilerTest.java
--
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/compile/WhereCompilerTest.java b/phoenix-core/src/test/java/org/apache/phoenix/compile/WhereCompilerTest.java
index 06c20d3..c65408e 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/compile/WhereCompilerTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/compile/WhereCompilerTest.java
@@ -121,7 +121,7 @@ public class WhereCompilerTest extends BaseConnectionlessQueryTest {
 Filter filter = scan.getFilter();
 Expression idExpression = new ColumnRef(plan.getTableRef(), plan.getTableRef().getTable().getPColumnForColumnName("ID").getPosition()).newColumnExpression();
 Expression id = new RowKeyColumnExpression(idExpression,new RowKeyValueAccessor(plan.getTableRef().getTable().getPKColumns(),0));
-Expression company = new KeyValueColumnExpression(plan.getTableRef().getTable().getPColumnForColumnName("COMPANY"), true);
+Expression company = new KeyValueColumnExpression(plan.getTableRef().getTable().getPColumnForColumnName("COMPANY"), false);
 // FilterList has no equals implementation
 assertTrue(filter instanceof FilterList);
 FilterList filterList = (FilterList)filter;
@@ -153,7 +153,7 @@ public class WhereCompilerTest extends 
BaseConnectionlessQueryTest {
 

[38/42] phoenix git commit: PHOENIX-3445 Add a CREATE IMMUTABLE TABLE construct to make immutable tables more explicit

2016-12-22 Thread tdsilva
PHOENIX-3445 Add a CREATE IMMUTABLE TABLE construct to make immutable tables more explicit


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/be6861c8
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/be6861c8
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/be6861c8

Branch: refs/heads/encodecolumns2
Commit: be6861c8963d8ee05caa4c5aabc78fd05bd59605
Parents: 01ef5d5
Author: Thomas D'Silva 
Authored: Mon Nov 28 17:08:27 2016 -0800
Committer: Thomas D'Silva 
Committed: Thu Dec 22 13:03:27 2016 -0800

--
 .../apache/phoenix/end2end/AlterTableIT.java| 40 +++-
 .../phoenix/end2end/AlterTableWithViewsIT.java  |  4 ++
 .../apache/phoenix/schema/MetaDataClient.java   |  4 ++
 3 files changed, 47 insertions(+), 1 deletion(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/be6861c8/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
index 173a3be..3084a92 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
@@ -2502,6 +2502,44 @@ public class AlterTableIT extends ParallelStatsDisabledIT {
 assertFalse(rs.next());
 }
 }
-   
+
+@Test
+public void testAlterImmutableRowsPropertyForOneCellPerKeyValueColumnStorageScheme() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+String ddl = "CREATE TABLE " + dataTableFullName + " (\n"
++"ID VARCHAR(15) NOT NULL,\n"
++"CREATED_DATE DATE,\n"
++"CREATION_TIME BIGINT,\n"
++"CONSTRAINT PK PRIMARY KEY (ID))";
+Connection conn = DriverManager.getConnection(getUrl(), props);
+conn.createStatement().execute(ddl);
+assertImmutableRows(conn, dataTableFullName, false);
+ddl = "ALTER TABLE " + dataTableFullName + " SET IMMUTABLE_ROWS = true";
+conn.createStatement().execute(ddl);
+assertImmutableRows(conn, dataTableFullName, true);
+}
+
+@Test
+public void testAlterImmutableRowsPropertyForOneCellPerColumnFamilyStorageScheme() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+String ddl = "CREATE TABLE " + dataTableFullName + " (\n"
++"ID VARCHAR(15) NOT NULL,\n"
++"CREATED_DATE DATE,\n"
++"CREATION_TIME BIGINT,\n"
++"CONSTRAINT PK PRIMARY KEY (ID)) IMMUTABLE_ROWS=true";
+Connection conn = DriverManager.getConnection(getUrl(), props);
+conn.createStatement().execute(ddl);
+assertImmutableRows(conn, dataTableFullName, true);
+try {
+   ddl = "ALTER TABLE " + dataTableFullName + " SET IMMUTABLE_ROWS = false";
+   conn.createStatement().execute(ddl);
+   fail();
+}
+catch(SQLException e) {
+   assertEquals(SQLExceptionCode.CANNOT_ALTER_IMMUTABLE_ROWS_PROPERTY.getErrorCode(), e.getErrorCode());
+}
+assertImmutableRows(conn, dataTableFullName, true);
+}
+
 }
  

http://git-wip-us.apache.org/repos/asf/phoenix/blob/be6861c8/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
index e0bbb10..310071f 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
@@ -19,6 +19,7 @@ package org.apache.phoenix.end2end;
 
 import static org.apache.phoenix.exception.SQLExceptionCode.CANNOT_MUTATE_TABLE;
 import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
@@ -32,6 +33,7 @@ import java.sql.ResultSetMetaData;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.Collection;
+import java.util.Properties;
 
 import org.apache.commons.lang.ArrayUtils;
 import org.apache.hadoop.hbase.client.HTableInterface;
@@ -45,6 +47,8 @@ import org.apache.phoenix.schema.PNameFactory;
 import 

[13/42] phoenix git commit: PHOENIX-3355 Register Phoenix built-in functions as Calcite functions

2016-12-22 Thread tdsilva
PHOENIX-3355 Register Phoenix built-in functions as Calcite functions


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/90d27be5
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/90d27be5
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/90d27be5

Branch: refs/heads/encodecolumns2
Commit: 90d27be5b469f66e346539f5c273623a39f1c6c0
Parents: 18cb85e
Author: Eric Lomore 
Authored: Wed Dec 21 17:23:02 2016 -0800
Committer: maryannxue 
Committed: Wed Dec 21 17:23:02 2016 -0800

--
 .../org/apache/phoenix/end2end/DateTimeIT.java  | 48 ++--
 .../phoenix/end2end/DefaultColumnValueIT.java   |  6 +--
 .../phoenix/end2end/ToDateFunctionIT.java   |  2 +-
 .../phoenix/end2end/index/LocalIndexIT.java |  6 +--
 .../ByteBasedRegexpReplaceFunction.java | 11 +
 .../function/ByteBasedRegexpSplitFunction.java  | 10 
 .../function/ByteBasedRegexpSubstrFunction.java | 16 +++
 .../expression/function/CeilDateExpression.java | 14 ++
 .../function/CeilDecimalExpression.java | 13 ++
 .../expression/function/CeilFunction.java   | 10 +++-
 .../function/CeilTimestampExpression.java   | 13 ++
 .../function/CurrentDateFunction.java   |  9 
 .../function/CurrentTimeFunction.java   |  9 
 .../expression/function/FirstValueFunction.java |  4 ++
 .../function/FloorDateExpression.java   | 13 ++
 .../function/FloorDecimalExpression.java| 13 ++
 .../expression/function/FloorFunction.java  |  5 +-
 .../expression/function/LastValueFunction.java  |  4 ++
 .../function/MaxAggregateFunction.java  |  4 ++
 .../function/MinAggregateFunction.java  |  4 ++
 .../expression/function/NowFunction.java|  5 +-
 .../expression/function/NthValueFunction.java   |  4 ++
 .../function/RegexpReplaceFunction.java |  5 +-
 .../function/RegexpSplitFunction.java   |  4 +-
 .../function/RegexpSubstrFunction.java  |  5 +-
 .../function/RoundDateExpression.java   | 12 +
 .../function/RoundDecimalExpression.java| 12 +
 .../expression/function/RoundFunction.java  |  5 +-
 .../function/RoundTimestampExpression.java  | 14 +-
 .../StringBasedRegexpReplaceFunction.java   | 12 +
 .../StringBasedRegexpSplitFunction.java | 10 
 .../StringBasedRegexpSubstrFunction.java| 12 +
 .../expression/function/ToCharFunction.java | 34 ++
 .../expression/function/ToDateFunction.java | 14 ++
 .../expression/function/ToNumberFunction.java   | 34 ++
 .../expression/function/ToTimeFunction.java |  6 +++
 .../function/ToTimestampFunction.java   |  5 ++
 .../expression/function/TruncFunction.java  |  5 +-
 .../apache/phoenix/parse/FunctionParseNode.java | 31 +
 .../apache/phoenix/parse/ParseNodeFactory.java  | 38 +++-
 40 files changed, 429 insertions(+), 52 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/90d27be5/phoenix-core/src/it/java/org/apache/phoenix/end2end/DateTimeIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DateTimeIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DateTimeIT.java
index ffcb472..74cc068 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DateTimeIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DateTimeIT.java
@@ -411,45 +411,45 @@ public class DateTimeIT extends ParallelStatsDisabledIT {
 @Test
 public void testYearFunctionDate() throws SQLException {
 
-assertEquals(2008, callYearFunction("YEAR(TO_DATE('2008-01-01', '-MM-dd', 'local'))"));
+assertEquals(2008, callYearFunction("\"YEAR\"(TO_DATE('2008-01-01', '-MM-dd', 'local'))"));
 
 assertEquals(2004,
-callYearFunction("YEAR(TO_DATE('2004-12-13 10:13:18', '-MM-dd hh:mm:ss'))"));
+callYearFunction("\"YEAR\"(TO_DATE('2004-12-13 10:13:18', '-MM-dd hh:mm:ss'))"));
 
-assertEquals(2015, callYearFunction("YEAR(TO_DATE('2015-01-27T16:17:57+00:00'))"));
+assertEquals(2015, callYearFunction("\"YEAR\"(TO_DATE('2015-01-27T16:17:57+00:00'))"));
 
-assertEquals(2005, callYearFunction("YEAR(TO_DATE('2005-12-13 10:13:18'))"));
+assertEquals(2005, callYearFunction("\"YEAR\"(TO_DATE('2005-12-13 10:13:18'))"));
 
-assertEquals(2006, callYearFunction("YEAR(TO_DATE('2006-12-13'))"));
+assertEquals(2006, callYearFunction("\"YEAR\"(TO_DATE('2006-12-13'))"));
 
-assertEquals(2015, callYearFunction("YEAR(TO_DATE('2015-W05'))"));
+assertEquals(2015, 

Apache-Phoenix | origin/4.9-HBase-0.98 | Build Successful

2016-12-22 Thread Apache Jenkins Server
origin/4.9-HBase-0.98 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/origin/4.9-HBase-0.98

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.9/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.9/lastCompletedBuild/testReport/

Changes
[tdsilva] PHOENIX-3516 Performance Issues with queries that have compound filters



Build times for last couple of runs. Latest build time is the rightmost. Legend: blue = normal, red = test failure, gray = timeout.


Apache-Phoenix | 4.x-HBase-1.0 | Build Successful

2016-12-22 Thread Apache Jenkins Server
4.x-HBase-1.0 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.x-HBase-1.0

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.x-HBase-1.0/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.x-HBase-1.0/lastCompletedBuild/testReport/

Changes
[tdsilva] PHOENIX-3516 Performance Issues with queries that have compound filters



Build times for last couple of runs. Latest build time is the rightmost. Legend: blue = normal, red = test failure, gray = timeout.


Apache-Phoenix | origin/4.9-HBase-1.2 | Build Failure

2016-12-22 Thread Apache Jenkins Server


phoenix git commit: PHOENIX-3516 Performance Issues with queries that have compound filters and specify phoenix.query.force.rowkeyorder=true

2016-12-22 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/master bd2acd540 -> c5046047a


PHOENIX-3516 Performance Issues with queries that have compound filters and 
specify phoenix.query.force.rowkeyorder=true


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/c5046047
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/c5046047
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/c5046047

Branch: refs/heads/master
Commit: c5046047a78e0365d75bc696dff4870304c2b5b2
Parents: bd2acd5
Author: Thomas D'Silva 
Authored: Tue Dec 20 17:56:37 2016 -0800
Committer: Thomas D'Silva 
Committed: Thu Dec 22 10:54:24 2016 -0800

--
 .../end2end/TenantSpecificViewIndexIT.java  | 47 
 .../apache/phoenix/compile/WhereCompiler.java   |  3 +-
 .../org/apache/phoenix/util/ExpressionUtil.java | 10 +
 .../phoenix/query/KeyRangeIntersectTest.java|  9 +++-
 4 files changed, 67 insertions(+), 2 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/phoenix/blob/c5046047/phoenix-core/src/it/java/org/apache/phoenix/end2end/TenantSpecificViewIndexIT.java
--
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TenantSpecificViewIndexIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TenantSpecificViewIndexIT.java
index b7b8902..6ae1445 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/TenantSpecificViewIndexIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/TenantSpecificViewIndexIT.java
@@ -19,6 +19,7 @@ package org.apache.phoenix.end2end;
 
 import static org.apache.phoenix.util.MetaDataUtil.getViewIndexSequenceName;
 import static org.apache.phoenix.util.MetaDataUtil.getViewIndexSequenceSchemaName;
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
@@ -28,6 +29,7 @@ import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
+import java.sql.SQLException;
 import java.util.Properties;
 
 import org.apache.hadoop.hbase.util.Bytes;
@@ -284,4 +286,49 @@ public class TenantSpecificViewIndexIT extends BaseTenantSpecificViewIndexIT {
         assertEquals("value1", rs.getString(1));
         assertFalse("No other rows should have been returned for the tenant", rs.next()); // should have just returned one record since for org1 we have only one row.
     }
+
+    @Test
+    public void testOverlappingDatesFilter() throws SQLException {
+        String tenantUrl = getUrl() + ';' + TENANT_ID_ATTRIB + "=tenant1" + ";" + QueryServices.FORCE_ROW_KEY_ORDER_ATTRIB + "=true";
+        String tableName = generateUniqueName();
+        String viewName = generateUniqueName();
+        String ddl = "CREATE TABLE " + tableName
+                + "(ORGANIZATION_ID CHAR(15) NOT NULL, "
+                + "PARENT_TYPE CHAR(3) NOT NULL, "
+                + "PARENT_ID CHAR(15) NOT NULL,"
+                + "CREATED_DATE DATE NOT NULL "
+                + "CONSTRAINT PK PRIMARY KEY (ORGANIZATION_ID, PARENT_TYPE, PARENT_ID, CREATED_DATE DESC)"
+                + ") VERSIONS=1,MULTI_TENANT=true,REPLICATION_SCOPE=1";
+
+        try (Connection conn = DriverManager.getConnection(getUrl());
+                Connection viewConn = DriverManager.getConnection(tenantUrl)) {
+            // create table
+            conn.createStatement().execute(ddl);
+            // create index
+            conn.createStatement().execute("CREATE INDEX IF NOT EXISTS IDX ON " + tableName + "(PARENT_TYPE, CREATED_DATE, PARENT_ID)");
+            // create view
+            viewConn.createStatement().execute("CREATE VIEW IF NOT EXISTS " + viewName + " AS SELECT * FROM " + tableName);
+
+            String query = "EXPLAIN SELECT PARENT_ID FROM " + viewName
+                    + " WHERE PARENT_TYPE='001' "
+                    + "AND (CREATED_DATE > to_date('2011-01-01') AND CREATED_DATE < to_date('2016-10-31'))"
+                    + "ORDER BY PARENT_TYPE,CREATED_DATE LIMIT 501";
+
+            ResultSet rs = viewConn.createStatement().executeQuery(query);
+            String expectedPlanFormat = "CLIENT SERIAL 1-WAY RANGE SCAN OVER IDX ['tenant1','001','%s 00:00:00.001'] - ['tenant1','001','%s 00:00:00.000']" + "\n" +
+                    "SERVER FILTER BY FIRST KEY ONLY" + "\n" +
+                    "SERVER 501 ROW LIMIT" + "\n" +
+                    "CLIENT 501 ROW LIMIT";
+            assertEquals(String.format(expectedPlanFormat, "2011-01-01",

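The expectedPlanFormat in the test above encodes how an open date interval becomes a key range: the exclusive lower bound to_date('2011-01-01') turns into an inclusive start key one millisecond later ('... 00:00:00.001'), while the exclusive upper bound maps directly to the stop row '2016-10-31 00:00:00.000'. A minimal sketch of that adjustment in plain Python, assuming millisecond granularity (illustration only, not Phoenix's actual KeyRange code):

```python
from datetime import datetime, timedelta

ONE_MS = timedelta(milliseconds=1)

def open_interval_to_scan_range(lower_open, upper_open):
    # 'col > lo' over millisecond-granularity dates is equivalent to an
    # inclusive start key at lo + 1ms; 'col < hi' maps to an exclusive
    # stop row at hi itself, matching the EXPLAIN output above.
    return lower_open + ONE_MS, upper_open

start, stop = open_interval_to_scan_range(datetime(2011, 1, 1),
                                          datetime(2016, 10, 31))
# start == 2011-01-01 00:00:00.001000, stop == 2016-10-31 00:00:00
```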
phoenix git commit: PHOENIX-3516 Performance Issues with queries that have compound filters and specify phoenix.query.force.rowkeyorder=true

2016-12-22 Thread tdsilva
Repository: phoenix
Updated Branches:
  refs/heads/4.9-HBase-0.98 f3117248f -> f6a8acc9d


PHOENIX-3516 Performance Issues with queries that have compound filters and 
specify phoenix.query.force.rowkeyorder=true


Project: http://git-wip-us.apache.org/repos/asf/phoenix/repo
Commit: http://git-wip-us.apache.org/repos/asf/phoenix/commit/f6a8acc9
Tree: http://git-wip-us.apache.org/repos/asf/phoenix/tree/f6a8acc9
Diff: http://git-wip-us.apache.org/repos/asf/phoenix/diff/f6a8acc9

Branch: refs/heads/4.9-HBase-0.98
Commit: f6a8acc9d0ef127d54af0e261be97319e0d04a12
Parents: f311724
Author: Thomas D'Silva 
Authored: Tue Dec 20 17:56:37 2016 -0800
Committer: Thomas D'Silva 
Committed: Thu Dec 22 10:50:30 2016 -0800

--
 .../end2end/TenantSpecificViewIndexIT.java  | 47 
 .../apache/phoenix/compile/WhereCompiler.java   |  3 +-
 .../org/apache/phoenix/util/ExpressionUtil.java | 10 +
 .../phoenix/query/KeyRangeIntersectTest.java|  9 +++-
 4 files changed, 67 insertions(+), 2 deletions(-)
--



Jenkins build is back to normal : Phoenix-4.x-HBase-1.1 #298

2016-12-22 Thread Apache Jenkins Server
See 



Apache-Phoenix | 4.x-HBase-0.98 | Build Successful

2016-12-22 Thread Apache Jenkins Server
4.x-HBase-0.98 branch build status Successful

Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/4.x-HBase-0.98

Compiled Artifacts https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/lastSuccessfulBuild/artifact/

Test Report https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/lastCompletedBuild/testReport/

Changes
[ankitsinghal59] PHOENIX-3540 Fix Time data type in Phoenix Spark integration



Build times for last couple of runs. Latest build time is the rightmost. Legend: blue = normal, red = test failure, gray = timeout.


Apache-Phoenix | Master | Build Successful

2016-12-22 Thread Apache Jenkins Server
Master branch build status Successful
Source repository https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/heads/master

Last Successful Compiled Artifacts https://builds.apache.org/job/Phoenix-master/lastSuccessfulBuild/artifact/

Last Complete Test Report https://builds.apache.org/job/Phoenix-master/lastCompletedBuild/testReport/

Changes
[ankitsinghal59] PHOENIX-3540 Fix Time data type in Phoenix Spark integration



Build times for last couple of runs. Latest build time is the rightmost. Legend: blue = normal, red = test failure, gray = timeout.