[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-374: Enable access to dynamic columns in * or cf.* selection (Addendum)

2019-02-28 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new a4e213d  PHOENIX-374: Enable access to dynamic columns in * or cf.* selection (Addendum)
a4e213d is described below

commit a4e213de1674ec9d8132e0a3ea173c62c52dfd03
Author: Chinmay Kulkarni 
AuthorDate: Thu Feb 28 15:47:42 2019 -0800

PHOENIX-374: Enable access to dynamic columns in * or cf.* selection (Addendum)
---
 phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
index 8b71c54..5f499d8 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
@@ -1291,7 +1291,9 @@ public class PTableImpl implements PTable {
         }
         String fam = Bytes.toString(family);
         if (column.isDynamic()) {
-            this.colFamToDynamicColumnsMapping.putIfAbsent(fam, new ArrayList<PColumn>());
+            if (!this.colFamToDynamicColumnsMapping.containsKey(fam)) {
+                this.colFamToDynamicColumnsMapping.put(fam, new ArrayList<PColumn>());
+            }
             this.colFamToDynamicColumnsMapping.get(fam).add(column);
         }
     }
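
A note on the addendum itself: it replaces Map.putIfAbsent with an explicit containsKey/put check. The get-or-create semantics are unchanged; the explicit form simply avoids Map.putIfAbsent, which exists only as a java.util.Map default method from Java 8 onwards. A minimal standalone sketch of the pattern (DynamicColumnRegistry and its names are illustrative, not Phoenix code):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DynamicColumnRegistry {
    // Groups dynamically-declared column names by column family, the same
    // shape as PTableImpl's colFamToDynamicColumnsMapping.
    private final Map<String, List<String>> colFamToDynamicColumns =
            new HashMap<String, List<String>>();

    // Equivalent to map.putIfAbsent(fam, new ArrayList<String>()) followed by
    // map.get(fam).add(column), without the Java 8 default method.
    public void addDynamicColumn(String fam, String column) {
        if (!colFamToDynamicColumns.containsKey(fam)) {
            colFamToDynamicColumns.put(fam, new ArrayList<String>());
        }
        colFamToDynamicColumns.get(fam).add(column);
    }
}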



[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-374: Enable access to dynamic columns in * or cf.* selection (Addendum)

2019-02-28 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new f749240  PHOENIX-374: Enable access to dynamic columns in * or cf.* selection (Addendum)
f749240 is described below

commit f749240fe16011672df57779bc40f51dddefb011
Author: Chinmay Kulkarni 
AuthorDate: Thu Feb 28 15:47:42 2019 -0800

PHOENIX-374: Enable access to dynamic columns in * or cf.* selection (Addendum)
---
 phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
index 8b71c54..5f499d8 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
@@ -1291,7 +1291,9 @@ public class PTableImpl implements PTable {
         }
         String fam = Bytes.toString(family);
         if (column.isDynamic()) {
-            this.colFamToDynamicColumnsMapping.putIfAbsent(fam, new ArrayList<PColumn>());
+            if (!this.colFamToDynamicColumnsMapping.containsKey(fam)) {
+                this.colFamToDynamicColumnsMapping.put(fam, new ArrayList<PColumn>());
+            }
             this.colFamToDynamicColumnsMapping.get(fam).add(column);
         }
     }



[phoenix] branch master updated: PHOENIX-374: Enable access to dynamic columns in * or cf.* selection (Addendum)

2019-02-28 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new af9ae19  PHOENIX-374: Enable access to dynamic columns in * or cf.* selection (Addendum)
af9ae19 is described below

commit af9ae1933b48b00e7ff9c4d8417678338ce4a18b
Author: Chinmay Kulkarni 
AuthorDate: Thu Feb 28 15:47:42 2019 -0800

PHOENIX-374: Enable access to dynamic columns in * or cf.* selection (Addendum)
---
 phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
index a7936e0..cd961da 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/schema/PTableImpl.java
@@ -1291,7 +1291,9 @@ public class PTableImpl implements PTable {
         }
         String fam = Bytes.toString(family);
         if (column.isDynamic()) {
-            this.colFamToDynamicColumnsMapping.putIfAbsent(fam, new ArrayList<>());
+            if (!this.colFamToDynamicColumnsMapping.containsKey(fam)) {
+                this.colFamToDynamicColumnsMapping.put(fam, new ArrayList<>());
+            }
             this.colFamToDynamicColumnsMapping.get(fam).add(column);
         }
     }



[phoenix] branch master updated: PHOENIX-5122: PHOENIX-4322 breaks client backward compatibility

2019-03-06 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new afb191d  PHOENIX-5122: PHOENIX-4322 breaks client backward compatibility
afb191d is described below

commit afb191d16f6a155162c1a76d5ec20d5a48ba457e
Author: Jacob Isaac 
AuthorDate: Wed Feb 27 14:11:55 2019 -0800

PHOENIX-5122: PHOENIX-4322 breaks client backward compatibility

Signed-off-by: Chinmay Kulkarni 
---
 .../expression/RowValueConstructorExpression.java  | 53 +++---
 1 file changed, 47 insertions(+), 6 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java b/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
index 9bb7234..c06bdc8 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
@@ -28,6 +28,7 @@ import java.io.DataInput;
 import java.io.DataOutput;
 import java.io.IOException;
 import java.util.Arrays;
+import java.util.BitSet;
 import java.util.List;
 
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
@@ -47,13 +48,42 @@ public class RowValueConstructorExpression extends BaseCompoundExpression {
     private int partialEvalIndex = -1;
     private int estimatedByteSize;
 
+    // The boolean field that indicated the object is a literal constant
+    // has been repurposed as a bitset and now holds additional information.
+    // This is to facilitate b/w compat to 4.13 clients.
+    // @see <a href="https://issues.apache.org/jira/browse/PHOENIX-5122">PHOENIX-5122</a>
+    private BitSet extraFields;
+
+    // Important: when you want to add new bits, make sure to add them towards
+    // the end, else b/w compat will break again.
+    private enum ExtraFieldPosition {
+
+        LITERAL_CONSTANT(0),
+        STRIP_TRAILING_SEPARATOR_BYTE(1);
+
+        private int bitPosition;
+
+        private ExtraFieldPosition(int position) {
+            bitPosition = position;
+        }
+
+        private int getBitPosition() {
+            return bitPosition;
+        }
+    }
+
     public RowValueConstructorExpression() {
     }
 
     public RowValueConstructorExpression(List<Expression> children, boolean isConstant) {
         super(children);
+        extraFields = new BitSet(8);
+        extraFields.set(ExtraFieldPosition.STRIP_TRAILING_SEPARATOR_BYTE.getBitPosition());
+        if (isConstant) {
+            extraFields.set(ExtraFieldPosition.LITERAL_CONSTANT.getBitPosition());
+        }
         estimatedByteSize = 0;
-        init(isConstant);
+        init();
     }
 
     public RowValueConstructorExpression clone(List<Expression> children) {
@@ -82,24 +112,34 @@ public class RowValueConstructorExpression extends BaseCompoundExpression {
     @Override
     public void readFields(DataInput input) throws IOException {
         super.readFields(input);
-        init(input.readBoolean());
+        extraFields = BitSet.valueOf(new byte[] {input.readByte()});
+        init();
     }
 
     @Override
     public void write(DataOutput output) throws IOException {
         super.write(output);
-        output.writeBoolean(literalExprPtr != null);
+        byte[] b = extraFields.toByteArray();
+        output.writeByte((int)(b.length > 0 ? b[0] & 0xff : 0));
     }
 
-    private void init(boolean isConstant) {
+    private void init() {
         this.ptrs = new ImmutableBytesWritable[children.size()];
-        if(isConstant) {
+        if (isConstant()) {
             ImmutableBytesWritable ptr = new ImmutableBytesWritable();
             this.evaluate(null, ptr);
             literalExprPtr = ptr;
         }
     }
 
+    private boolean isConstant() {
+        return extraFields.get(ExtraFieldPosition.LITERAL_CONSTANT.getBitPosition());
+    }
+
+    private boolean isStripTrailingSepByte() {
+        return extraFields.get(ExtraFieldPosition.STRIP_TRAILING_SEPARATOR_BYTE.getBitPosition());
+    }
+
     @Override
     public PDataType getDataType() {
         return PVarbinary.INSTANCE;
@@ -200,7 +240,8 @@ public class RowValueConstructorExpression extends BaseCompoundExpression {
             for (int k = expressionCount - 1;
                     k >= 0 && getChildren().get(k).getDataType() != null
                     && !getChildren().get(k).getDataType().isFixedWidth()
-                    && outputBytes[outputSize-1] == SchemaUtil.getSeparatorByte(true, false, getChildren().get(k)); k--) {
+                    && outputBytes[outputSize-1] == SchemaUtil

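A note on why this change restores compatibility with 4.13 clients: DataOutput.writeBoolean serializes exactly one byte (0 or 1), so the byte an old client writes can be re-read by a newer server as a BitSet whose bit 0 carries the old isConstant meaning, with all newer flags reading as unset. A standalone sketch of that encoding idea (names and layout are illustrative, not the Phoenix source):

import java.util.BitSet;

public class FlagsByteDemo {
    static final int LITERAL_CONSTANT = 0;              // the bit old clients wrote as a boolean
    static final int STRIP_TRAILING_SEPARATOR_BYTE = 1; // new flag, unknown to old clients

    static byte encode(boolean constant, boolean stripSep) {
        BitSet bits = new BitSet(8);
        if (constant) bits.set(LITERAL_CONSTANT);
        if (stripSep) bits.set(STRIP_TRAILING_SEPARATOR_BYTE);
        byte[] b = bits.toByteArray();                  // empty array when no bit is set
        return b.length > 0 ? b[0] : 0;
    }

    static BitSet decode(byte raw) {
        // Also accepts the 0/1 bytes produced by DataOutput.writeBoolean.
        return BitSet.valueOf(new byte[] { raw });
    }

    public static void main(String[] args) {
        BitSet fromOldClient = decode((byte) 1);        // old client wrote writeBoolean(true)
        System.out.println(fromOldClient.get(LITERAL_CONSTANT));              // true
        System.out.println(fromOldClient.get(STRIP_TRAILING_SEPARATOR_BYTE)); // false
    }
}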
[phoenix] branch 4.x-HBase-1.2 updated: PHOENIX-5122: PHOENIX-4322 breaks client backward compatibility

2019-03-06 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.2 by this push:
 new 947e932  PHOENIX-5122: PHOENIX-4322 breaks client backward compatibility
947e932 is described below

commit 947e93299f5ee7935b94146b8ee478589162a7d1
Author: Jacob Isaac 
AuthorDate: Wed Feb 27 14:11:55 2019 -0800

PHOENIX-5122: PHOENIX-4322 breaks client backward compatibility

Signed-off-by: Chinmay Kulkarni 
---
 .../expression/RowValueConstructorExpression.java  | 53 +++---
 1 file changed, 47 insertions(+), 6 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java b/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
index 9bb7234..c06bdc8 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
@@ -28,6 +28,7 @@ import java.io.DataInput;
 import java.io.DataOutput;
 import java.io.IOException;
 import java.util.Arrays;
+import java.util.BitSet;
 import java.util.List;
 
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
@@ -47,13 +48,42 @@ public class RowValueConstructorExpression extends BaseCompoundExpression {
     private int partialEvalIndex = -1;
     private int estimatedByteSize;
 
+    // The boolean field that indicated the object is a literal constant
+    // has been repurposed as a bitset and now holds additional information.
+    // This is to facilitate b/w compat to 4.13 clients.
+    // @see <a href="https://issues.apache.org/jira/browse/PHOENIX-5122">PHOENIX-5122</a>
+    private BitSet extraFields;
+
+    // Important: when you want to add new bits, make sure to add them towards
+    // the end, else b/w compat will break again.
+    private enum ExtraFieldPosition {
+
+        LITERAL_CONSTANT(0),
+        STRIP_TRAILING_SEPARATOR_BYTE(1);
+
+        private int bitPosition;
+
+        private ExtraFieldPosition(int position) {
+            bitPosition = position;
+        }
+
+        private int getBitPosition() {
+            return bitPosition;
+        }
+    }
+
     public RowValueConstructorExpression() {
     }
 
     public RowValueConstructorExpression(List<Expression> children, boolean isConstant) {
         super(children);
+        extraFields = new BitSet(8);
+        extraFields.set(ExtraFieldPosition.STRIP_TRAILING_SEPARATOR_BYTE.getBitPosition());
+        if (isConstant) {
+            extraFields.set(ExtraFieldPosition.LITERAL_CONSTANT.getBitPosition());
+        }
         estimatedByteSize = 0;
-        init(isConstant);
+        init();
     }
 
     public RowValueConstructorExpression clone(List<Expression> children) {
@@ -82,24 +112,34 @@ public class RowValueConstructorExpression extends BaseCompoundExpression {
     @Override
     public void readFields(DataInput input) throws IOException {
         super.readFields(input);
-        init(input.readBoolean());
+        extraFields = BitSet.valueOf(new byte[] {input.readByte()});
+        init();
     }
 
     @Override
     public void write(DataOutput output) throws IOException {
         super.write(output);
-        output.writeBoolean(literalExprPtr != null);
+        byte[] b = extraFields.toByteArray();
+        output.writeByte((int)(b.length > 0 ? b[0] & 0xff : 0));
     }
 
-    private void init(boolean isConstant) {
+    private void init() {
         this.ptrs = new ImmutableBytesWritable[children.size()];
-        if(isConstant) {
+        if (isConstant()) {
             ImmutableBytesWritable ptr = new ImmutableBytesWritable();
             this.evaluate(null, ptr);
             literalExprPtr = ptr;
         }
     }
 
+    private boolean isConstant() {
+        return extraFields.get(ExtraFieldPosition.LITERAL_CONSTANT.getBitPosition());
+    }
+
+    private boolean isStripTrailingSepByte() {
+        return extraFields.get(ExtraFieldPosition.STRIP_TRAILING_SEPARATOR_BYTE.getBitPosition());
+    }
+
     @Override
     public PDataType getDataType() {
         return PVarbinary.INSTANCE;
@@ -200,7 +240,8 @@ public class RowValueConstructorExpression extends BaseCompoundExpression {
             for (int k = expressionCount - 1;
                     k >= 0 && getChildren().get(k).getDataType() != null
                     && !getChildren().get(k).getDataType().isFixedWidth()
-                    && outputBytes[outputSize-1] == SchemaUtil.getSeparatorByte(true, false, getChildren().get(k)); k--) {
+                    && outputBytes[outputSize-1] == SchemaUtil

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5122: PHOENIX-4322 breaks client backward compatibility

2019-03-06 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 1e1cf1d  PHOENIX-5122: PHOENIX-4322 breaks client backward compatibility
1e1cf1d is described below

commit 1e1cf1d1ec37107a455bc3543c5fb481cbf0efb2
Author: Jacob Isaac 
AuthorDate: Wed Feb 27 14:11:55 2019 -0800

PHOENIX-5122: PHOENIX-4322 breaks client backward compatibility

Signed-off-by: Chinmay Kulkarni 
---
 .../expression/RowValueConstructorExpression.java  | 53 +++---
 1 file changed, 47 insertions(+), 6 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java b/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
index 9bb7234..c06bdc8 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
@@ -28,6 +28,7 @@ import java.io.DataInput;
 import java.io.DataOutput;
 import java.io.IOException;
 import java.util.Arrays;
+import java.util.BitSet;
 import java.util.List;
 
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
@@ -47,13 +48,42 @@ public class RowValueConstructorExpression extends BaseCompoundExpression {
     private int partialEvalIndex = -1;
     private int estimatedByteSize;
 
+    // The boolean field that indicated the object is a literal constant
+    // has been repurposed as a bitset and now holds additional information.
+    // This is to facilitate b/w compat to 4.13 clients.
+    // @see <a href="https://issues.apache.org/jira/browse/PHOENIX-5122">PHOENIX-5122</a>
+    private BitSet extraFields;
+
+    // Important: when you want to add new bits, make sure to add them towards
+    // the end, else b/w compat will break again.
+    private enum ExtraFieldPosition {
+
+        LITERAL_CONSTANT(0),
+        STRIP_TRAILING_SEPARATOR_BYTE(1);
+
+        private int bitPosition;
+
+        private ExtraFieldPosition(int position) {
+            bitPosition = position;
+        }
+
+        private int getBitPosition() {
+            return bitPosition;
+        }
+    }
+
     public RowValueConstructorExpression() {
     }
 
     public RowValueConstructorExpression(List<Expression> children, boolean isConstant) {
         super(children);
+        extraFields = new BitSet(8);
+        extraFields.set(ExtraFieldPosition.STRIP_TRAILING_SEPARATOR_BYTE.getBitPosition());
+        if (isConstant) {
+            extraFields.set(ExtraFieldPosition.LITERAL_CONSTANT.getBitPosition());
+        }
         estimatedByteSize = 0;
-        init(isConstant);
+        init();
     }
 
     public RowValueConstructorExpression clone(List<Expression> children) {
@@ -82,24 +112,34 @@ public class RowValueConstructorExpression extends BaseCompoundExpression {
     @Override
     public void readFields(DataInput input) throws IOException {
         super.readFields(input);
-        init(input.readBoolean());
+        extraFields = BitSet.valueOf(new byte[] {input.readByte()});
+        init();
     }
 
     @Override
     public void write(DataOutput output) throws IOException {
         super.write(output);
-        output.writeBoolean(literalExprPtr != null);
+        byte[] b = extraFields.toByteArray();
+        output.writeByte((int)(b.length > 0 ? b[0] & 0xff : 0));
     }
 
-    private void init(boolean isConstant) {
+    private void init() {
         this.ptrs = new ImmutableBytesWritable[children.size()];
-        if(isConstant) {
+        if (isConstant()) {
             ImmutableBytesWritable ptr = new ImmutableBytesWritable();
             this.evaluate(null, ptr);
             literalExprPtr = ptr;
         }
     }
 
+    private boolean isConstant() {
+        return extraFields.get(ExtraFieldPosition.LITERAL_CONSTANT.getBitPosition());
+    }
+
+    private boolean isStripTrailingSepByte() {
+        return extraFields.get(ExtraFieldPosition.STRIP_TRAILING_SEPARATOR_BYTE.getBitPosition());
+    }
+
     @Override
     public PDataType getDataType() {
         return PVarbinary.INSTANCE;
@@ -200,7 +240,8 @@ public class RowValueConstructorExpression extends BaseCompoundExpression {
             for (int k = expressionCount - 1;
                     k >= 0 && getChildren().get(k).getDataType() != null
                     && !getChildren().get(k).getDataType().isFixedWidth()
-                    && outputBytes[outputSize-1] == SchemaUtil.getSeparatorByte(true, false, getChildren().get(k)); k--) {
+                    && outputBytes[outputSize-1] == SchemaUtil

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5122: PHOENIX-4322 breaks client backward compatibility

2019-03-06 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 0780b84  PHOENIX-5122: PHOENIX-4322 breaks client backward compatibility
0780b84 is described below

commit 0780b8436d856c0778cff13f92c0f1f3de33af15
Author: Jacob Isaac 
AuthorDate: Wed Feb 27 14:11:55 2019 -0800

PHOENIX-5122: PHOENIX-4322 breaks client backward compatibility

Signed-off-by: Chinmay Kulkarni 
---
 .../expression/RowValueConstructorExpression.java  | 53 +++---
 1 file changed, 47 insertions(+), 6 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java b/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
index 9bb7234..c06bdc8 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
@@ -28,6 +28,7 @@ import java.io.DataInput;
 import java.io.DataOutput;
 import java.io.IOException;
 import java.util.Arrays;
+import java.util.BitSet;
 import java.util.List;
 
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
@@ -47,13 +48,42 @@ public class RowValueConstructorExpression extends BaseCompoundExpression {
     private int partialEvalIndex = -1;
     private int estimatedByteSize;
 
+    // The boolean field that indicated the object is a literal constant
+    // has been repurposed as a bitset and now holds additional information.
+    // This is to facilitate b/w compat to 4.13 clients.
+    // @see <a href="https://issues.apache.org/jira/browse/PHOENIX-5122">PHOENIX-5122</a>
+    private BitSet extraFields;
+
+    // Important: when you want to add new bits, make sure to add them towards
+    // the end, else b/w compat will break again.
+    private enum ExtraFieldPosition {
+
+        LITERAL_CONSTANT(0),
+        STRIP_TRAILING_SEPARATOR_BYTE(1);
+
+        private int bitPosition;
+
+        private ExtraFieldPosition(int position) {
+            bitPosition = position;
+        }
+
+        private int getBitPosition() {
+            return bitPosition;
+        }
+    }
+
     public RowValueConstructorExpression() {
     }
 
     public RowValueConstructorExpression(List<Expression> children, boolean isConstant) {
         super(children);
+        extraFields = new BitSet(8);
+        extraFields.set(ExtraFieldPosition.STRIP_TRAILING_SEPARATOR_BYTE.getBitPosition());
+        if (isConstant) {
+            extraFields.set(ExtraFieldPosition.LITERAL_CONSTANT.getBitPosition());
+        }
         estimatedByteSize = 0;
-        init(isConstant);
+        init();
     }
 
     public RowValueConstructorExpression clone(List<Expression> children) {
@@ -82,24 +112,34 @@ public class RowValueConstructorExpression extends BaseCompoundExpression {
     @Override
     public void readFields(DataInput input) throws IOException {
         super.readFields(input);
-        init(input.readBoolean());
+        extraFields = BitSet.valueOf(new byte[] {input.readByte()});
+        init();
     }
 
     @Override
     public void write(DataOutput output) throws IOException {
         super.write(output);
-        output.writeBoolean(literalExprPtr != null);
+        byte[] b = extraFields.toByteArray();
+        output.writeByte((int)(b.length > 0 ? b[0] & 0xff : 0));
     }
 
-    private void init(boolean isConstant) {
+    private void init() {
         this.ptrs = new ImmutableBytesWritable[children.size()];
-        if(isConstant) {
+        if (isConstant()) {
             ImmutableBytesWritable ptr = new ImmutableBytesWritable();
             this.evaluate(null, ptr);
             literalExprPtr = ptr;
         }
     }
 
+    private boolean isConstant() {
+        return extraFields.get(ExtraFieldPosition.LITERAL_CONSTANT.getBitPosition());
+    }
+
+    private boolean isStripTrailingSepByte() {
+        return extraFields.get(ExtraFieldPosition.STRIP_TRAILING_SEPARATOR_BYTE.getBitPosition());
+    }
+
     @Override
     public PDataType getDataType() {
         return PVarbinary.INSTANCE;
@@ -200,7 +240,8 @@ public class RowValueConstructorExpression extends BaseCompoundExpression {
             for (int k = expressionCount - 1;
                     k >= 0 && getChildren().get(k).getDataType() != null
                     && !getChildren().get(k).getDataType().isFixedWidth()
-                    && outputBytes[outputSize-1] == SchemaUtil.getSeparatorByte(true, false, getChildren().get(k)); k--) {
+                    && outputBytes[outputSize-1] == SchemaUtil

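The tail of these diffs, truncated in the digest, touches the loop that strips trailing separator bytes from the serialized row value constructor: it walks backwards over trailing variable-width children while the last output byte equals that child's separator byte. A standalone sketch of the trimming idea, assuming a constant 0x00 separator instead of Phoenix's per-child SchemaUtil.getSeparatorByte logic:

import java.util.Arrays;

public class TrailingSeparatorTrim {
    static final byte SEPARATOR = 0x00;

    // Drop at most one trailing separator per trailing variable-width field,
    // mirroring the shape of the loop in RowValueConstructorExpression.
    static byte[] trimTrailingSeparators(byte[] encoded, int trailingVarWidthFields) {
        int size = encoded.length;
        for (int k = trailingVarWidthFields - 1;
                k >= 0 && size > 0 && encoded[size - 1] == SEPARATOR; k--) {
            size--;
        }
        return Arrays.copyOf(encoded, size);
    }

    public static void main(String[] args) {
        byte[] encoded = {'a', SEPARATOR, 'b', SEPARATOR, SEPARATOR};
        // Prints [97, 0, 98]: both trailing separators removed, the inner one kept.
        System.out.println(Arrays.toString(trimTrailingSeparators(encoded, 2)));
    }
}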
[phoenix] branch master updated: PHOENIX-5182: Deprecate getTotalSum API of the GlobalMetric interface

2019-03-08 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 26d3734  PHOENIX-5182: Deprecate getTotalSum API of the GlobalMetric interface
26d3734 is described below

commit 26d3734d3c2a05c56e92a0ea09a99fd90059f568
Author: Chinmay Kulkarni 
AuthorDate: Thu Mar 7 16:43:53 2019 -0800

PHOENIX-5182: Deprecate getTotalSum API of the GlobalMetric interface
---
 .../apache/phoenix/execute/PartialCommitIT.java|   3 +-
 .../monitoring/PhoenixMetricsDisabledIT.java   |   2 +-
 .../phoenix/monitoring/PhoenixMetricsIT.java   | 100 ++---
 .../apache/phoenix/monitoring/GlobalMetric.java|   1 +
 4 files changed, 53 insertions(+), 53 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/execute/PartialCommitIT.java b/phoenix-core/src/it/java/org/apache/phoenix/execute/PartialCommitIT.java
index 2b0c8b9..27f 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/execute/PartialCommitIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/execute/PartialCommitIT.java
@@ -52,7 +52,6 @@ import org.apache.hadoop.hbase.wal.WALEdit;
 import org.apache.phoenix.end2end.BaseUniqueNamesOwnClusterIT;
 import org.apache.phoenix.execute.MutationState.MultiRowMutationState;
 import org.apache.phoenix.hbase.index.Indexer;
-import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.monitoring.GlobalMetric;
 import org.apache.phoenix.monitoring.MetricType;
@@ -249,7 +248,7 @@ public class PartialCommitIT extends BaseUniqueNamesOwnClusterIT {
         assertArrayEquals(expectedUncommittedStatementIndexes, uncommittedStatementIndexes);
         Map<String, Map<MetricType, Long>> mutationWriteMetrics = PhoenixRuntime.getWriteMetricInfoForMutationsSinceLastReset(con);
         assertEquals(expectedUncommittedStatementIndexes.length, mutationWriteMetrics.get(bFailureTable).get(MUTATION_BATCH_FAILED_SIZE).intValue());
-        assertEquals(expectedUncommittedStatementIndexes.length, GLOBAL_MUTATION_BATCH_FAILED_COUNT.getMetric().getTotalSum());
+        assertEquals(expectedUncommittedStatementIndexes.length, GLOBAL_MUTATION_BATCH_FAILED_COUNT.getMetric().getValue());
     }
 
 
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsDisabledIT.java b/phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsDisabledIT.java
index 85cf1a3..1efbc46 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsDisabledIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsDisabledIT.java
@@ -72,7 +72,7 @@ public class PhoenixMetricsDisabledIT extends BaseUniqueNamesOwnClusterIT {
     public void testResetGlobalPhoenixMetrics() {
         for (GlobalMetric m : PhoenixRuntime.getGlobalPhoenixClientMetrics()) {
             assertThat(m, CoreMatchers.instanceOf(NoOpGlobalMetricImpl.class));
-            assertEquals(NO_VALUE, m.getTotalSum());
+            assertEquals(NO_VALUE, m.getValue());
             assertEquals(NO_SAMPLES, m.getNumberOfSamples());
         }
     }
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsIT.java b/phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsIT.java
index 0764ff7..e00fab3 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsIT.java
@@ -96,7 +96,7 @@ public class PhoenixMetricsIT extends BasePhoenixMetricsIT {
         resetGlobalMetrics();
         for (GlobalMetric m : PhoenixRuntime.getGlobalPhoenixClientMetrics()) {
             assertThat(m, CoreMatchers.instanceOf(GlobalMetricImpl.class));
-            assertEquals(0, m.getTotalSum());
+            assertEquals(0, m.getValue());
             assertEquals(0, m.getNumberOfSamples());
         }
         assertTrue(verifyMetricsFromSink());
@@ -114,25 +114,25 @@ public class PhoenixMetricsIT extends BasePhoenixMetricsIT {
             rs.getString(1);
             rs.getString(2);
         }
-        assertEquals(1, GLOBAL_NUM_PARALLEL_SCANS.getMetric().getTotalSum());
-        assertEquals(1, GLOBAL_SELECT_SQL_COUNTER.getMetric().getTotalSum());
-        assertEquals(0, GLOBAL_REJECTED_TASK_COUNTER.getMetric().getTotalSum());
-        assertEquals(0, GLOBAL_QUERY_TIMEOUT_COUNTER.getMetric().getTotalSum());
-        assertEquals(0, GLOBAL_FAILED_QUERY_COUNTER.getMetric().getTotalSum());
-        assertEquals(0, GLOBAL_SPOOL_FILE_COUNTER.getMetric().getTotalSum());
-        assertEquals(0, GLOBAL_MUTATION_BATCH_SIZE.getMetric().getTotalSum());
-        assertEquals(0, GLOBAL_MUTATION_BYTES.getMetric().getTotal

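The mechanical change throughout these tests is an accessor rename: getTotalSum() gives way to getValue(), and the one-line addition to GlobalMetric.java is presumably the deprecation marker on the old method. A plausible shape for that kind of deprecation, sketched rather than copied from the Phoenix source:

// Sketch of deprecating an accessor in favor of an equivalent one on the
// base interface. Names mirror Phoenix's GlobalMetric; the bodies and
// javadoc are illustrative, not the actual Phoenix source.
interface Metric {
    long getValue();              // the surviving accessor
    long getNumberOfSamples();
}

interface GlobalMetric extends Metric {
    /**
     * @deprecated Use {@link #getValue()} instead.
     */
    @Deprecated
    long getTotalSum();
}

// Call sites then migrate one-for-one:
//   assertEquals(0, m.getTotalSum());  ->  assertEquals(0, m.getValue());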
[phoenix] branch 4.x-HBase-1.2 updated: PHOENIX-5182: Deprecate getTotalSum API of the GlobalMetric interface

2019-03-08 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.2 by this push:
 new 2574911  PHOENIX-5182: Deprecate getTotalSum API of the GlobalMetric interface
2574911 is described below

commit 2574911b2b9c2a4ee71de8530eea69c387bb2ed6
Author: Chinmay Kulkarni 
AuthorDate: Thu Mar 7 18:19:05 2019 -0800

PHOENIX-5182: Deprecate getTotalSum API of the GlobalMetric interface
---
 .../apache/phoenix/execute/PartialCommitIT.java|  2 +-
 .../phoenix/monitoring/PhoenixMetricsIT.java   | 98 +++---
 .../apache/phoenix/monitoring/GlobalMetric.java|  1 +
 3 files changed, 51 insertions(+), 50 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/execute/PartialCommitIT.java b/phoenix-core/src/it/java/org/apache/phoenix/execute/PartialCommitIT.java
index c6dc312..06f268c 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/execute/PartialCommitIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/execute/PartialCommitIT.java
@@ -247,7 +247,7 @@ public class PartialCommitIT extends BaseUniqueNamesOwnClusterIT {
         assertArrayEquals(expectedUncommittedStatementIndexes, uncommittedStatementIndexes);
         Map<String, Map<MetricType, Long>> mutationWriteMetrics = PhoenixRuntime.getWriteMetricInfoForMutationsSinceLastReset(con);
         assertEquals(expectedUncommittedStatementIndexes.length, mutationWriteMetrics.get(bFailureTable).get(MUTATION_BATCH_FAILED_SIZE).intValue());
-        assertEquals(expectedUncommittedStatementIndexes.length, GLOBAL_MUTATION_BATCH_FAILED_COUNT.getMetric().getTotalSum());
+        assertEquals(expectedUncommittedStatementIndexes.length, GLOBAL_MUTATION_BATCH_FAILED_COUNT.getMetric().getValue());
     }
 
 
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsIT.java b/phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsIT.java
index 0882cec..9dddece 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsIT.java
@@ -91,7 +91,7 @@ public class PhoenixMetricsIT extends BasePhoenixMetricsIT {
     public void testResetGlobalPhoenixMetrics() {
         resetGlobalMetrics();
         for (GlobalMetric m : PhoenixRuntime.getGlobalPhoenixClientMetrics()) {
-            assertEquals(0, m.getTotalSum());
+            assertEquals(0, m.getValue());
             assertEquals(0, m.getNumberOfSamples());
         }
     }
@@ -108,42 +108,42 @@ public class PhoenixMetricsIT extends BasePhoenixMetricsIT {
             rs.getString(1);
             rs.getString(2);
         }
-        assertEquals(1, GLOBAL_NUM_PARALLEL_SCANS.getMetric().getTotalSum());
-        assertEquals(1, GLOBAL_SELECT_SQL_COUNTER.getMetric().getTotalSum());
-        assertEquals(0, GLOBAL_REJECTED_TASK_COUNTER.getMetric().getTotalSum());
-        assertEquals(0, GLOBAL_QUERY_TIMEOUT_COUNTER.getMetric().getTotalSum());
-        assertEquals(0, GLOBAL_FAILED_QUERY_COUNTER.getMetric().getTotalSum());
-        assertEquals(0, GLOBAL_SPOOL_FILE_COUNTER.getMetric().getTotalSum());
-        assertEquals(0, GLOBAL_MUTATION_BATCH_SIZE.getMetric().getTotalSum());
-        assertEquals(0, GLOBAL_MUTATION_BYTES.getMetric().getTotalSum());
-        assertEquals(0, GLOBAL_MUTATION_BATCH_FAILED_COUNT.getMetric().getTotalSum());
-
-        assertTrue(GLOBAL_SCAN_BYTES.getMetric().getTotalSum() > 0);
-        assertTrue(GLOBAL_QUERY_TIME.getMetric().getTotalSum() > 0);
-        assertTrue(GLOBAL_TASK_END_TO_END_TIME.getMetric().getTotalSum() > 0);
-        assertTrue(GLOBAL_TASK_EXECUTION_TIME.getMetric().getTotalSum() > 0);
-
-        assertTrue(GLOBAL_HBASE_COUNT_RPC_CALLS.getMetric().getTotalSum() > 0);
-        assertTrue(GLOBAL_HBASE_COUNT_MILLS_BETWEEN_NEXTS.getMetric().getTotalSum() > 0);
-        assertTrue(GLOBAL_HBASE_COUNT_BYTES_REGION_SERVER_RESULTS.getMetric().getTotalSum() > 0);
-        assertTrue(GLOBAL_HBASE_COUNT_SCANNED_REGIONS.getMetric().getTotalSum() > 0);
+        assertEquals(1, GLOBAL_NUM_PARALLEL_SCANS.getMetric().getValue());
+        assertEquals(1, GLOBAL_SELECT_SQL_COUNTER.getMetric().getValue());
+        assertEquals(0, GLOBAL_REJECTED_TASK_COUNTER.getMetric().getValue());
+        assertEquals(0, GLOBAL_QUERY_TIMEOUT_COUNTER.getMetric().getValue());
+        assertEquals(0, GLOBAL_FAILED_QUERY_COUNTER.getMetric().getValue());
+        assertEquals(0, GLOBAL_SPOOL_FILE_COUNTER.getMetric().getValue());
+        assertEquals(0, GLOBAL_MUTATION_BATCH_SIZE.getMetric().getValue());
+        assertEquals(0, GLOBAL_MUTATION_BYTES.getMetric().getValue());
+        assertEquals(0, GLOBAL_MUTATION_BATCH_FAILED

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5182: Deprecate getTotalSum API of the GlobalMetric interface

2019-03-08 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 5d3a15a  PHOENIX-5182: Deprecate getTotalSum API of the GlobalMetric interface
5d3a15a is described below

commit 5d3a15a76b9b84c562a624f80c288a09c0a3f9ed
Author: Chinmay Kulkarni 
AuthorDate: Thu Mar 7 18:19:05 2019 -0800

PHOENIX-5182: Deprecate getTotalSum API of the GlobalMetric interface
---
 .../apache/phoenix/execute/PartialCommitIT.java|  2 +-
 .../phoenix/monitoring/PhoenixMetricsIT.java   | 98 +++---
 .../apache/phoenix/monitoring/GlobalMetric.java|  1 +
 3 files changed, 51 insertions(+), 50 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/execute/PartialCommitIT.java b/phoenix-core/src/it/java/org/apache/phoenix/execute/PartialCommitIT.java
index c6dc312..06f268c 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/execute/PartialCommitIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/execute/PartialCommitIT.java
@@ -247,7 +247,7 @@ public class PartialCommitIT extends BaseUniqueNamesOwnClusterIT {
         assertArrayEquals(expectedUncommittedStatementIndexes, uncommittedStatementIndexes);
         Map<String, Map<MetricType, Long>> mutationWriteMetrics = PhoenixRuntime.getWriteMetricInfoForMutationsSinceLastReset(con);
         assertEquals(expectedUncommittedStatementIndexes.length, mutationWriteMetrics.get(bFailureTable).get(MUTATION_BATCH_FAILED_SIZE).intValue());
-        assertEquals(expectedUncommittedStatementIndexes.length, GLOBAL_MUTATION_BATCH_FAILED_COUNT.getMetric().getTotalSum());
+        assertEquals(expectedUncommittedStatementIndexes.length, GLOBAL_MUTATION_BATCH_FAILED_COUNT.getMetric().getValue());
     }
 
 
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsIT.java b/phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsIT.java
index 0882cec..9dddece 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsIT.java
@@ -91,7 +91,7 @@ public class PhoenixMetricsIT extends BasePhoenixMetricsIT {
     public void testResetGlobalPhoenixMetrics() {
         resetGlobalMetrics();
         for (GlobalMetric m : PhoenixRuntime.getGlobalPhoenixClientMetrics()) {
-            assertEquals(0, m.getTotalSum());
+            assertEquals(0, m.getValue());
             assertEquals(0, m.getNumberOfSamples());
         }
     }
@@ -108,42 +108,42 @@ public class PhoenixMetricsIT extends BasePhoenixMetricsIT {
             rs.getString(1);
             rs.getString(2);
         }
-        assertEquals(1, GLOBAL_NUM_PARALLEL_SCANS.getMetric().getTotalSum());
-        assertEquals(1, GLOBAL_SELECT_SQL_COUNTER.getMetric().getTotalSum());
-        assertEquals(0, GLOBAL_REJECTED_TASK_COUNTER.getMetric().getTotalSum());
-        assertEquals(0, GLOBAL_QUERY_TIMEOUT_COUNTER.getMetric().getTotalSum());
-        assertEquals(0, GLOBAL_FAILED_QUERY_COUNTER.getMetric().getTotalSum());
-        assertEquals(0, GLOBAL_SPOOL_FILE_COUNTER.getMetric().getTotalSum());
-        assertEquals(0, GLOBAL_MUTATION_BATCH_SIZE.getMetric().getTotalSum());
-        assertEquals(0, GLOBAL_MUTATION_BYTES.getMetric().getTotalSum());
-        assertEquals(0, GLOBAL_MUTATION_BATCH_FAILED_COUNT.getMetric().getTotalSum());
-
-        assertTrue(GLOBAL_SCAN_BYTES.getMetric().getTotalSum() > 0);
-        assertTrue(GLOBAL_QUERY_TIME.getMetric().getTotalSum() > 0);
-        assertTrue(GLOBAL_TASK_END_TO_END_TIME.getMetric().getTotalSum() > 0);
-        assertTrue(GLOBAL_TASK_EXECUTION_TIME.getMetric().getTotalSum() > 0);
-
-        assertTrue(GLOBAL_HBASE_COUNT_RPC_CALLS.getMetric().getTotalSum() > 0);
-        assertTrue(GLOBAL_HBASE_COUNT_MILLS_BETWEEN_NEXTS.getMetric().getTotalSum() > 0);
-        assertTrue(GLOBAL_HBASE_COUNT_BYTES_REGION_SERVER_RESULTS.getMetric().getTotalSum() > 0);
-        assertTrue(GLOBAL_HBASE_COUNT_SCANNED_REGIONS.getMetric().getTotalSum() > 0);
+        assertEquals(1, GLOBAL_NUM_PARALLEL_SCANS.getMetric().getValue());
+        assertEquals(1, GLOBAL_SELECT_SQL_COUNTER.getMetric().getValue());
+        assertEquals(0, GLOBAL_REJECTED_TASK_COUNTER.getMetric().getValue());
+        assertEquals(0, GLOBAL_QUERY_TIMEOUT_COUNTER.getMetric().getValue());
+        assertEquals(0, GLOBAL_FAILED_QUERY_COUNTER.getMetric().getValue());
+        assertEquals(0, GLOBAL_SPOOL_FILE_COUNTER.getMetric().getValue());
+        assertEquals(0, GLOBAL_MUTATION_BATCH_SIZE.getMetric().getValue());
+        assertEquals(0, GLOBAL_MUTATION_BYTES.getMetric().getValue());
+        assertEquals(0, GLOBAL_MUTATION_BATCH_FAILED

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5182: Deprecate getTotalSum API of the GlobalMetric interface

2019-03-08 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new eb3de54  PHOENIX-5182: Deprecate getTotalSum API of the GlobalMetric interface
eb3de54 is described below

commit eb3de54a9c6ea80de79d70f9968af01930fd4200
Author: Chinmay Kulkarni 
AuthorDate: Thu Mar 7 16:43:53 2019 -0800

PHOENIX-5182: Deprecate getTotalSum API of the GlobalMetric interface
---
 .../apache/phoenix/execute/PartialCommitIT.java|   2 +-
 .../monitoring/PhoenixMetricsDisabledIT.java   |   2 +-
 .../phoenix/monitoring/PhoenixMetricsIT.java   | 100 ++---
 .../apache/phoenix/monitoring/GlobalMetric.java|   1 +
 4 files changed, 53 insertions(+), 52 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/execute/PartialCommitIT.java b/phoenix-core/src/it/java/org/apache/phoenix/execute/PartialCommitIT.java
index c6dc312..06f268c 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/execute/PartialCommitIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/execute/PartialCommitIT.java
@@ -247,7 +247,7 @@ public class PartialCommitIT extends BaseUniqueNamesOwnClusterIT {
         assertArrayEquals(expectedUncommittedStatementIndexes, uncommittedStatementIndexes);
         Map<String, Map<MetricType, Long>> mutationWriteMetrics = PhoenixRuntime.getWriteMetricInfoForMutationsSinceLastReset(con);
         assertEquals(expectedUncommittedStatementIndexes.length, mutationWriteMetrics.get(bFailureTable).get(MUTATION_BATCH_FAILED_SIZE).intValue());
-        assertEquals(expectedUncommittedStatementIndexes.length, GLOBAL_MUTATION_BATCH_FAILED_COUNT.getMetric().getTotalSum());
+        assertEquals(expectedUncommittedStatementIndexes.length, GLOBAL_MUTATION_BATCH_FAILED_COUNT.getMetric().getValue());
     }
 
 
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsDisabledIT.java b/phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsDisabledIT.java
index 85cf1a3..1efbc46 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsDisabledIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsDisabledIT.java
@@ -72,7 +72,7 @@ public class PhoenixMetricsDisabledIT extends BaseUniqueNamesOwnClusterIT {
     public void testResetGlobalPhoenixMetrics() {
         for (GlobalMetric m : PhoenixRuntime.getGlobalPhoenixClientMetrics()) {
             assertThat(m, CoreMatchers.instanceOf(NoOpGlobalMetricImpl.class));
-            assertEquals(NO_VALUE, m.getTotalSum());
+            assertEquals(NO_VALUE, m.getValue());
             assertEquals(NO_SAMPLES, m.getNumberOfSamples());
         }
     }
diff --git a/phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsIT.java b/phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsIT.java
index 0764ff7..e00fab3 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/monitoring/PhoenixMetricsIT.java
@@ -96,7 +96,7 @@ public class PhoenixMetricsIT extends BasePhoenixMetricsIT {
         resetGlobalMetrics();
         for (GlobalMetric m : PhoenixRuntime.getGlobalPhoenixClientMetrics()) {
             assertThat(m, CoreMatchers.instanceOf(GlobalMetricImpl.class));
-            assertEquals(0, m.getTotalSum());
+            assertEquals(0, m.getValue());
             assertEquals(0, m.getNumberOfSamples());
         }
         assertTrue(verifyMetricsFromSink());
@@ -114,25 +114,25 @@ public class PhoenixMetricsIT extends BasePhoenixMetricsIT {
             rs.getString(1);
             rs.getString(2);
         }
-        assertEquals(1, GLOBAL_NUM_PARALLEL_SCANS.getMetric().getTotalSum());
-        assertEquals(1, GLOBAL_SELECT_SQL_COUNTER.getMetric().getTotalSum());
-        assertEquals(0, GLOBAL_REJECTED_TASK_COUNTER.getMetric().getTotalSum());
-        assertEquals(0, GLOBAL_QUERY_TIMEOUT_COUNTER.getMetric().getTotalSum());
-        assertEquals(0, GLOBAL_FAILED_QUERY_COUNTER.getMetric().getTotalSum());
-        assertEquals(0, GLOBAL_SPOOL_FILE_COUNTER.getMetric().getTotalSum());
-        assertEquals(0, GLOBAL_MUTATION_BATCH_SIZE.getMetric().getTotalSum());
-        assertEquals(0, GLOBAL_MUTATION_BYTES.getMetric().getTotalSum());
-        assertEquals(0, GLOBAL_MUTATION_BATCH_FAILED_COUNT.getMetric().getTotalSum());
-
-        assertTrue(GLOBAL_SCAN_BYTES.getMetric().getTotalSum() > 0);
-        assertTrue(GLOBAL_QUERY_TIME.getMetric().getTotalSum() > 0);
-        assertTrue(GLOBAL_TASK_END_TO_END_TIME.getMetric().getTotalSum() > 0);
-        assertTrue(GLOBAL_TASK_EXECUTION_TIME.getMetric().getTotalSum() > 0);
-
-

[phoenix] branch 4.x-HBase-1.2 updated: PHOENIX-5131 Make spilling to disk for order/group by configurable

2019-03-14 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.2 by this push:
 new 5304b56  PHOENIX-5131 Make spilling to disk for order/group by configurable
5304b56 is described below

commit 5304b568cc91be5de1d04c7dd550fddfff77a4ab
Author: Abhishek Singh Chouhan 
AuthorDate: Thu Mar 14 11:49:05 2019 -0700

PHOENIX-5131 Make spilling to disk for order/group by configurable

Signed-off-by: Chinmay Kulkarni 
---
 .../java/org/apache/phoenix/end2end/OrderByIT.java |  45 +++
 ...OrderByWithServerClientSpoolingDisabledIT.java} |  17 ++-
 .../end2end/OrderByWithServerMemoryLimitIT.java|  81 
 .../phoenix/end2end/OrderByWithSpillingIT.java |   3 +-
 .../phoenix/end2end/SpooledTmpFileDeleteIT.java|   2 +-
 .../end2end/join/SortMergeJoinNoSpoolingIT.java|  83 
 .../phoenix/coprocessor/MetaDataProtocol.java  |   7 ++
 .../phoenix/coprocessor/ScanRegionObserver.java|   4 +-
 .../org/apache/phoenix/execute/AggregatePlan.java  |  28 -
 .../phoenix/execute/ClientAggregatePlan.java   |  30 -
 .../org/apache/phoenix/execute/ClientScanPlan.java |  16 ++-
 .../java/org/apache/phoenix/execute/ScanPlan.java  |  10 +-
 .../apache/phoenix/execute/SortMergeJoinPlan.java  | 139 +
 .../phoenix/hbase/index/util/VersionUtil.java  |  12 ++
 .../org/apache/phoenix/iterate/BufferedQueue.java  |  20 +--
 .../phoenix/iterate/BufferedSortedQueue.java   |  36 ++
 .../apache/phoenix/iterate/BufferedTupleQueue.java | 134 
 .../iterate/NonAggregateRegionScannerFactory.java  |  45 +--
 .../iterate/OrderedAggregatingResultIterator.java  |   5 +-
 .../phoenix/iterate/OrderedResultIterator.java |  71 +--
 .../org/apache/phoenix/iterate/PhoenixQueues.java  |  96 ++
 .../apache/phoenix/iterate/SizeAwareQueue.java}|  19 +--
 .../org/apache/phoenix/iterate/SizeBoundQueue.java |  96 ++
 .../phoenix/iterate/SpoolingResultIterator.java|   5 +-
 .../org/apache/phoenix/query/QueryServices.java|  11 +-
 .../apache/phoenix/query/QueryServicesOptions.java |  19 ++-
 .../phoenix/iterate/OrderedResultIteratorTest.java |  55 +++-
 .../phoenix/query/QueryServicesTestImpl.java   |   3 +-
 .../org/apache/phoenix/util/MetaDataUtilTest.java  |  10 +-
 29 files changed, 881 insertions(+), 221 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
index 792d08f..172ed89 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
@@ -20,7 +20,10 @@ package org.apache.phoenix.end2end;
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertThat;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+import static org.hamcrest.CoreMatchers.containsString;
 
 import java.sql.Connection;
 import java.sql.Date;
@@ -30,6 +33,8 @@ import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.Properties;
 
+import org.apache.phoenix.exception.PhoenixIOException;
+import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.junit.Test;
 
@@ -461,4 +466,44 @@ public class OrderByIT extends BaseOrderByIT {
             conn.close();
         }
     }
+
+    @Test
+    public void testOrderByWithClientMemoryLimit() throws Exception {
+        Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+        props.put(QueryServices.CLIENT_SPOOL_THRESHOLD_BYTES_ATTRIB, Integer.toString(1));
+        props.put(QueryServices.CLIENT_ORDERBY_SPOOLING_ENABLED_ATTRIB, Boolean.toString(Boolean.FALSE));
+
+        try(Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.setAutoCommit(false);
+            String tableName = generateUniqueName();
+            String ddl =
+                    "CREATE TABLE " + tableName + "  (a_string varchar not null, col1 integer"
+                            + "  CONSTRAINT pk PRIMARY KEY (a_string))\n";
+            createTestTable(getUrl(), ddl);
+
+            String dml = "UPSERT INTO " + tableName + " VALUES(?, ?)";
+            PreparedStatement stmt = conn.prepareStatement(dml);
+            stmt.setString(1, "a");
+            stmt.setInt(2, 40);
+            stmt.execute();
+            stmt.setString(1, "b");
+            stmt.setInt(2, 20);
+            stmt.execute();
+            stmt.setString(1, "c");

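The new test doubles as usage documentation: with order-by spooling disabled, a sort that exceeds the client-side threshold fails instead of spilling to a spool file. A condensed sketch of wiring the same two properties into an ordinary JDBC connection (the property constants are the ones used in the diff; the URL and table are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Properties;

import org.apache.phoenix.query.QueryServices;

public class NoSpoolOrderByDemo {
    public static void main(String[] args) throws SQLException {
        Properties props = new Properties();
        // Cap the in-memory sort buffer (1 byte here, to force the limit)
        // and disable spilling to disk for client-side order by.
        props.setProperty(QueryServices.CLIENT_SPOOL_THRESHOLD_BYTES_ATTRIB, "1");
        props.setProperty(QueryServices.CLIENT_ORDERBY_SPOOLING_ENABLED_ATTRIB, "false");

        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost", props);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT a_string FROM T ORDER BY col1")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        } catch (SQLException e) {
            // With spooling disabled, exceeding the threshold surfaces as an
            // exception rather than a spool file on disk.
            System.out.println("Query failed as configured: " + e.getMessage());
        }
    }
}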
[phoenix] branch master updated: PHOENIX-5131 Make spilling to disk for order/group by configurable

2019-03-14 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 05b9901  PHOENIX-5131 Make spilling to disk for order/group by configurable
05b9901 is described below

commit 05b99018a77805cd92411c26bd4147e62cbf7281
Author: Abhishek Singh Chouhan 
AuthorDate: Wed Mar 13 17:34:37 2019 -0700

PHOENIX-5131 Make spilling to disk for order/group by configurable

Signed-off-by: Chinmay Kulkarni 
---
 .../java/org/apache/phoenix/end2end/OrderByIT.java |  45 +++
 ...OrderByWithServerClientSpoolingDisabledIT.java} |  17 ++-
 .../end2end/OrderByWithServerMemoryLimitIT.java|  81 
 .../phoenix/end2end/OrderByWithSpillingIT.java |   3 +-
 .../phoenix/end2end/SpooledTmpFileDeleteIT.java|   2 +-
 .../end2end/join/SortMergeJoinNoSpoolingIT.java|  83 +
 .../phoenix/coprocessor/MetaDataProtocol.java  |   7 ++
 .../phoenix/coprocessor/ScanRegionObserver.java|   4 +-
 .../org/apache/phoenix/execute/AggregatePlan.java  |  28 -
 .../phoenix/execute/ClientAggregatePlan.java   |  30 -
 .../org/apache/phoenix/execute/ClientScanPlan.java |  16 ++-
 .../java/org/apache/phoenix/execute/ScanPlan.java  |  10 +-
 .../apache/phoenix/execute/SortMergeJoinPlan.java  | 138 +
 .../phoenix/hbase/index/util/VersionUtil.java  |  12 ++
 .../org/apache/phoenix/iterate/BufferedQueue.java  |  20 +--
 .../phoenix/iterate/BufferedSortedQueue.java   |  33 +
 .../apache/phoenix/iterate/BufferedTupleQueue.java | 134 
 .../iterate/NonAggregateRegionScannerFactory.java  |  45 +--
 .../iterate/OrderedAggregatingResultIterator.java  |   5 +-
 .../phoenix/iterate/OrderedResultIterator.java |  72 +--
 .../org/apache/phoenix/iterate/PhoenixQueues.java  |  96 ++
 .../apache/phoenix/iterate/SizeAwareQueue.java}|  19 +--
 .../org/apache/phoenix/iterate/SizeBoundQueue.java |  96 ++
 .../phoenix/iterate/SpoolingResultIterator.java|   5 +-
 .../org/apache/phoenix/query/QueryServices.java|  11 +-
 .../apache/phoenix/query/QueryServicesOptions.java |  19 ++-
 .../phoenix/iterate/OrderedResultIteratorTest.java |  55 +++-
 .../phoenix/query/QueryServicesTestImpl.java   |   3 +-
 .../org/apache/phoenix/util/MetaDataUtilTest.java  |  10 +-
 29 files changed, 880 insertions(+), 219 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
index 792d08f..172ed89 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
@@ -20,7 +20,10 @@ package org.apache.phoenix.end2end;
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertThat;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+import static org.hamcrest.CoreMatchers.containsString;
 
 import java.sql.Connection;
 import java.sql.Date;
@@ -30,6 +33,8 @@ import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.Properties;
 
+import org.apache.phoenix.exception.PhoenixIOException;
+import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.junit.Test;
 
@@ -461,4 +466,44 @@ public class OrderByIT extends BaseOrderByIT {
             conn.close();
         }
     }
+
+    @Test
+    public void testOrderByWithClientMemoryLimit() throws Exception {
+        Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+        props.put(QueryServices.CLIENT_SPOOL_THRESHOLD_BYTES_ATTRIB, Integer.toString(1));
+        props.put(QueryServices.CLIENT_ORDERBY_SPOOLING_ENABLED_ATTRIB, Boolean.toString(Boolean.FALSE));
+
+        try(Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.setAutoCommit(false);
+            String tableName = generateUniqueName();
+            String ddl =
+                    "CREATE TABLE " + tableName + "  (a_string varchar not null, col1 integer"
+                            + "  CONSTRAINT pk PRIMARY KEY (a_string))\n";
+            createTestTable(getUrl(), ddl);
+
+            String dml = "UPSERT INTO " + tableName + " VALUES(?, ?)";
+            PreparedStatement stmt = conn.prepareStatement(dml);
+            stmt.setString(1, "a");
+            stmt.setInt(2, 40);
+            stmt.execute();
+            stmt.setString(1, "b");
+            stmt.setInt(2, 20);
+            stmt.execute();
+            stmt.setString(1, "c");

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5131 Make spilling to disk for order/group by configurable

2019-03-14 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 1f4fb66  PHOENIX-5131 Make spilling to disk for order/group by configurable
1f4fb66 is described below

commit 1f4fb667eba9850f9d7dac16c59f16a652c5fed8
Author: Abhishek Singh Chouhan 
AuthorDate: Thu Mar 14 11:40:32 2019 -0700

PHOENIX-5131 Make spilling to disk for order/group by configurable

Signed-off-by: Chinmay Kulkarni 
---
 .../java/org/apache/phoenix/end2end/OrderByIT.java |  45 +++
 ...OrderByWithServerClientSpoolingDisabledIT.java} |  17 ++-
 .../end2end/OrderByWithServerMemoryLimitIT.java|  81 
 .../phoenix/end2end/OrderByWithSpillingIT.java |   3 +-
 .../phoenix/end2end/SpooledTmpFileDeleteIT.java|   2 +-
 .../end2end/join/SortMergeJoinNoSpoolingIT.java|  83 
 .../phoenix/coprocessor/MetaDataProtocol.java  |   7 ++
 .../phoenix/coprocessor/ScanRegionObserver.java|   4 +-
 .../org/apache/phoenix/execute/AggregatePlan.java  |  28 -
 .../phoenix/execute/ClientAggregatePlan.java   |  30 -
 .../org/apache/phoenix/execute/ClientScanPlan.java |  16 ++-
 .../java/org/apache/phoenix/execute/ScanPlan.java  |  10 +-
 .../apache/phoenix/execute/SortMergeJoinPlan.java  | 139 +
 .../phoenix/hbase/index/util/VersionUtil.java  |  12 ++
 .../org/apache/phoenix/iterate/BufferedQueue.java  |  20 +--
 .../phoenix/iterate/BufferedSortedQueue.java   |  36 ++
 .../apache/phoenix/iterate/BufferedTupleQueue.java | 134 
 .../iterate/NonAggregateRegionScannerFactory.java  |  45 +--
 .../iterate/OrderedAggregatingResultIterator.java  |   5 +-
 .../phoenix/iterate/OrderedResultIterator.java |  71 +--
 .../org/apache/phoenix/iterate/PhoenixQueues.java  |  96 ++
 .../apache/phoenix/iterate/SizeAwareQueue.java}|  19 +--
 .../org/apache/phoenix/iterate/SizeBoundQueue.java |  96 ++
 .../phoenix/iterate/SpoolingResultIterator.java|   5 +-
 .../org/apache/phoenix/query/QueryServices.java|  11 +-
 .../apache/phoenix/query/QueryServicesOptions.java |  19 ++-
 .../phoenix/iterate/OrderedResultIteratorTest.java |  55 +++-
 .../phoenix/query/QueryServicesTestImpl.java   |   3 +-
 .../org/apache/phoenix/util/MetaDataUtilTest.java  |  10 +-
 29 files changed, 881 insertions(+), 221 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
index 792d08f..172ed89 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
@@ -20,7 +20,10 @@ package org.apache.phoenix.end2end;
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertThat;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+import static org.hamcrest.CoreMatchers.containsString;
 
 import java.sql.Connection;
 import java.sql.Date;
@@ -30,6 +33,8 @@ import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.Properties;
 
+import org.apache.phoenix.exception.PhoenixIOException;
+import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.junit.Test;
 
@@ -461,4 +466,44 @@ public class OrderByIT extends BaseOrderByIT {
             conn.close();
         }
     }
+
+    @Test
+    public void testOrderByWithClientMemoryLimit() throws Exception {
+        Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+        props.put(QueryServices.CLIENT_SPOOL_THRESHOLD_BYTES_ATTRIB, Integer.toString(1));
+        props.put(QueryServices.CLIENT_ORDERBY_SPOOLING_ENABLED_ATTRIB, Boolean.toString(Boolean.FALSE));
+
+        try(Connection conn = DriverManager.getConnection(getUrl(), props)) {
+            conn.setAutoCommit(false);
+            String tableName = generateUniqueName();
+            String ddl =
+                    "CREATE TABLE " + tableName + "  (a_string varchar not null, col1 integer"
+                            + "  CONSTRAINT pk PRIMARY KEY (a_string))\n";
+            createTestTable(getUrl(), ddl);
+
+            String dml = "UPSERT INTO " + tableName + " VALUES(?, ?)";
+            PreparedStatement stmt = conn.prepareStatement(dml);
+            stmt.setString(1, "a");
+            stmt.setInt(2, 40);
+            stmt.execute();
+            stmt.setString(1, "b");
+            stmt.setInt(2, 20);
+            stmt.execute();
+            stmt.setString(1, "c");

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5131 Make spilling to disk for order/group by configurable

2019-03-14 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new bc4d236  PHOENIX-5131 Make spilling to disk for order/group by configurable
bc4d236 is described below

commit bc4d2368fad2e7a01939be5e30e0899a0829ebec
Author: Abhishek Singh Chouhan 
AuthorDate: Wed Mar 13 18:52:19 2019 -0700

PHOENIX-5131 Make spilling to disk for order/group by configurable

Signed-off-by: Chinmay Kulkarni 
---
 .../java/org/apache/phoenix/end2end/OrderByIT.java |  45 +++
 ...OrderByWithServerClientSpoolingDisabledIT.java} |  17 ++-
 .../end2end/OrderByWithServerMemoryLimitIT.java|  81 
 .../phoenix/end2end/OrderByWithSpillingIT.java |   3 +-
 .../phoenix/end2end/SpooledTmpFileDeleteIT.java|   2 +-
 .../end2end/join/SortMergeJoinNoSpoolingIT.java|  83 
 .../phoenix/coprocessor/MetaDataProtocol.java  |   7 ++
 .../phoenix/coprocessor/ScanRegionObserver.java|   4 +-
 .../org/apache/phoenix/execute/AggregatePlan.java  |  28 -
 .../phoenix/execute/ClientAggregatePlan.java   |  30 -
 .../org/apache/phoenix/execute/ClientScanPlan.java |  16 ++-
 .../java/org/apache/phoenix/execute/ScanPlan.java  |  10 +-
 .../apache/phoenix/execute/SortMergeJoinPlan.java  | 139 +
 .../phoenix/hbase/index/util/VersionUtil.java  |  12 ++
 .../org/apache/phoenix/iterate/BufferedQueue.java  |  20 +--
 .../phoenix/iterate/BufferedSortedQueue.java   |  36 ++
 .../apache/phoenix/iterate/BufferedTupleQueue.java | 134 
 .../iterate/NonAggregateRegionScannerFactory.java  |  45 +--
 .../iterate/OrderedAggregatingResultIterator.java  |   5 +-
 .../phoenix/iterate/OrderedResultIterator.java |  71 +--
 .../org/apache/phoenix/iterate/PhoenixQueues.java  |  96 ++
 .../apache/phoenix/iterate/SizeAwareQueue.java}|  19 +--
 .../org/apache/phoenix/iterate/SizeBoundQueue.java |  96 ++
 .../phoenix/iterate/SpoolingResultIterator.java|   5 +-
 .../org/apache/phoenix/query/QueryServices.java|  11 +-
 .../apache/phoenix/query/QueryServicesOptions.java |  19 ++-
 .../phoenix/iterate/OrderedResultIteratorTest.java |  55 +++-
 .../phoenix/query/QueryServicesTestImpl.java   |   3 +-
 .../org/apache/phoenix/util/MetaDataUtilTest.java  |  10 +-
 29 files changed, 881 insertions(+), 221 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
index 792d08f..172ed89 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/OrderByIT.java
@@ -20,7 +20,10 @@ package org.apache.phoenix.end2end;
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertThat;
 import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
+import static org.hamcrest.CoreMatchers.containsString;
 
 import java.sql.Connection;
 import java.sql.Date;
@@ -30,6 +33,8 @@ import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.Properties;
 
+import org.apache.phoenix.exception.PhoenixIOException;
+import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.junit.Test;
 
@@ -461,4 +466,44 @@ public class OrderByIT extends BaseOrderByIT {
 conn.close();
 }
 }
+
+@Test
+public void testOrderByWithClientMemoryLimit() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+props.put(QueryServices.CLIENT_SPOOL_THRESHOLD_BYTES_ATTRIB, Integer.toString(1));
+props.put(QueryServices.CLIENT_ORDERBY_SPOOLING_ENABLED_ATTRIB,
+Boolean.toString(Boolean.FALSE));
+
+try(Connection conn = DriverManager.getConnection(getUrl(), props)) {
+conn.setAutoCommit(false);
+String tableName = generateUniqueName();
+String ddl =
+"CREATE TABLE " + tableName + "  (a_string varchar not 
null, col1 integer"
++ "  CONSTRAINT pk PRIMARY KEY (a_string))\n";
+createTestTable(getUrl(), ddl);
+
+String dml = "UPSERT INTO " + tableName + " VALUES(?, ?)";
+PreparedStatement stmt = conn.prepareStatement(dml);
+stmt.setString(1, "a");
+stmt.setInt(2, 40);
+stmt.execute();
+stmt.setString(1, "b");
+stmt.setInt(2, 20);
+stmt.execute();
+stmt.setString(1, "c");
+ 
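
The two properties exercised by this test are ordinary Phoenix client
properties, so the same setup works on any JDBC connection. A minimal
sketch, assuming a reachable Phoenix cluster (the URL, table and
threshold value below are illustrative, not taken from the commit):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    import org.apache.phoenix.query.QueryServices;

    public class NoSpoolOrderByExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Keep ORDER BY results in memory instead of spilling to disk.
            props.put(QueryServices.CLIENT_ORDERBY_SPOOLING_ENABLED_ATTRIB, "false");
            // With spooling disabled this acts as a hard cap: a sort whose
            // results exceed it fails instead of spooling, as the test asserts.
            props.put(QueryServices.CLIENT_SPOOL_THRESHOLD_BYTES_ATTRIB,
                    Integer.toString(1024 * 1024));
            try (Connection conn =
                    DriverManager.getConnection("jdbc:phoenix:localhost:2181", props)) {
                // ... run ORDER BY queries under the configured memory limit ...
            }
        }
    }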

[phoenix] branch master updated: PHOENIX-5184: HBase and Phoenix connection leaks in Indexing code path, OrphanViewTool and PhoenixConfigurationUtil

2019-03-18 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new f256004  PHOENIX-5184: HBase and Phoenix connection leaks in Indexing 
code path, OrphanViewTool and PhoenixConfigurationUtil
f256004 is described below

commit f256004ab45217eb736fda97b3d67cc77b183735
Author: Chinmay Kulkarni 
AuthorDate: Fri Mar 8 15:31:17 2019 -0800

PHOENIX-5184: HBase and Phoenix connection leaks in Indexing code path, 
OrphanViewTool and PhoenixConfigurationUtil
---
 .../UngroupedAggregateRegionObserver.java  |  22 +++-
 .../hbase/index/write/RecoveryIndexWriter.java |  30 --
 .../phoenix/mapreduce/AbstractBulkLoadTool.java| 114 +++--
 .../apache/phoenix/mapreduce/OrphanViewTool.java   |  73 -
 .../phoenix/mapreduce/PhoenixRecordWriter.java |  18 +++-
 .../mapreduce/index/DirectHTableWriter.java|  19 +++-
 .../mapreduce/index/IndexScrutinyMapper.java   |  25 -
 .../apache/phoenix/mapreduce/index/IndexTool.java  |  85 +++
 .../index/PhoenixIndexImportDirectMapper.java  |  26 +++--
 .../mapreduce/index/PhoenixIndexImportMapper.java  |  16 +--
 .../index/PhoenixIndexPartialBuildMapper.java  |  25 +++--
 .../mapreduce/util/PhoenixConfigurationUtil.java   |  45 
 12 files changed, 325 insertions(+), 173 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index 3be4d36..6b27a88 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -29,6 +29,7 @@ import static 
org.apache.phoenix.schema.stats.StatisticsCollectionRunTracker.CON
 
 import java.io.ByteArrayInputStream;
 import java.io.ByteArrayOutputStream;
+import java.io.Closeable;
 import java.io.DataInputStream;
 import java.io.DataOutputStream;
 import java.io.IOException;
@@ -54,6 +55,7 @@ import org.apache.hadoop.hbase.KeyValue;
 import org.apache.hadoop.hbase.NamespaceDescriptor;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
+import org.apache.hadoop.hbase.client.Connection;
 import org.apache.hadoop.hbase.client.ConnectionFactory;
 import org.apache.hadoop.hbase.client.Delete;
 import org.apache.hadoop.hbase.client.Durability;
@@ -475,13 +477,14 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 byte[] deleteCF = null;
 byte[] emptyCF = null;
 Table targetHTable = null;
+Connection targetHConn = null;
 boolean isPKChanging = false;
 ImmutableBytesWritable ptr = new ImmutableBytesWritable();
 if (upsertSelectTable != null) {
 isUpsert = true;
 projectedTable = deserializeTable(upsertSelectTable);
-targetHTable =
-ConnectionFactory.createConnection(upsertSelectConfig).getTable(
+targetHConn = ConnectionFactory.createConnection(upsertSelectConfig);
+targetHTable = targetHConn.getTable(
 TableName.valueOf(projectedTable.getPhysicalName().getBytes()));
 selectExpressions = deserializeExpressions(scan.getAttribute(BaseScannerRegionObserver.UPSERT_SELECT_EXPRS));
 values = new byte[projectedTable.getPKColumns().size()][];
@@ -852,9 +855,8 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 }
 }
 try {
-if (targetHTable != null) {
-targetHTable.close();
-}
+tryClosingResourceSilently(targetHTable);
+tryClosingResourceSilently(targetHConn);
 } finally {
 try {
 innerScanner.close();
@@ -900,6 +902,16 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 
 }
 
+private static void tryClosingResourceSilently(Closeable res) {
+if (res != null) {
+try {
+res.close();
+} catch (IOException e) {
+logger.error("Closing resource: " + res + " failed: ", e);
+}
+}
+}
+
 private void checkForLocalIndexColumnFamilies(Region region,
 List indexMaintainers) throws IOException {
 TableDescriptor tableDesc = region.getTableDescriptor();
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/RecoveryIndexWriter.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/RecoveryIndexWrite
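
The fix funnels every cleanup through one helper so that a failed close
is logged rather than thrown, and the HBase Connection obtained from
ConnectionFactory is now closed alongside the Table it produced. A
self-contained sketch of that helper pattern (the standalone class name
and the commons-logging logger are assumptions; the diff above attaches
the method to the coprocessor itself):

    import java.io.Closeable;
    import java.io.IOException;

    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;

    public final class CloseUtil {
        private static final Log LOG = LogFactory.getLog(CloseUtil.class);

        private CloseUtil() { }

        // Close if non-null; log instead of throwing so that one failed
        // close cannot prevent the remaining resources from being closed.
        public static void tryClosingResourceSilently(Closeable res) {
            if (res != null) {
                try {
                    res.close();
                } catch (IOException e) {
                    LOG.error("Closing resource: " + res + " failed: ", e);
                }
            }
        }
    }

Closing the Table first and its owning Connection second, each through
this helper, is what plugs the leak: before the patch only the Table was
closed and the Connection created for it was simply dropped.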

[phoenix] branch 4.x-HBase-1.2 updated: PHOENIX-5184: HBase and Phoenix connection leaks in Indexing code path, OrphanViewTool and PhoenixConfigurationUtil

2019-03-18 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.2 by this push:
 new 8fa2af6  PHOENIX-5184: HBase and Phoenix connection leaks in Indexing 
code path, OrphanViewTool and PhoenixConfigurationUtil
8fa2af6 is described below

commit 8fa2af6d2d8d8485cd769c0990dc5fc0ccae9a0d
Author: Chinmay Kulkarni 
AuthorDate: Thu Mar 14 23:16:14 2019 -0700

PHOENIX-5184: HBase and Phoenix connection leaks in Indexing code path, 
OrphanViewTool and PhoenixConfigurationUtil
---
 .../UngroupedAggregateRegionObserver.java  |  6 +-
 .../hbase/index/write/RecoveryIndexWriter.java | 10 +--
 .../phoenix/mapreduce/AbstractBulkLoadTool.java| 15 ++---
 .../apache/phoenix/mapreduce/OrphanViewTool.java   | 73 +-
 .../phoenix/mapreduce/PhoenixRecordWriter.java | 18 --
 .../mapreduce/index/DirectHTableWriter.java| 14 -
 .../mapreduce/index/IndexScrutinyMapper.java   | 24 +--
 .../apache/phoenix/mapreduce/index/IndexTool.java  | 55 +++-
 .../index/PhoenixIndexImportDirectMapper.java  | 26 
 .../mapreduce/index/PhoenixIndexImportMapper.java  | 16 +++--
 .../index/PhoenixIndexPartialBuildMapper.java  | 25 +---
 .../mapreduce/util/PhoenixConfigurationUtil.java   | 45 ++---
 12 files changed, 209 insertions(+), 118 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index 5923a75..dc7567b 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -834,7 +834,11 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 }
 try {
 if (targetHTable != null) {
-targetHTable.close();
+try {
+targetHTable.close();
+} catch (IOException e) {
+logger.error("Closing table: " + targetHTable + " 
failed: ", e);
+}
 }
 } finally {
 try {
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/RecoveryIndexWriter.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/RecoveryIndexWriter.java
index 35f0a6d..fb9 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/RecoveryIndexWriter.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/RecoveryIndexWriter.java
@@ -26,8 +26,6 @@ import java.util.Set;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.hbase.MasterNotRunningException;
-import org.apache.hadoop.hbase.ZooKeeperConnectionException;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.hadoop.hbase.client.Mutation;
 import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
@@ -55,15 +53,13 @@ public class RecoveryIndexWriter extends IndexWriter {
  * Directly specify the {@link IndexCommitter} and {@link IndexFailurePolicy}. Both are expected to be fully setup
  * before calling.
  * 
- * @param committer
  * @param policy
  * @param env
+ * @param name
  * @throws IOException
- * @throws ZooKeeperConnectionException
- * @throws MasterNotRunningException
  */
 public RecoveryIndexWriter(IndexFailurePolicy policy, RegionCoprocessorEnvironment env, String name)
-throws MasterNotRunningException, ZooKeeperConnectionException, IOException {
+throws IOException {
 super(new TrackingParallelWriterIndexCommitter(), policy, env, name);
 this.admin = new HBaseAdmin(env.getConfiguration());
 }
@@ -125,7 +121,7 @@ public class RecoveryIndexWriter extends IndexWriter {
 try {
 admin.close();
 } catch (IOException e) {
-// closing silently
+LOG.error("Closing the admin failed: ", e);
 }
 }
 }
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/AbstractBulkLoadTool.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/AbstractBulkLoadTool.java
index 5252afb..8e18bf9 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/AbstractBulkLoadTool.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/AbstractBulkLoadTool.java
@@ -17,7 +17,6 @@
  */
 package org.apache.phoenix.mapreduce;
 
-import java.io.IOException;
 import java.sql.Connecti

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5184: HBase and Phoenix connection leaks in Indexing code path, OrphanViewTool and PhoenixConfigurationUtil

2019-03-18 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 5f70372  PHOENIX-5184: HBase and Phoenix connection leaks in Indexing 
code path, OrphanViewTool and PhoenixConfigurationUtil
5f70372 is described below

commit 5f703725aa1f2501da84907cdcb8eddd96ab63f7
Author: Chinmay Kulkarni 
AuthorDate: Thu Mar 14 23:16:14 2019 -0700

PHOENIX-5184: HBase and Phoenix connection leaks in Indexing code path, 
OrphanViewTool and PhoenixConfigurationUtil
---
 .../UngroupedAggregateRegionObserver.java  |  6 +-
 .../hbase/index/write/RecoveryIndexWriter.java | 10 +--
 .../phoenix/mapreduce/AbstractBulkLoadTool.java| 15 ++---
 .../apache/phoenix/mapreduce/OrphanViewTool.java   | 73 +-
 .../phoenix/mapreduce/PhoenixRecordWriter.java | 18 --
 .../mapreduce/index/DirectHTableWriter.java| 14 -
 .../mapreduce/index/IndexScrutinyMapper.java   | 24 +--
 .../apache/phoenix/mapreduce/index/IndexTool.java  | 55 +++-
 .../index/PhoenixIndexImportDirectMapper.java  | 26 
 .../mapreduce/index/PhoenixIndexImportMapper.java  | 16 +++--
 .../index/PhoenixIndexPartialBuildMapper.java  | 25 +---
 .../mapreduce/util/PhoenixConfigurationUtil.java   | 45 ++---
 12 files changed, 209 insertions(+), 118 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index eb8248c..a965a87 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -841,7 +841,11 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 }
 try {
 if (targetHTable != null) {
-targetHTable.close();
+try {
+targetHTable.close();
+} catch (IOException e) {
+logger.error("Closing table: " + targetHTable + " 
failed: ", e);
+}
 }
 } finally {
 try {
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/RecoveryIndexWriter.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/RecoveryIndexWriter.java
index 35f0a6d..fb9 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/RecoveryIndexWriter.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/RecoveryIndexWriter.java
@@ -26,8 +26,6 @@ import java.util.Set;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.hbase.MasterNotRunningException;
-import org.apache.hadoop.hbase.ZooKeeperConnectionException;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.hadoop.hbase.client.Mutation;
 import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
@@ -55,15 +53,13 @@ public class RecoveryIndexWriter extends IndexWriter {
  * Directly specify the {@link IndexCommitter} and {@link IndexFailurePolicy}. Both are expected to be fully setup
  * before calling.
  * 
- * @param committer
  * @param policy
  * @param env
+ * @param name
  * @throws IOException
- * @throws ZooKeeperConnectionException
- * @throws MasterNotRunningException
  */
 public RecoveryIndexWriter(IndexFailurePolicy policy, RegionCoprocessorEnvironment env, String name)
-throws MasterNotRunningException, ZooKeeperConnectionException, IOException {
+throws IOException {
 super(new TrackingParallelWriterIndexCommitter(), policy, env, name);
 this.admin = new HBaseAdmin(env.getConfiguration());
 }
@@ -125,7 +121,7 @@ public class RecoveryIndexWriter extends IndexWriter {
 try {
 admin.close();
 } catch (IOException e) {
-// closing silently
+LOG.error("Closing the admin failed: ", e);
 }
 }
 }
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/AbstractBulkLoadTool.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/AbstractBulkLoadTool.java
index 5252afb..8e18bf9 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/AbstractBulkLoadTool.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/AbstractBulkLoadTool.java
@@ -17,7 +17,6 @@
  */
 package org.apache.phoenix.mapreduce;
 
-import java.io.IOException;
 import java.sql.Connecti

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5184: HBase and Phoenix connection leaks in Indexing code path, OrphanViewTool and PhoenixConfigurationUtil

2019-03-18 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 2e909cc  PHOENIX-5184: HBase and Phoenix connection leaks in Indexing 
code path, OrphanViewTool and PhoenixConfigurationUtil
2e909cc is described below

commit 2e909cce32e26ccfa4981d42c651ba69b96b0126
Author: Chinmay Kulkarni 
AuthorDate: Thu Mar 14 23:16:14 2019 -0700

PHOENIX-5184: HBase and Phoenix connection leaks in Indexing code path, 
OrphanViewTool and PhoenixConfigurationUtil
---
 .../UngroupedAggregateRegionObserver.java  |  6 +-
 .../hbase/index/write/RecoveryIndexWriter.java | 10 +--
 .../phoenix/mapreduce/AbstractBulkLoadTool.java| 15 ++---
 .../apache/phoenix/mapreduce/OrphanViewTool.java   | 73 +-
 .../phoenix/mapreduce/PhoenixRecordWriter.java | 18 --
 .../mapreduce/index/DirectHTableWriter.java| 14 -
 .../mapreduce/index/IndexScrutinyMapper.java   | 24 +--
 .../apache/phoenix/mapreduce/index/IndexTool.java  | 55 +++-
 .../index/PhoenixIndexImportDirectMapper.java  | 26 
 .../mapreduce/index/PhoenixIndexImportMapper.java  | 16 +++--
 .../index/PhoenixIndexPartialBuildMapper.java  | 25 +---
 .../mapreduce/util/PhoenixConfigurationUtil.java   | 45 ++---
 12 files changed, 209 insertions(+), 118 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index 5923a75..dc7567b 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -834,7 +834,11 @@ public class UngroupedAggregateRegionObserver extends 
BaseScannerRegionObserver
 }
 try {
 if (targetHTable != null) {
-targetHTable.close();
+try {
+targetHTable.close();
+} catch (IOException e) {
+logger.error("Closing table: " + targetHTable + " 
failed: ", e);
+}
 }
 } finally {
 try {
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/RecoveryIndexWriter.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/RecoveryIndexWriter.java
index 35f0a6d..fb9 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/RecoveryIndexWriter.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/write/RecoveryIndexWriter.java
@@ -26,8 +26,6 @@ import java.util.Set;
 
 import org.apache.commons.logging.Log;
 import org.apache.commons.logging.LogFactory;
-import org.apache.hadoop.hbase.MasterNotRunningException;
-import org.apache.hadoop.hbase.ZooKeeperConnectionException;
 import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.hadoop.hbase.client.Mutation;
 import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
@@ -55,15 +53,13 @@ public class RecoveryIndexWriter extends IndexWriter {
  * Directly specify the {@link IndexCommitter} and {@link IndexFailurePolicy}. Both are expected to be fully setup
  * before calling.
  * 
- * @param committer
  * @param policy
  * @param env
+ * @param name
  * @throws IOException
- * @throws ZooKeeperConnectionException
- * @throws MasterNotRunningException
  */
 public RecoveryIndexWriter(IndexFailurePolicy policy, RegionCoprocessorEnvironment env, String name)
-throws MasterNotRunningException, ZooKeeperConnectionException, IOException {
+throws IOException {
 super(new TrackingParallelWriterIndexCommitter(), policy, env, name);
 this.admin = new HBaseAdmin(env.getConfiguration());
 }
@@ -125,7 +121,7 @@ public class RecoveryIndexWriter extends IndexWriter {
 try {
 admin.close();
 } catch (IOException e) {
-// closing silently
+LOG.error("Closing the admin failed: ", e);
 }
 }
 }
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/AbstractBulkLoadTool.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/AbstractBulkLoadTool.java
index 5252afb..8e18bf9 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/AbstractBulkLoadTool.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/AbstractBulkLoadTool.java
@@ -17,7 +17,6 @@
  */
 package org.apache.phoenix.mapreduce;
 
-import java.io.IOException;
 import java.sql.Connecti

[phoenix-connectors] branch master updated: PHOENIX-5197: Use extraOptions to set the configuration for Spark Workers

2019-04-09 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix-connectors.git


The following commit(s) were added to refs/heads/master by this push:
 new 7fb90d9  PHOENIX-5197: Use extraOptions to set the configuration for 
Spark Workers
7fb90d9 is described below

commit 7fb90d947f00accf55d9c440df6e5d15b9236daf
Author: Chinmay Kulkarni 
AuthorDate: Mon Apr 8 14:28:23 2019 -0700

PHOENIX-5197: Use extraOptions to set the configuration for Spark Workers
---
 .gitignore | 30 
 phoenix-spark/src/it/resources/globalSetup.sql |  4 +-
 phoenix-spark/src/it/resources/log4j.xml   | 18 ++---
 .../spark/datasource/v2/PhoenixDataSource.java | 42 ++-
 .../v2/reader/PhoenixDataSourceReadOptions.java| 13 +++-
 .../v2/reader/PhoenixDataSourceReader.java | 22 --
 .../v2/reader/PhoenixInputPartitionReader.java |  8 +-
 .../v2/writer/PhoenixDataSourceWriteOptions.java   | 23 +-
 .../datasource/v2/writer/PhoenixDataWriter.java|  7 +-
 .../v2/writer/PhoenixDatasourceWriter.java |  1 -
 phoenix-spark/src/{it => main}/resources/log4j.xml |  0
 .../spark/datasource/v2/PhoenixDataSourceTest.java | 86 ++
 phoenix-spark/src/{it => test}/resources/log4j.xml | 18 ++---
 13 files changed, 235 insertions(+), 37 deletions(-)

diff --git a/.gitignore b/.gitignore
new file mode 100644
index 000..2f47957
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,30 @@
+#general java
+*.class
+*.war
+*.jar
+
+# python
+*.pyc
+.checkstyle
+
+# eclipse stuffs
+.settings/*
+*/.settings/
+.classpath
+.project
+*/.externalToolBuilders
+*/maven-eclipse.xml
+
+# intellij stuff
+.idea/
+*.iml
+*.ipr
+*.iws
+
+#maven stuffs
+target/
+release/
+RESULTS/
+CSV_EXPORT/
+.DS_Store
+
diff --git a/phoenix-spark/src/it/resources/globalSetup.sql 
b/phoenix-spark/src/it/resources/globalSetup.sql
index efdb8cb..2f6e9ed 100644
--- a/phoenix-spark/src/it/resources/globalSetup.sql
+++ b/phoenix-spark/src/it/resources/globalSetup.sql
@@ -60,5 +60,5 @@ UPSERT INTO "small" VALUES ('key3', 'xyz', 3)
 
 CREATE TABLE MULTITENANT_TEST_TABLE (TENANT_ID VARCHAR NOT NULL, 
ORGANIZATION_ID VARCHAR, GLOBAL_COL1 VARCHAR  CONSTRAINT pk PRIMARY KEY 
(TENANT_ID, ORGANIZATION_ID)) MULTI_TENANT=true
 CREATE TABLE IF NOT EXISTS GIGANTIC_TABLE (ID INTEGER PRIMARY KEY,unsig_id 
UNSIGNED_INT,big_id BIGINT,unsig_long_id UNSIGNED_LONG,tiny_id 
TINYINT,unsig_tiny_id UNSIGNED_TINYINT,small_id SMALLINT,unsig_small_id 
UNSIGNED_SMALLINT,float_id FLOAT,unsig_float_id UNSIGNED_FLOAT,double_id 
DOUBLE,unsig_double_id UNSIGNED_DOUBLE,decimal_id DECIMAL,boolean_id 
BOOLEAN,time_id TIME,date_id DATE,timestamp_id TIMESTAMP,unsig_time_id 
UNSIGNED_TIME,unsig_date_id UNSIGNED_DATE,unsig_timestamp_id UNSIGNED [...]
- CREATE TABLE IF NOT EXISTS OUTPUT_GIGANTIC_TABLE (ID INTEGER PRIMARY 
KEY,unsig_id UNSIGNED_INT,big_id BIGINT,unsig_long_id UNSIGNED_LONG,tiny_id 
TINYINT,unsig_tiny_id UNSIGNED_TINYINT,small_id SMALLINT,unsig_small_id 
UNSIGNED_SMALLINT,float_id FLOAT,unsig_float_id UNSIGNED_FLOAT,double_id 
DOUBLE,unsig_double_id UNSIGNED_DOUBLE,decimal_id DECIMAL,boolean_id 
BOOLEAN,time_id TIME,date_id DATE,timestamp_id TIMESTAMP,unsig_time_id 
UNSIGNED_TIME,unsig_date_id UNSIGNED_DATE,unsig_timestamp_id  [...]
- upsert into GIGANTIC_TABLE 
values(0,2,3,4,-5,6,7,8,9.3,10.4,11.5,12.6,13.7,true,null,null,CURRENT_TIME(),CURRENT_TIME(),CURRENT_DATE(),CURRENT_TIME(),'This
 is random textA','a','a','a')
+CREATE TABLE IF NOT EXISTS OUTPUT_GIGANTIC_TABLE (ID INTEGER PRIMARY 
KEY,unsig_id UNSIGNED_INT,big_id BIGINT,unsig_long_id UNSIGNED_LONG,tiny_id 
TINYINT,unsig_tiny_id UNSIGNED_TINYINT,small_id SMALLINT,unsig_small_id 
UNSIGNED_SMALLINT,float_id FLOAT,unsig_float_id UNSIGNED_FLOAT,double_id 
DOUBLE,unsig_double_id UNSIGNED_DOUBLE,decimal_id DECIMAL,boolean_id 
BOOLEAN,time_id TIME,date_id DATE,timestamp_id TIMESTAMP,unsig_time_id 
UNSIGNED_TIME,unsig_date_id UNSIGNED_DATE,unsig_timestamp_id U [...]
+UPSERT INTO GIGANTIC_TABLE 
VALUES(0,2,3,4,-5,6,7,8,9.3,10.4,11.5,12.6,13.7,true,null,null,CURRENT_TIME(),CURRENT_TIME(),CURRENT_DATE(),CURRENT_TIME(),'This
 is random textA','a','a','a')
diff --git a/phoenix-spark/src/it/resources/log4j.xml 
b/phoenix-spark/src/it/resources/log4j.xml
index 10c2dc0..578a19b 100644
--- a/phoenix-spark/src/it/resources/log4j.xml
+++ b/phoenix-spark/src/it/resources/log4j.xml
@@ -32,39 +32,39 @@
   
 
   
 [log4j.xml changes elided: the appender and category XML elements were stripped during archiving]
 
diff --git 
a/phoenix-spark/src/main/java/org/apache/phoenix/spark/datasource/v2/PhoenixDataSource.java
 
b/phoenix-spark/src/
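
With this change, extra HBase/Phoenix client properties can be handed to
the connector through a single data source option and applied on the
Spark executors rather than only on the driver. A sketch of a read using
the Java API (the "phoenix" format name, the quorum address and the
forwarded property are assumptions based on the test usage visible in
the later PHOENIX-5232 message; only the PhoenixDataSource constants
come from this commit):

    import org.apache.phoenix.spark.datasource.v2.PhoenixDataSource;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class PhoenixSparkReadExample {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("phoenix-read").getOrCreate();
            // PHOENIX_CONFIGS appears to carry property=value pairs that the
            // workers apply when opening their Phoenix connections.
            Dataset<Row> df = spark.read()
                    .format("phoenix")
                    .option("table", "TABLE1")
                    .option(PhoenixDataSource.ZOOKEEPER_URL, "localhost:2181")
                    .option(PhoenixDataSource.PHOENIX_CONFIGS,
                            "hbase.client.retries.number=3")
                    .load();
            df.show();
        }
    }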

[phoenix-connectors] branch master updated: PHOENIX-5232: PhoenixDataWriter in Phoenix-Spark connector does not commit when mutation batch size is reached

2019-04-11 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix-connectors.git


The following commit(s) were added to refs/heads/master by this push:
 new 0e5ab11  PHOENIX-5232: PhoenixDataWriter in Phoenix-Spark connector 
does not commit when mutation batch size is reached
0e5ab11 is described below

commit 0e5ab11c119fbe8be0b5a82c0dfa38308851279e
Author: Chinmay Kulkarni 
AuthorDate: Wed Apr 10 15:50:42 2019 -0700

PHOENIX-5232: PhoenixDataWriter in Phoenix-Spark connector does not commit 
when mutation batch size is reached
---
 phoenix-spark/pom.xml  |  4 +-
 .../org/apache/phoenix/spark/PhoenixSparkIT.scala  | 74 +-
 .../spark/datasource/v2/PhoenixDataSource.java | 35 ++
 .../v2/reader/PhoenixDataSourceReadOptions.java| 14 ++--
 .../v2/reader/PhoenixDataSourceReader.java | 14 ++--
 .../v2/reader/PhoenixInputPartition.java   | 20 --
 .../v2/reader/PhoenixInputPartitionReader.java | 28 +---
 .../v2/writer/PhoenixDataSourceWriteOptions.java   | 37 ++-
 .../datasource/v2/writer/PhoenixDataWriter.java| 28 ++--
 .../v2/writer/PhoenixDataWriterFactory.java|  6 +-
 .../v2/writer/PhoenixDatasourceWriter.java | 45 -
 .../phoenix/spark/FilterExpressionCompiler.scala   |  3 +-
 .../org/apache/phoenix/spark/PhoenixRDD.scala  |  5 +-
 .../phoenix/spark/PhoenixRecordWritable.scala  |  2 +-
 .../org/apache/phoenix/spark/PhoenixRelation.scala |  2 +-
 .../org/apache/phoenix/spark/SparkSchemaUtil.scala | 13 +++-
 .../datasources/jdbc/PhoenixJdbcDialect.scala  |  2 +-
 .../execution/datasources/jdbc/SparkJdbcUtil.scala |  5 +-
 .../datasource/v2/PhoenixTestingDataSource.java| 53 
 .../v2/reader/PhoenixTestingDataSourceReader.java} | 21 +++---
 .../v2/reader/PhoenixTestingInputPartition.java}   | 20 ++
 .../PhoenixTestingInputPartitionReader.java}   | 25 
 .../v2/writer/PhoenixTestingDataSourceWriter.java} | 30 -
 .../v2/writer/PhoenixTestingDataWriter.java}   | 30 -
 .../writer/PhoenixTestingDataWriterFactory.java}   | 12 ++--
 .../writer/PhoenixTestingWriterCommitMessage.java} | 20 +++---
 26 files changed, 367 insertions(+), 181 deletions(-)

diff --git a/phoenix-spark/pom.xml b/phoenix-spark/pom.xml
index 0bbd983..56d2111 100644
--- a/phoenix-spark/pom.xml
+++ b/phoenix-spark/pom.xml
@@ -478,8 +478,8 @@
   
 
   
-src/it/scala
-
src/it/resources
+src/test/java
+
src/test/resources
 
 
 org.apache.maven.plugins
diff --git 
a/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala 
b/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala
index b40b638..58910ce 100644
--- a/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala
+++ b/phoenix-spark/src/it/scala/org/apache/phoenix/spark/PhoenixSparkIT.scala
@@ -16,12 +16,17 @@ package org.apache.phoenix.spark
 import java.sql.DriverManager
 import java.util.Date
 
+import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil
 import org.apache.phoenix.schema.types.PVarchar
-import org.apache.phoenix.spark.datasource.v2.PhoenixDataSource
+import org.apache.phoenix.spark.datasource.v2.{PhoenixDataSource, 
PhoenixTestingDataSource}
+import 
org.apache.phoenix.spark.datasource.v2.reader.PhoenixTestingInputPartitionReader
+import 
org.apache.phoenix.spark.datasource.v2.writer.PhoenixTestingDataSourceWriter
 import org.apache.phoenix.util.{ColumnInfo, SchemaUtil}
-import org.apache.spark.sql.types._
+import org.apache.spark.SparkException
+import org.apache.spark.sql.types.{ArrayType, BinaryType, ByteType, DateType, 
IntegerType, LongType, StringType, StructField, StructType}
 import org.apache.spark.sql.{Row, SaveMode}
 
+import scala.collection.mutable
 import scala.collection.mutable.ListBuffer
 
 /**
@@ -249,6 +254,30 @@ class PhoenixSparkIT extends AbstractPhoenixSparkIT {
 count shouldEqual 1L
   }
 
+  test("Can use extraOptions to set configs for workers during reads") {
+// Pass in true, so we will get null when fetching the current row, leading to an NPE
+var extraOptions = PhoenixTestingInputPartitionReader.RETURN_NULL_CURR_ROW + "=true"
+var rdd = spark.sqlContext.read
+  .format(PhoenixTestingDataSource.TEST_SOURCE)
+  .options( Map("table" -> "TABLE1", PhoenixDataSource.ZOOKEEPER_URL -> quorumAddress,
+PhoenixDataSource.PHOENIX_CONFIGS -> extraOptions)).load
+
+// Expect to get a NullPointerException in the executors
+var error = intercept[SparkException] {
+  rdd.take(2)(0)(1)
+}
+assert(error.getCause.isInstanceOf[NullPointerException])
+
+// Pass in false, so we will get the expected rows
+extraOptions = PhoenixTestingInputPartitio
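
The bug was that the writer only flushed in commit(), so rows buffered
past the configured mutation batch size were never sent until the task
finished. The same flush-on-batch-boundary pattern in plain Phoenix
JDBC, as a sketch (the batch size, URL and table are illustrative; the
connector itself reads the batch size from its configuration):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BatchedUpsertExample {
        public static void main(String[] args) throws Exception {
            final int batchSize = 1000; // illustrative value
            try (Connection conn =
                    DriverManager.getConnection("jdbc:phoenix:localhost:2181")) {
                conn.setAutoCommit(false);
                try (PreparedStatement stmt =
                        conn.prepareStatement("UPSERT INTO TABLE1 (ID, COL1) VALUES (?, ?)")) {
                    for (int i = 0; i < 10000; i++) {
                        stmt.setInt(1, i);
                        stmt.setString(2, "v" + i);
                        stmt.execute();
                        // Flush whenever the batch fills, not only at the end;
                        // this is the behavior PHOENIX-5232 restores in the writer.
                        if ((i + 1) % batchSize == 0) {
                            conn.commit();
                        }
                    }
                }
                conn.commit(); // flush the final partial batch
            }
        }
    }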

[phoenix-connectors] branch master updated: PHOENIX-5232: PhoenixDataWriter in Phoenix-Spark connector does not commit when mutation batch size is reached (addendum)

2019-04-11 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix-connectors.git


The following commit(s) were added to refs/heads/master by this push:
 new fb4efcf  PHOENIX-5232: PhoenixDataWriter in Phoenix-Spark connector 
does not commit when mutation batch size is reached (addendum)
fb4efcf is described below

commit fb4efcf1033b4aab9dbc3a88a830ca4bbc7733ef
Author: Chinmay Kulkarni 
AuthorDate: Thu Apr 11 17:47:31 2019 -0700

PHOENIX-5232: PhoenixDataWriter in Phoenix-Spark connector does not commit 
when mutation batch size is reached (addendum)
---
 .../writer/{PhoenixDatasourceWriter.java => PhoenixDataSourceWriter.java} | 0
 1 file changed, 0 insertions(+), 0 deletions(-)

diff --git 
a/phoenix-spark/src/main/java/org/apache/phoenix/spark/datasource/v2/writer/PhoenixDatasourceWriter.java
 
b/phoenix-spark/src/main/java/org/apache/phoenix/spark/datasource/v2/writer/PhoenixDataSourceWriter.java
similarity index 100%
rename from 
phoenix-spark/src/main/java/org/apache/phoenix/spark/datasource/v2/writer/PhoenixDatasourceWriter.java
rename to 
phoenix-spark/src/main/java/org/apache/phoenix/spark/datasource/v2/writer/PhoenixDataSourceWriter.java



[phoenix] branch 4.14-HBase-1.2 updated: PHOENIX-5122: PHOENIX-4322 breaks client backward compatibility

2019-04-12 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.14-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.2 by this push:
 new b0a4650  PHOENIX-5122: PHOENIX-4322 breaks client backward 
compatibility
b0a4650 is described below

commit b0a465083a045f68179038bccdfd5eb86bb602c9
Author: Jacob Isaac 
AuthorDate: Wed Feb 27 14:11:55 2019 -0800

PHOENIX-5122: PHOENIX-4322 breaks client backward compatibility
---
 .../expression/RowValueConstructorExpression.java  | 53 +++---
 1 file changed, 47 insertions(+), 6 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
index 9bb7234..c06bdc8 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
@@ -28,6 +28,7 @@ import java.io.DataInput;
 import java.io.DataOutput;
 import java.io.IOException;
 import java.util.Arrays;
+import java.util.BitSet;
 import java.util.List;
 
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
@@ -47,13 +48,42 @@ public class RowValueConstructorExpression extends 
BaseCompoundExpression {
 private int partialEvalIndex = -1;
 private int estimatedByteSize;
 
+// The boolean field that indicated the object is a literal constant, 
+// has been repurposed to a bitset and now holds additional information. 
+// This is to facilitate b/w compat to 4.13 clients.
+// @see <a href="https://issues.apache.org/jira/browse/PHOENIX-5122">PHOENIX-5122</a>
+private BitSet extraFields;
+
+// Important : When you want to add new bits make sure to add those 
towards the end, 
+// else will break b/w compat again.
+private enum ExtraFieldPosition {
+   
+   LITERAL_CONSTANT(0),
+   STRIP_TRAILING_SEPARATOR_BYTE(1);
+   
+   private int bitPosition;
+
+   private ExtraFieldPosition(int position) {
+   bitPosition = position;
+   }
+   
+   private int getBitPosition() {
+   return bitPosition;
+   }
+}
+
 public RowValueConstructorExpression() {
 }
 
 public RowValueConstructorExpression(List<Expression> children, boolean isConstant) {
 super(children);
+extraFields = new BitSet(8);
+extraFields.set(ExtraFieldPosition.STRIP_TRAILING_SEPARATOR_BYTE.getBitPosition());
+if (isConstant) {
+extraFields.set(ExtraFieldPosition.LITERAL_CONSTANT.getBitPosition());
+}
 estimatedByteSize = 0;
-init(isConstant);
+init();
 }
 
 public RowValueConstructorExpression clone(List<Expression> children) {
@@ -82,24 +112,34 @@ public class RowValueConstructorExpression extends 
BaseCompoundExpression {
 @Override
 public void readFields(DataInput input) throws IOException {
 super.readFields(input);
-init(input.readBoolean());
+extraFields = BitSet.valueOf(new byte[] {input.readByte()});
+init();
 }
 
 @Override
 public void write(DataOutput output) throws IOException {
 super.write(output);
-output.writeBoolean(literalExprPtr != null);
+byte[] b = extraFields.toByteArray();
+output.writeByte((int)(b.length > 0 ? b[0] & 0xff  : 0));
 }
 
-private void init(boolean isConstant) {
+private void init() {
 this.ptrs = new ImmutableBytesWritable[children.size()];
-if(isConstant) {
+if (isConstant()) {
 ImmutableBytesWritable ptr = new ImmutableBytesWritable();
 this.evaluate(null, ptr);
 literalExprPtr = ptr;
 }
 }
 
+private boolean isConstant() {
+return extraFields.get(ExtraFieldPosition.LITERAL_CONSTANT.getBitPosition());
+}
+
+private boolean isStripTrailingSepByte() {
+return extraFields.get(ExtraFieldPosition.STRIP_TRAILING_SEPARATOR_BYTE.getBitPosition());
+}
+
 @Override
 public PDataType getDataType() {
 return PVarbinary.INSTANCE;
@@ -200,7 +240,8 @@ public class RowValueConstructorExpression extends 
BaseCompoundExpression {
 for (int k = expressionCount -1 ; 
 k >=0 &&  getChildren().get(k).getDataType() != 
null 
   && 
!getChildren().get(k).getDataType().isFixedWidth()
-  && outputBytes[outputSize-1] == SchemaUtil.getSeparatorByte(true, false, getChildren().get(k)) ; k--) {
+  && outputBytes[outputSize-1] == SchemaUtil.getSeparatorByte(true, false, getChildren().get(k))
+   
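
The repurposing works on the wire because DataOutput.writeBoolean also
writes exactly one byte, so packing the flags into the low bits of a
single byte keeps the serialized size identical for old readers. A
standalone sketch of the round trip (bit positions follow the enum in
the diff; the streams are ordinary java.io classes):

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.util.BitSet;

    public class BitSetCompatExample {
        public static void main(String[] args) throws Exception {
            BitSet extraFields = new BitSet(8);
            extraFields.set(0); // LITERAL_CONSTANT
            extraFields.set(1); // STRIP_TRAILING_SEPARATOR_BYTE

            // Serialize: one byte on the wire, exactly like the old writeBoolean.
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (DataOutputStream out = new DataOutputStream(bos)) {
                byte[] b = extraFields.toByteArray();
                out.writeByte(b.length > 0 ? b[0] & 0xff : 0);
            }

            // A new reader recovers both flags from the same byte.
            try (DataInputStream in =
                    new DataInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
                BitSet read = BitSet.valueOf(new byte[] { in.readByte() });
                System.out.println("constant=" + read.get(0) + " strip=" + read.get(1));
            }
        }
    }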

[phoenix] branch 4.14-HBase-1.3 updated: PHOENIX-5122: PHOENIX-4322 breaks client backward compatibility

2019-04-12 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.3 by this push:
 new 3022d97  PHOENIX-5122: PHOENIX-4322 breaks client backward 
compatibility
3022d97 is described below

commit 3022d9772e547705d85e37ea14fb0c25cb9e55a5
Author: Jacob Isaac 
AuthorDate: Wed Feb 27 14:11:55 2019 -0800

PHOENIX-5122: PHOENIX-4322 breaks client backward compatibility
---
 .../expression/RowValueConstructorExpression.java  | 53 +++---
 1 file changed, 47 insertions(+), 6 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
index 9bb7234..c06bdc8 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
@@ -28,6 +28,7 @@ import java.io.DataInput;
 import java.io.DataOutput;
 import java.io.IOException;
 import java.util.Arrays;
+import java.util.BitSet;
 import java.util.List;
 
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
@@ -47,13 +48,42 @@ public class RowValueConstructorExpression extends 
BaseCompoundExpression {
 private int partialEvalIndex = -1;
 private int estimatedByteSize;
 
+// The boolean field that indicated the object is a literal constant, 
+// has been repurposed to a bitset and now holds additional information. 
+// This is to facilitate b/w compat to 4.13 clients.
+// @see <a href="https://issues.apache.org/jira/browse/PHOENIX-5122">PHOENIX-5122</a>
+private BitSet extraFields;
+
+// Important : When you want to add new bits make sure to add those 
towards the end, 
+// else will break b/w compat again.
+private enum ExtraFieldPosition {
+   
+   LITERAL_CONSTANT(0),
+   STRIP_TRAILING_SEPARATOR_BYTE(1);
+   
+   private int bitPosition;
+
+   private ExtraFieldPosition(int position) {
+   bitPosition = position;
+   }
+   
+   private int getBitPosition() {
+   return bitPosition;
+   }
+}
+
 public RowValueConstructorExpression() {
 }
 
 public RowValueConstructorExpression(List<Expression> children, boolean isConstant) {
 super(children);
+extraFields = new BitSet(8);
+extraFields.set(ExtraFieldPosition.STRIP_TRAILING_SEPARATOR_BYTE.getBitPosition());
+if (isConstant) {
+extraFields.set(ExtraFieldPosition.LITERAL_CONSTANT.getBitPosition());
+}
 estimatedByteSize = 0;
-init(isConstant);
+init();
 }
 
 public RowValueConstructorExpression clone(List<Expression> children) {
@@ -82,24 +112,34 @@ public class RowValueConstructorExpression extends 
BaseCompoundExpression {
 @Override
 public void readFields(DataInput input) throws IOException {
 super.readFields(input);
-init(input.readBoolean());
+extraFields = BitSet.valueOf(new byte[] {input.readByte()});
+init();
 }
 
 @Override
 public void write(DataOutput output) throws IOException {
 super.write(output);
-output.writeBoolean(literalExprPtr != null);
+byte[] b = extraFields.toByteArray();
+output.writeByte((int)(b.length > 0 ? b[0] & 0xff  : 0));
 }
 
-private void init(boolean isConstant) {
+private void init() {
 this.ptrs = new ImmutableBytesWritable[children.size()];
-if(isConstant) {
+if (isConstant()) {
 ImmutableBytesWritable ptr = new ImmutableBytesWritable();
 this.evaluate(null, ptr);
 literalExprPtr = ptr;
 }
 }
 
+private boolean isConstant() {
+return extraFields.get(ExtraFieldPosition.LITERAL_CONSTANT.getBitPosition());
+}
+
+private boolean isStripTrailingSepByte() {
+return extraFields.get(ExtraFieldPosition.STRIP_TRAILING_SEPARATOR_BYTE.getBitPosition());
+}
+
 @Override
 public PDataType getDataType() {
 return PVarbinary.INSTANCE;
@@ -200,7 +240,8 @@ public class RowValueConstructorExpression extends 
BaseCompoundExpression {
 for (int k = expressionCount -1 ; 
 k >=0 &&  getChildren().get(k).getDataType() != 
null 
   && 
!getChildren().get(k).getDataType().isFixedWidth()
-  && outputBytes[outputSize-1] == SchemaUtil.getSeparatorByte(true, false, getChildren().get(k)) ; k--) {
+  && outputBytes[outputSize-1] == SchemaUtil.getSeparatorByte(true, false, getChildren().get(k))
+   

[phoenix] branch 4.14-HBase-1.4 updated: PHOENIX-5122: PHOENIX-4322 breaks client backward compatibility

2019-04-12 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.14-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.4 by this push:
 new bb6a9a1  PHOENIX-5122: PHOENIX-4322 breaks client backward 
compatibility
bb6a9a1 is described below

commit bb6a9a13aa67296126a75da774f75717d79a8c99
Author: Jacob Isaac 
AuthorDate: Wed Feb 27 14:11:55 2019 -0800

PHOENIX-5122: PHOENIX-4322 breaks client backward compatibility
---
 .../expression/RowValueConstructorExpression.java  | 53 +++---
 1 file changed, 47 insertions(+), 6 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
index 9bb7234..c06bdc8 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/expression/RowValueConstructorExpression.java
@@ -28,6 +28,7 @@ import java.io.DataInput;
 import java.io.DataOutput;
 import java.io.IOException;
 import java.util.Arrays;
+import java.util.BitSet;
 import java.util.List;
 
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
@@ -47,13 +48,42 @@ public class RowValueConstructorExpression extends 
BaseCompoundExpression {
 private int partialEvalIndex = -1;
 private int estimatedByteSize;
 
+// The boolean field that indicated the object is a literal constant, 
+// has been repurposed to a bitset and now holds additional information. 
+// This is to facilitate b/w compat to 4.13 clients.
+// @see <a href="https://issues.apache.org/jira/browse/PHOENIX-5122">PHOENIX-5122</a>
+private BitSet extraFields;
+
+// Important : When you want to add new bits make sure to add those 
towards the end, 
+// else will break b/w compat again.
+private enum ExtraFieldPosition {
+   
+   LITERAL_CONSTANT(0),
+   STRIP_TRAILING_SEPARATOR_BYTE(1);
+   
+   private int bitPosition;
+
+   private ExtraFieldPosition(int position) {
+   bitPosition = position;
+   }
+   
+   private int getBitPosition() {
+   return bitPosition;
+   }
+}
+
 public RowValueConstructorExpression() {
 }
 
 public RowValueConstructorExpression(List<Expression> children, boolean isConstant) {
 super(children);
+extraFields = new BitSet(8);
+extraFields.set(ExtraFieldPosition.STRIP_TRAILING_SEPARATOR_BYTE.getBitPosition());
+if (isConstant) {
+extraFields.set(ExtraFieldPosition.LITERAL_CONSTANT.getBitPosition());
+}
 estimatedByteSize = 0;
-init(isConstant);
+init();
 }
 
 public RowValueConstructorExpression clone(List<Expression> children) {
@@ -82,24 +112,34 @@ public class RowValueConstructorExpression extends 
BaseCompoundExpression {
 @Override
 public void readFields(DataInput input) throws IOException {
 super.readFields(input);
-init(input.readBoolean());
+extraFields = BitSet.valueOf(new byte[] {input.readByte()});
+init();
 }
 
 @Override
 public void write(DataOutput output) throws IOException {
 super.write(output);
-output.writeBoolean(literalExprPtr != null);
+byte[] b = extraFields.toByteArray();
+output.writeByte((int)(b.length > 0 ? b[0] & 0xff  : 0));
 }
 
-private void init(boolean isConstant) {
+private void init() {
 this.ptrs = new ImmutableBytesWritable[children.size()];
-if(isConstant) {
+if (isConstant()) {
 ImmutableBytesWritable ptr = new ImmutableBytesWritable();
 this.evaluate(null, ptr);
 literalExprPtr = ptr;
 }
 }
 
+private boolean isConstant() {
+return extraFields.get(ExtraFieldPosition.LITERAL_CONSTANT.getBitPosition());
+}
+
+private boolean isStripTrailingSepByte() {
+return extraFields.get(ExtraFieldPosition.STRIP_TRAILING_SEPARATOR_BYTE.getBitPosition());
+}
+
 @Override
 public PDataType getDataType() {
 return PVarbinary.INSTANCE;
@@ -200,7 +240,8 @@ public class RowValueConstructorExpression extends 
BaseCompoundExpression {
 for (int k = expressionCount -1 ; 
 k >=0 &&  getChildren().get(k).getDataType() != 
null 
   && 
!getChildren().get(k).getDataType().isFixedWidth()
-  && outputBytes[outputSize-1] == SchemaUtil.getSeparatorByte(true, false, getChildren().get(k)) ; k--) {
+  && outputBytes[outputSize-1] == SchemaUtil.getSeparatorByte(true, false, getChildren().get(k))
+   

[phoenix] branch master updated: PHOENIX-5187 Avoid using FileInputStream and FileOutputStream

2019-04-22 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new e8c7857  PHOENIX-5187 Avoid using FileInputStream and FileOutputStream
e8c7857 is described below

commit e8c78574a737a626eda329449af2fee4dee704ca
Author: Aman Poonia 
AuthorDate: Mon Mar 11 23:14:23 2019 +0530

PHOENIX-5187 Avoid using FileInputStream and FileOutputStream
---
 .../main/java/org/apache/phoenix/cache/ServerCacheClient.java| 9 +
 .../src/main/java/org/apache/phoenix/iterate/BufferedQueue.java  | 7 +++
 .../java/org/apache/phoenix/iterate/SpoolingResultIterator.java  | 4 ++--
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java 
b/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
index 822e255..bb96637 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
@@ -22,9 +22,10 @@ import static 
org.apache.phoenix.util.LogUtil.addCustomAnnotations;
 
 import java.io.Closeable;
 import java.io.File;
-import java.io.FileInputStream;
-import java.io.FileOutputStream;
 import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.nio.file.Files;
 import java.sql.SQLException;
 import java.util.ArrayList;
 import java.util.Collections;
@@ -148,7 +149,7 @@ public class ServerCacheClient {
 } catch (InsufficientMemoryException e) {
 this.outputFile = File.createTempFile("HashJoinCacheSpooler", ".bin", new File(services.getProps()
 .get(QueryServices.SPOOL_DIRECTORY, QueryServicesOptions.DEFAULT_SPOOL_DIRECTORY)));
-try (FileOutputStream fio = new FileOutputStream(outputFile)) {
+try (OutputStream fio = Files.newOutputStream(outputFile.toPath())) {
 fio.write(cachePtr.get(), cachePtr.getOffset(), 
cachePtr.getLength());
 }
 }
@@ -158,7 +159,7 @@ public class ServerCacheClient {
 
 public ImmutableBytesWritable getCachePtr() throws IOException {
 if(this.outputFile!=null){
-try (FileInputStream fio = new FileInputStream(outputFile)) {
+try (InputStream fio = Files.newInputStream(outputFile.toPath())) {
 byte[] b = new byte[this.size];
 fio.read(b);
 cachePtr = new ImmutableBytesWritable(b);
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/BufferedQueue.java 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/BufferedQueue.java
index 1a646e6..3352641 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/iterate/BufferedQueue.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/iterate/BufferedQueue.java
@@ -23,9 +23,8 @@ import java.io.Closeable;
 import java.io.DataInputStream;
 import java.io.DataOutputStream;
 import java.io.File;
-import java.io.FileInputStream;
-import java.io.FileOutputStream;
 import java.io.IOException;
+import java.nio.file.Files;
 import java.util.AbstractQueue;
 import java.util.Comparator;
 import java.util.Iterator;
@@ -304,7 +303,7 @@ public abstract class BufferedQueue extends 
AbstractQueue implements SizeA
 if (totalResultSize >= thresholdBytes) {
 this.file = File.createTempFile(UUID.randomUUID().toString(), 
null);
 try (DataOutputStream out = new DataOutputStream(
-new BufferedOutputStream(new FileOutputStream(file)))) {
+new BufferedOutputStream(Files.newOutputStream(file.toPath())))) {
 int resSize = inMemQueue.size();
 for (int i = 0; i < resSize; i++) {
 T e = inMemQueue.poll();
@@ -342,7 +341,7 @@ public abstract class BufferedQueue extends 
AbstractQueue implements SizeA
 this.next = null;
 try {
 this.in = new DataInputStream(
-new BufferedInputStream(new FileInputStream(file)));
+new BufferedInputStream(Files.newInputStream(file.toPath())));
 } catch (IOException e) {
 throw new RuntimeException(e);
 }
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/SpoolingResultIterator.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/SpoolingResultIterator.java
index fa90b1a..0823026 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/SpoolingResultIterator.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/SpoolingResultIt
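
One commonly cited reason for this swap is that FileInputStream and
FileOutputStream override finalize() (in the Java 8 era), so every
instance adds finalizer-queue work for the GC, while the
java.nio.file.Files factories return streams without finalizers and
report failures with more precise exceptions. A minimal before/after
sketch (the path and payload are illustrative):

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class NioStreamsExample {
        public static void main(String[] args) throws Exception {
            Path spool = Paths.get("/tmp/phoenix-spool.bin");

            // Instead of: new FileOutputStream(spool.toFile())
            try (OutputStream out = Files.newOutputStream(spool)) {
                out.write(new byte[] { 1, 2, 3 });
            }

            // Instead of: new FileInputStream(spool.toFile())
            try (InputStream in = Files.newInputStream(spool)) {
                byte[] buf = new byte[3];
                int n = in.read(buf); // read() may return fewer bytes than requested
                System.out.println("read " + n + " bytes");
            }
        }
    }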

[phoenix] branch 4.x-HBase-1.2 updated: PHOENIX-5187 Avoid using FileInputStream and FileOutputStream

2019-04-22 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.2 by this push:
 new f71d588  PHOENIX-5187 Avoid using FileInputStream and FileOutputStream
f71d588 is described below

commit f71d588ba2edde17a9f12fb2a47da2766d4c2939
Author: Aman Poonia 
AuthorDate: Mon Mar 11 23:14:23 2019 +0530

PHOENIX-5187 Avoid using FileInputStream and FileOutputStream
---
 .../main/java/org/apache/phoenix/cache/ServerCacheClient.java| 9 +
 .../src/main/java/org/apache/phoenix/iterate/BufferedQueue.java  | 7 +++
 .../java/org/apache/phoenix/iterate/SpoolingResultIterator.java  | 4 ++--
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java 
b/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
index 93d16f5..b6b53df 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
@@ -22,9 +22,10 @@ import static 
org.apache.phoenix.util.LogUtil.addCustomAnnotations;
 
 import java.io.Closeable;
 import java.io.File;
-import java.io.FileInputStream;
-import java.io.FileOutputStream;
 import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.nio.file.Files;
 import java.sql.SQLException;
 import java.util.ArrayList;
 import java.util.Collections;
@@ -148,7 +149,7 @@ public class ServerCacheClient {
 } catch (InsufficientMemoryException e) {
 this.outputFile = File.createTempFile("HashJoinCacheSpooler", ".bin", new File(services.getProps()
 .get(QueryServices.SPOOL_DIRECTORY, QueryServicesOptions.DEFAULT_SPOOL_DIRECTORY)));
-try (FileOutputStream fio = new FileOutputStream(outputFile)) {
+try (OutputStream fio = Files.newOutputStream(outputFile.toPath())) {
 fio.write(cachePtr.get(), cachePtr.getOffset(), 
cachePtr.getLength());
 }
 }
@@ -158,7 +159,7 @@ public class ServerCacheClient {
 
 public ImmutableBytesWritable getCachePtr() throws IOException {
 if(this.outputFile!=null){
-try (FileInputStream fio = new FileInputStream(outputFile)) {
+try (InputStream fio = Files.newInputStream(outputFile.toPath())) {
 byte[] b = new byte[this.size];
 fio.read(b);
 cachePtr = new ImmutableBytesWritable(b);
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/BufferedQueue.java 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/BufferedQueue.java
index 1a646e6..3352641 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/iterate/BufferedQueue.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/iterate/BufferedQueue.java
@@ -23,9 +23,8 @@ import java.io.Closeable;
 import java.io.DataInputStream;
 import java.io.DataOutputStream;
 import java.io.File;
-import java.io.FileInputStream;
-import java.io.FileOutputStream;
 import java.io.IOException;
+import java.nio.file.Files;
 import java.util.AbstractQueue;
 import java.util.Comparator;
 import java.util.Iterator;
@@ -304,7 +303,7 @@ public abstract class BufferedQueue extends 
AbstractQueue implements SizeA
 if (totalResultSize >= thresholdBytes) {
 this.file = File.createTempFile(UUID.randomUUID().toString(), 
null);
 try (DataOutputStream out = new DataOutputStream(
-new BufferedOutputStream(new FileOutputStream(file)))) {
+new BufferedOutputStream(Files.newOutputStream(file.toPath())))) {
 int resSize = inMemQueue.size();
 for (int i = 0; i < resSize; i++) {
 T e = inMemQueue.poll();
@@ -342,7 +341,7 @@ public abstract class BufferedQueue extends 
AbstractQueue implements SizeA
 this.next = null;
 try {
 this.in = new DataInputStream(
-new BufferedInputStream(new FileInputStream(file)));
+new BufferedInputStream(Files.newInputStream(file.toPath())));
 } catch (IOException e) {
 throw new RuntimeException(e);
 }
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/SpoolingResultIterator.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/SpoolingResultIterator.java
index fa90b1a..0823026 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/SpoolingResultIterator.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/S

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5187 Avoid using FileInputStream and FileOutputStream

2019-04-22 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 892fd34  PHOENIX-5187 Avoid using FileInputStream and FileOutputStream
892fd34 is described below

commit 892fd34ce34aff6da7a5348e25e7312b5ea132c7
Author: Aman Poonia 
AuthorDate: Mon Mar 11 23:14:23 2019 +0530

PHOENIX-5187 Avoid using FileInputStream and FileOutputStream
---
 .../main/java/org/apache/phoenix/cache/ServerCacheClient.java| 9 +
 .../src/main/java/org/apache/phoenix/iterate/BufferedQueue.java  | 7 +++
 .../java/org/apache/phoenix/iterate/SpoolingResultIterator.java  | 4 ++--
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java 
b/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
index 93d16f5..b6b53df 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
@@ -22,9 +22,10 @@ import static 
org.apache.phoenix.util.LogUtil.addCustomAnnotations;
 
 import java.io.Closeable;
 import java.io.File;
-import java.io.FileInputStream;
-import java.io.FileOutputStream;
 import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.nio.file.Files;
 import java.sql.SQLException;
 import java.util.ArrayList;
 import java.util.Collections;
@@ -148,7 +149,7 @@ public class ServerCacheClient {
 } catch (InsufficientMemoryException e) {
 this.outputFile = File.createTempFile("HashJoinCacheSpooler", ".bin", new File(services.getProps()
 .get(QueryServices.SPOOL_DIRECTORY, QueryServicesOptions.DEFAULT_SPOOL_DIRECTORY)));
-try (FileOutputStream fio = new FileOutputStream(outputFile)) {
+try (OutputStream fio = Files.newOutputStream(outputFile.toPath())) {
 fio.write(cachePtr.get(), cachePtr.getOffset(), 
cachePtr.getLength());
 }
 }
@@ -158,7 +159,7 @@ public class ServerCacheClient {
 
 public ImmutableBytesWritable getCachePtr() throws IOException {
 if(this.outputFile!=null){
-try (FileInputStream fio = new FileInputStream(outputFile)) {
+try (InputStream fio = 
Files.newInputStream(outputFile.toPath())) {
 byte[] b = new byte[this.size];
 fio.read(b);
 cachePtr = new ImmutableBytesWritable(b);
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/BufferedQueue.java 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/BufferedQueue.java
index 1a646e6..3352641 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/iterate/BufferedQueue.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/iterate/BufferedQueue.java
@@ -23,9 +23,8 @@ import java.io.Closeable;
 import java.io.DataInputStream;
 import java.io.DataOutputStream;
 import java.io.File;
-import java.io.FileInputStream;
-import java.io.FileOutputStream;
 import java.io.IOException;
+import java.nio.file.Files;
 import java.util.AbstractQueue;
 import java.util.Comparator;
 import java.util.Iterator;
@@ -304,7 +303,7 @@ public abstract class BufferedQueue extends 
AbstractQueue implements SizeA
 if (totalResultSize >= thresholdBytes) {
 this.file = File.createTempFile(UUID.randomUUID().toString(), 
null);
 try (DataOutputStream out = new DataOutputStream(
-new BufferedOutputStream(new FileOutputStream(file)))) {
+new BufferedOutputStream(Files.newOutputStream(file.toPath())))) {
 int resSize = inMemQueue.size();
 for (int i = 0; i < resSize; i++) {
 T e = inMemQueue.poll();
@@ -342,7 +341,7 @@ public abstract class BufferedQueue extends 
AbstractQueue implements SizeA
 this.next = null;
 try {
 this.in = new DataInputStream(
-new BufferedInputStream(new 
FileInputStream(file)));
+new BufferedInputStream(Files.newInputStream(file.toPath())));
 } catch (IOException e) {
 throw new RuntimeException(e);
 }
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/SpoolingResultIterator.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/SpoolingResultIterator.java
index fa90b1a..0823026 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/SpoolingResultIterator.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/S
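
One detail in the ServerCacheClient hunk above is independent of the stream
change: InputStream.read(byte[]) is not guaranteed to fill the buffer in a
single call, even for local files. The patch keeps the single fio.read(b)
call; a defensive variant would use DataInputStream.readFully, as in this
hypothetical helper (readExactly is not part of the patch):

    import java.io.DataInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class ReadFullyDemo {
        // Reads exactly 'size' bytes from 'path', throwing EOFException if
        // the file is shorter, instead of silently returning a partial buffer.
        static byte[] readExactly(Path path, int size) throws IOException {
            byte[] b = new byte[size];
            try (InputStream in = Files.newInputStream(path)) {
                new DataInputStream(in).readFully(b);
            }
            return b;
        }
    }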

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5187 Avoid using FileInputStream and FileOutputStream

2019-04-22 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 87dc326  PHOENIX-5187 Avoid using FileInputStream and FileOutputStream
87dc326 is described below

commit 87dc326adc13e3f3e8ef496a476b4684c94c19c9
Author: Aman Poonia 
AuthorDate: Mon Mar 11 23:14:23 2019 +0530

PHOENIX-5187 Avoid using FileInputStream and FileOutputStream
---
 .../main/java/org/apache/phoenix/cache/ServerCacheClient.java| 9 +
 .../src/main/java/org/apache/phoenix/iterate/BufferedQueue.java  | 7 +++
 .../java/org/apache/phoenix/iterate/SpoolingResultIterator.java  | 4 ++--
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java 
b/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
index 93d16f5..b6b53df 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/cache/ServerCacheClient.java
@@ -22,9 +22,10 @@ import static 
org.apache.phoenix.util.LogUtil.addCustomAnnotations;
 
 import java.io.Closeable;
 import java.io.File;
-import java.io.FileInputStream;
-import java.io.FileOutputStream;
 import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.nio.file.Files;
 import java.sql.SQLException;
 import java.util.ArrayList;
 import java.util.Collections;
@@ -148,7 +149,7 @@ public class ServerCacheClient {
 } catch (InsufficientMemoryException e) {
 this.outputFile = 
File.createTempFile("HashJoinCacheSpooler", ".bin", new File(services.getProps()
 .get(QueryServices.SPOOL_DIRECTORY, 
QueryServicesOptions.DEFAULT_SPOOL_DIRECTORY)));
-try (FileOutputStream fio = new 
FileOutputStream(outputFile)) {
+try (OutputStream fio = 
Files.newOutputStream(outputFile.toPath())) {
 fio.write(cachePtr.get(), cachePtr.getOffset(), 
cachePtr.getLength());
 }
 }
@@ -158,7 +159,7 @@ public class ServerCacheClient {
 
 public ImmutableBytesWritable getCachePtr() throws IOException {
 if(this.outputFile!=null){
-try (FileInputStream fio = new FileInputStream(outputFile)) {
+try (InputStream fio = 
Files.newInputStream(outputFile.toPath())) {
 byte[] b = new byte[this.size];
 fio.read(b);
 cachePtr = new ImmutableBytesWritable(b);
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/BufferedQueue.java 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/BufferedQueue.java
index 1a646e6..3352641 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/iterate/BufferedQueue.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/iterate/BufferedQueue.java
@@ -23,9 +23,8 @@ import java.io.Closeable;
 import java.io.DataInputStream;
 import java.io.DataOutputStream;
 import java.io.File;
-import java.io.FileInputStream;
-import java.io.FileOutputStream;
 import java.io.IOException;
+import java.nio.file.Files;
 import java.util.AbstractQueue;
 import java.util.Comparator;
 import java.util.Iterator;
@@ -304,7 +303,7 @@ public abstract class BufferedQueue extends 
AbstractQueue implements SizeA
 if (totalResultSize >= thresholdBytes) {
 this.file = File.createTempFile(UUID.randomUUID().toString(), 
null);
 try (DataOutputStream out = new DataOutputStream(
-new BufferedOutputStream(new FileOutputStream(file)))) {
+new BufferedOutputStream(Files.newOutputStream(file.toPath())))) {
 int resSize = inMemQueue.size();
 for (int i = 0; i < resSize; i++) {
 T e = inMemQueue.poll();
@@ -342,7 +341,7 @@ public abstract class BufferedQueue extends 
AbstractQueue implements SizeA
 this.next = null;
 try {
 this.in = new DataInputStream(
-new BufferedInputStream(new 
FileInputStream(file)));
+new BufferedInputStream(Files.newInputStream(file.toPath())));
 } catch (IOException e) {
 throw new RuntimeException(e);
 }
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/SpoolingResultIterator.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/SpoolingResultIterator.java
index fa90b1a..0823026 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/iterate/SpoolingResultIterator.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/iterate/S

[phoenix] branch master updated: PHOENIX-5235: Update SQLline version to the latest

2019-04-22 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 4ef749d  PHOENIX-5235: Update SQLline version to the latest
4ef749d is described below

commit 4ef749dd92d4af7529a07d343cb0186302979a14
Author: s.kadam 
AuthorDate: Fri Apr 19 15:05:27 2019 -0700

PHOENIX-5235: Update SQLline version to the latest
---
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pom.xml b/pom.xml
index 1440beb..77a800e 100644
--- a/pom.xml
+++ b/pom.xml
@@ -75,7 +75,7 @@
 3.8
 1.2
 1.0
-1.2.0
+1.7.0
 13.0.1
 1.4.0
 0.9.0.0



[phoenix] branch 4.x-HBase-1.2 updated: PHOENIX-5235: Update SQLline version to the latest

2019-04-22 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.2 by this push:
 new b4d8cc0  PHOENIX-5235: Update SQLline version to the latest
b4d8cc0 is described below

commit b4d8cc05b53945f50fee060a806b0936a7e071a5
Author: s.kadam 
AuthorDate: Fri Apr 19 15:10:11 2019 -0700

PHOENIX-5235: Update SQLline version to the latest
---
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pom.xml b/pom.xml
index 30e48cc..32859ca 100644
--- a/pom.xml
+++ b/pom.xml
@@ -76,7 +76,7 @@
 2.5
 1.2
 1.0
-1.2.0
+1.5.0
 13.0.1
 1.4.0
 0.9.0.0



[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5235: Update SQLline version to the latest

2019-04-22 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 100a61a  PHOENIX-5235: Update SQLline version to the latest
100a61a is described below

commit 100a61ac0764a0993eeb9d2d46f2e455f92d6f2b
Author: s.kadam 
AuthorDate: Fri Apr 19 15:10:11 2019 -0700

PHOENIX-5235: Update SQLline version to the latest
---
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pom.xml b/pom.xml
index 0e7b414..4ac9e29 100644
--- a/pom.xml
+++ b/pom.xml
@@ -76,7 +76,7 @@
 2.5
 1.2
 1.0
-1.2.0
+1.5.0
 13.0.1
 1.4.0
 0.9.0.0



[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5235: Update SQLline version to the latest

2019-04-22 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 87919c8  PHOENIX-5235: Update SQLline version to the latest
87919c8 is described below

commit 87919c8071c9c4e67826e8f0b9b8d845fda85e4b
Author: s.kadam 
AuthorDate: Fri Apr 19 15:10:11 2019 -0700

PHOENIX-5235: Update SQLline version to the latest
---
 pom.xml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/pom.xml b/pom.xml
index 3d0b1c6..e0ec224 100644
--- a/pom.xml
+++ b/pom.xml
@@ -76,7 +76,7 @@
 2.5
 1.2
 1.0
-1.2.0
+1.5.0
 13.0.1
 1.4.0
 0.9.0.0



[phoenix] branch master updated: PHOENIX-5252 Add job priority option to UpdateStatisticsTool

2019-04-23 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new a3fbbb8  PHOENIX-5252 Add job priority option to UpdateStatisticsTool
a3fbbb8 is described below

commit a3fbbb8fa098ae1420823bc03e5f9b5203a22962
Author: Xinyi Yan 
AuthorDate: Fri Apr 19 17:25:02 2019 -0700

PHOENIX-5252 Add job priority option to UpdateStatisticsTool
---
 .../phoenix/schema/stats/UpdateStatisticsTool.java | 32 +-
 .../schema/stats/UpdateStatisticsToolTest.java | 15 ++
 2 files changed, 46 insertions(+), 1 deletion(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/stats/UpdateStatisticsTool.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/stats/UpdateStatisticsTool.java
index 88b0f0a..110682d 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/stats/UpdateStatisticsTool.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/stats/UpdateStatisticsTool.java
@@ -35,6 +35,7 @@ import org.apache.hadoop.hbase.metrics.Gauge;
 import org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl;
 import org.apache.hadoop.io.NullWritable;
 import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.JobPriority;
 import org.apache.hadoop.mapreduce.Mapper;
 import org.apache.hadoop.mapreduce.lib.db.DBInputFormat.NullDBWritable;
 import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
@@ -78,6 +79,8 @@ public class UpdateStatisticsTool extends Configured 
implements Tool {
 "HBase Snapshot Name");
 private static final Option RESTORE_DIR_OPTION = new Option("d", 
"restore-dir", true,
 "Restore Directory for HBase snapshot");
+private static final Option JOB_PRIORITY_OPTION = new Option("p", 
"job-priority", true,
+"Define job priority from 0(highest) to 4");
 private static final Option RUN_FOREGROUND_OPTION =
 new Option("runfg", "run-foreground", false,
 "If specified, runs UpdateStatisticsTool in Foreground. 
Default - Runs the build in background");
@@ -90,6 +93,7 @@ public class UpdateStatisticsTool extends Configured 
implements Tool {
 private String tableName;
 private String snapshotName;
 private Path restoreDir;
+private JobPriority jobPriority;
 private boolean manageSnapshot;
 private boolean isForeground;
 
@@ -164,12 +168,35 @@ public class UpdateStatisticsTool extends Configured 
implements Tool {
 if (restoreDirOptionValue == null) {
 restoreDirOptionValue = getConf().get(FS_DEFAULT_NAME_KEY) + 
"/tmp";
 }
-
+
+jobPriority = getJobPriority(cmdLine);
+
 restoreDir = new Path(restoreDirOptionValue);
 manageSnapshot = cmdLine.hasOption(MANAGE_SNAPSHOT_OPTION.getOpt());
 isForeground = cmdLine.hasOption(RUN_FOREGROUND_OPTION.getOpt());
 }
 
+public String getJobPriority() {
+return this.jobPriority.toString();
+}
+
+private JobPriority getJobPriority(CommandLine cmdLine) {
+String jobPriorityOption = 
cmdLine.getOptionValue(JOB_PRIORITY_OPTION.getOpt());
+ if (jobPriorityOption == null) {
+ return JobPriority.NORMAL;
+ }
+
+ switch (jobPriorityOption) {
+ case "0" : return JobPriority.VERY_HIGH;
+ case "1" : return JobPriority.HIGH;
+ case "2" : return JobPriority.NORMAL;
+ case "3" : return JobPriority.LOW;
+ case "4" : return JobPriority.VERY_LOW;
+ default:
+ return JobPriority.NORMAL;
+ }
+}
+
 private void configureJob() throws Exception {
 job = Job.getInstance(getConf(),
 "UpdateStatistics-" + tableName + "-" + snapshotName);
@@ -187,6 +214,8 @@ public class UpdateStatisticsTool extends Configured 
implements Tool {
 job.setMapOutputValueClass(NullWritable.class);
 job.setOutputFormatClass(NullOutputFormat.class);
 job.setNumReduceTasks(0);
+job.setPriority(this.jobPriority);
+
 TableMapReduceUtil.addDependencyJars(job);
 TableMapReduceUtil.addDependencyJarsForClasses(job.getConfiguration(), 
PhoenixConnection.class, Chronology.class,
 CharStream.class, TransactionSystemClient.class, 
TransactionNotInProgressException.class,
@@ -265,6 +294,7 @@ public class UpdateStatisticsTool extends Configured 
implements Tool {
 options.addOption(SNAPSHOT_NAME_OPTION);
 options.addOption(HELP_OPTION);
 options.addOption(RESTORE_DIR_OPTION);
+options.addOption(JOB_PRIORITY_OPTION);
   

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5252 Add job priority option to UpdateStatisticsTool

2019-04-23 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new c3aa9bf  PHOENIX-5252 Add job priority option to UpdateStatisticsTool
c3aa9bf is described below

commit c3aa9bff06a2f966ee40d9d6faa02b71dd708fde
Author: Xinyi Yan 
AuthorDate: Mon Apr 22 11:44:30 2019 -0700

PHOENIX-5252 Add job priority option to UpdateStatisticsTool
---
 .../phoenix/schema/stats/UpdateStatisticsTool.java | 34 +++---
 .../schema/stats/UpdateStatisticsToolTest.java | 15 ++
 2 files changed, 45 insertions(+), 4 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/stats/UpdateStatisticsTool.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/stats/UpdateStatisticsTool.java
index dfb5d13..c55c07e 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/schema/stats/UpdateStatisticsTool.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/schema/stats/UpdateStatisticsTool.java
@@ -17,7 +17,6 @@
  */
 package org.apache.phoenix.schema.stats;
 
-import com.google.common.annotations.VisibleForTesting;
 import org.antlr.runtime.CharStream;
 import org.apache.commons.cli.CommandLine;
 import org.apache.commons.cli.CommandLineParser;
@@ -33,9 +32,9 @@ import org.apache.hadoop.hbase.client.HBaseAdmin;
 import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
 import org.apache.hadoop.hbase.metrics.Gauge;
 import org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl;
-import org.apache.hadoop.hbase.protobuf.generated.HBaseProtos;
 import org.apache.hadoop.io.NullWritable;
 import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.JobPriority;
 import org.apache.hadoop.mapreduce.Mapper;
 import org.apache.hadoop.mapreduce.lib.db.DBInputFormat.NullDBWritable;
 import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;
@@ -62,7 +61,6 @@ import org.slf4j.LoggerFactory;
 
 import static 
org.apache.hadoop.fs.CommonConfigurationKeysPublic.FS_DEFAULT_NAME_KEY;
 import java.sql.Connection;
-import java.util.List;
 
 import static 
org.apache.phoenix.query.QueryServices.IS_NAMESPACE_MAPPING_ENABLED;
 import static 
org.apache.phoenix.query.QueryServicesOptions.DEFAULT_IS_NAMESPACE_MAPPING_ENABLED;
@@ -80,6 +78,8 @@ public class UpdateStatisticsTool extends Configured 
implements Tool {
 "HBase Snapshot Name");
 private static final Option RESTORE_DIR_OPTION = new Option("d", 
"restore-dir", true,
 "Restore Directory for HBase snapshot");
+private static final Option JOB_PRIORITY_OPTION = new Option("p", 
"job-priority", true,
+"Define job priority from 0(highest) to 6");
 private static final Option RUN_FOREGROUND_OPTION =
 new Option("runfg", "run-foreground", false,
 "If specified, runs UpdateStatisticsTool in Foreground. 
Default - Runs the build in background");
@@ -92,6 +92,7 @@ public class UpdateStatisticsTool extends Configured 
implements Tool {
 private String tableName;
 private String snapshotName;
 private Path restoreDir;
+private JobPriority jobPriority;
 private boolean manageSnapshot;
 private boolean isForeground;
 
@@ -166,12 +167,34 @@ public class UpdateStatisticsTool extends Configured 
implements Tool {
 if (restoreDirOptionValue == null) {
 restoreDirOptionValue = getConf().get(FS_DEFAULT_NAME_KEY) + 
"/tmp";
 }
-
+
+jobPriority = getJobPriority(cmdLine);
 restoreDir = new Path(restoreDirOptionValue);
 manageSnapshot = cmdLine.hasOption(MANAGE_SNAPSHOT_OPTION.getOpt());
 isForeground = cmdLine.hasOption(RUN_FOREGROUND_OPTION.getOpt());
 }
 
+public String getJobPriority() {
+return this.jobPriority.toString();
+}
+
+private JobPriority getJobPriority(CommandLine cmdLine) {
+String jobPriorityOption = 
cmdLine.getOptionValue(JOB_PRIORITY_OPTION.getOpt());
+if (jobPriorityOption == null) {
+return JobPriority.NORMAL;
+}
+
+switch (jobPriorityOption) {
+case "0" : return JobPriority.VERY_HIGH;
+case "1" : return JobPriority.HIGH;
+case "2" : return JobPriority.NORMAL;
+case "3" : return JobPriority.LOW;
+case "4" : return JobPriority.VERY_LOW;
+default:
+return JobPriority.NORMAL;
+}
+}
+
 private void configureJob() throws Exception {
 job = Job.getInstance(getConf(),
 "UpdateStatistics-" + tableName + "-" + snapshotName);
@@ -189,6 +212,8 @@ publi

[phoenix] branch master updated: PHOENIX-5262 Wrong Result on Salted table with some Variable Length PKs

2019-05-02 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 4eec41f  PHOENIX-5262 Wrong Result on Salted table with some Variable 
Length PKs
4eec41f is described below

commit 4eec41f3f2b04865b6d59ebd3fbd3aa1e0a0fd80
Author: Daniel 
AuthorDate: Fri Apr 26 18:13:49 2019 -0700

PHOENIX-5262 Wrong Result on Salted table with some Variable Length PKs
---
 .../salted/SaltedTableVarLengthRowKeyIT.java   |  82 +++
 .../java/org/apache/phoenix/util/ScanUtil.java |   4 +-
 .../apache/phoenix/compile/WhereCompilerTest.java  |   5 +-
 .../java/org/apache/phoenix/util/ScanUtilTest.java | 571 +++--
 4 files changed, 382 insertions(+), 280 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableVarLengthRowKeyIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableVarLengthRowKeyIT.java
index fa43876..85d518d 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableVarLengthRowKeyIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableVarLengthRowKeyIT.java
@@ -19,16 +19,20 @@
 package org.apache.phoenix.end2end.salted;
 
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertArrayEquals;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 
+import java.sql.Array;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
+import java.util.Arrays;
 import java.util.Properties;
 
+import org.apache.commons.lang.ArrayUtils;
 import org.apache.phoenix.end2end.ParallelStatsDisabledIT;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.junit.Test;
@@ -87,4 +91,82 @@ public class SaltedTableVarLengthRowKeyIT extends 
ParallelStatsDisabledIT {
 conn.close();
 }
 }
+
+@Test
+public void testSaltedVarbinaryUpperBoundQuery() throws Exception {
+String tableName = generateUniqueName();
+String ddl = "CREATE TABLE " + tableName +
+" ( k VARBINARY PRIMARY KEY, a INTEGER ) SALT_BUCKETS = 3";
+String dml = "UPSERT INTO " + tableName + " values (?, ?)";
+String sql2 = "SELECT * FROM " + tableName + " WHERE k = ?";
+
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+conn.createStatement().execute(ddl);
+PreparedStatement stmt = conn.prepareStatement(dml);
+stmt.setInt(2, 1);
+
+stmt.setBytes(1, new byte[] { 5 });
+stmt.executeUpdate();
+stmt.setBytes(1, new byte[] { 5, 0 });
+stmt.executeUpdate();
+stmt.setBytes(1, new byte[] { 5, 1 });
+stmt.executeUpdate();
+stmt.close();
+conn.commit();
+
+stmt = conn.prepareStatement(sql2);
+stmt.setBytes(1, new byte[] { 5 });
+ResultSet rs = stmt.executeQuery();
+
+assertTrue(rs.next());
+assertArrayEquals(new byte[] {5},rs.getBytes(1));
+assertEquals(1,rs.getInt(2));
+assertFalse(rs.next());
+stmt.close();
+}
+}
+
+@Test
+public void testSaltedArrayTypeUpperBoundQuery() throws Exception {
+String tableName = generateUniqueName();
+String ddl = "CREATE TABLE " + tableName +
+" ( k TINYINT ARRAY[10] PRIMARY KEY, a INTEGER ) SALT_BUCKETS 
= 3";
+String dml = "UPSERT INTO " + tableName + " values (?, ?)";
+String sql2 = "SELECT * FROM " + tableName + " WHERE k = ?";
+
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+conn.createStatement().execute(ddl);
+PreparedStatement stmt = conn.prepareStatement(dml);
+stmt.setInt(2, 1);
+
+Byte[] byteArray1 = ArrayUtils.toObject(new byte[] {5});
+Byte[] byteArray2 = ArrayUtils.toObject(new byte[] {5, -128});
+Byte[] byteArray3 = ArrayUtils.toObject(new byte[] {5, -127});
+
+
+Array array1 = conn.createArrayOf("TINYINT", byteArray1);
+Array array2 = conn.createArrayOf("TINYINT", byteArray2);
+Array array3 = conn.createArrayOf("TINYINT", byteArray3);
+
+stmt.setArray(1,array1);
+stmt.executeUpdate();
+stmt.setArray(1,array2);
+stmt.executeUpdate();
+stmt.setArray(1,array3);
+stmt.executeUpdate();
+stmt.close();
+conn.commit();
+
+stmt = 
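
The wrong-result case these tests pin down hinges on how HBase orders
variable-length row keys: {5} sorts strictly before {5, 0}, so a point lookup
on {5} needs an exclusive upper bound of exactly {5, 0x00}; any wider bound
would also match the longer keys upserted above. A toy illustration of that
ordering, with no Phoenix dependencies (class name hypothetical):

    public class KeyOrderDemo {
        // Unsigned lexicographic compare, the ordering HBase uses for row keys.
        static int compare(byte[] a, byte[] b) {
            int n = Math.min(a.length, b.length);
            for (int i = 0; i < n; i++) {
                int d = (a[i] & 0xff) - (b[i] & 0xff);
                if (d != 0) {
                    return d;
                }
            }
            return a.length - b.length; // on a shared prefix, the shorter key sorts first
        }

        public static void main(String[] args) {
            byte[] k  = {5};
            byte[] k0 = {5, 0};
            byte[] k1 = {5, 1};
            // A scan for exactly {5} is the range [{5}, {5, 0x00}); pushing the
            // upper bound past {5, 0x00} would also return {5, 0} and {5, 1}.
            System.out.println(compare(k, k0) < 0);  // true
            System.out.println(compare(k0, k1) < 0); // true
        }
    }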

[phoenix] branch 4.x-HBase-1.2 updated: PHOENIX-5262 Wrong Result on Salted table with some Variable Length PKs

2019-05-02 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.2
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.2 by this push:
 new 6ef5389  PHOENIX-5262 Wrong Result on Salted table with some Variable 
Length PKs
6ef5389 is described below

commit 6ef53892a55357505ff9894bd80d11ff53405191
Author: Daniel 
AuthorDate: Fri Apr 26 18:13:49 2019 -0700

PHOENIX-5262 Wrong Result on Salted table with some Variable Length PKs
---
 .../salted/SaltedTableVarLengthRowKeyIT.java   |  82 +++
 .../java/org/apache/phoenix/util/ScanUtil.java |   4 +-
 .../apache/phoenix/compile/WhereCompilerTest.java  |   5 +-
 .../java/org/apache/phoenix/util/ScanUtilTest.java | 571 +++--
 4 files changed, 382 insertions(+), 280 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableVarLengthRowKeyIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableVarLengthRowKeyIT.java
index fa43876..85d518d 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableVarLengthRowKeyIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableVarLengthRowKeyIT.java
@@ -19,16 +19,20 @@
 package org.apache.phoenix.end2end.salted;
 
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertArrayEquals;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 
+import java.sql.Array;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
+import java.util.Arrays;
 import java.util.Properties;
 
+import org.apache.commons.lang.ArrayUtils;
 import org.apache.phoenix.end2end.ParallelStatsDisabledIT;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.junit.Test;
@@ -87,4 +91,82 @@ public class SaltedTableVarLengthRowKeyIT extends 
ParallelStatsDisabledIT {
 conn.close();
 }
 }
+
+@Test
+public void testSaltedVarbinaryUpperBoundQuery() throws Exception {
+String tableName = generateUniqueName();
+String ddl = "CREATE TABLE " + tableName +
+" ( k VARBINARY PRIMARY KEY, a INTEGER ) SALT_BUCKETS = 3";
+String dml = "UPSERT INTO " + tableName + " values (?, ?)";
+String sql2 = "SELECT * FROM " + tableName + " WHERE k = ?";
+
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+conn.createStatement().execute(ddl);
+PreparedStatement stmt = conn.prepareStatement(dml);
+stmt.setInt(2, 1);
+
+stmt.setBytes(1, new byte[] { 5 });
+stmt.executeUpdate();
+stmt.setBytes(1, new byte[] { 5, 0 });
+stmt.executeUpdate();
+stmt.setBytes(1, new byte[] { 5, 1 });
+stmt.executeUpdate();
+stmt.close();
+conn.commit();
+
+stmt = conn.prepareStatement(sql2);
+stmt.setBytes(1, new byte[] { 5 });
+ResultSet rs = stmt.executeQuery();
+
+assertTrue(rs.next());
+assertArrayEquals(new byte[] {5},rs.getBytes(1));
+assertEquals(1,rs.getInt(2));
+assertFalse(rs.next());
+stmt.close();
+}
+}
+
+@Test
+public void testSaltedArrayTypeUpperBoundQuery() throws Exception {
+String tableName = generateUniqueName();
+String ddl = "CREATE TABLE " + tableName +
+" ( k TINYINT ARRAY[10] PRIMARY KEY, a INTEGER ) SALT_BUCKETS 
= 3";
+String dml = "UPSERT INTO " + tableName + " values (?, ?)";
+String sql2 = "SELECT * FROM " + tableName + " WHERE k = ?";
+
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+conn.createStatement().execute(ddl);
+PreparedStatement stmt = conn.prepareStatement(dml);
+stmt.setInt(2, 1);
+
+Byte[] byteArray1 = ArrayUtils.toObject(new byte[] {5});
+Byte[] byteArray2 = ArrayUtils.toObject(new byte[] {5, -128});
+Byte[] byteArray3 = ArrayUtils.toObject(new byte[] {5, -127});
+
+
+Array array1 = conn.createArrayOf("TINYINT", byteArray1);
+Array array2 = conn.createArrayOf("TINYINT", byteArray2);
+Array array3 = conn.createArrayOf("TINYINT", byteArray3);
+
+stmt.setArray(1,array1);
+stmt.executeUpdate();
+stmt.setArray(1,array2);
+stmt.executeUpdate();
+stmt.setArray(1,array3);
+stmt.executeUpdate();
+stmt.close();
+

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5262 Wrong Result on Salted table with some Variable Length PKs

2019-05-02 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new d844949  PHOENIX-5262 Wrong Result on Salted table with some Variable 
Length PKs
d844949 is described below

commit d8449493b59097857643a194ed69dfdfaf04d31e
Author: Daniel 
AuthorDate: Fri Apr 26 18:13:49 2019 -0700

PHOENIX-5262 Wrong Result on Salted table with some Variable Length PKs
---
 .../salted/SaltedTableVarLengthRowKeyIT.java   |  82 +++
 .../java/org/apache/phoenix/util/ScanUtil.java |   4 +-
 .../apache/phoenix/compile/WhereCompilerTest.java  |   5 +-
 .../java/org/apache/phoenix/util/ScanUtilTest.java | 571 +++--
 4 files changed, 382 insertions(+), 280 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableVarLengthRowKeyIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableVarLengthRowKeyIT.java
index fa43876..85d518d 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableVarLengthRowKeyIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableVarLengthRowKeyIT.java
@@ -19,16 +19,20 @@
 package org.apache.phoenix.end2end.salted;
 
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertArrayEquals;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 
+import java.sql.Array;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
+import java.util.Arrays;
 import java.util.Properties;
 
+import org.apache.commons.lang.ArrayUtils;
 import org.apache.phoenix.end2end.ParallelStatsDisabledIT;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.junit.Test;
@@ -87,4 +91,82 @@ public class SaltedTableVarLengthRowKeyIT extends 
ParallelStatsDisabledIT {
 conn.close();
 }
 }
+
+@Test
+public void testSaltedVarbinaryUpperBoundQuery() throws Exception {
+String tableName = generateUniqueName();
+String ddl = "CREATE TABLE " + tableName +
+" ( k VARBINARY PRIMARY KEY, a INTEGER ) SALT_BUCKETS = 3";
+String dml = "UPSERT INTO " + tableName + " values (?, ?)";
+String sql2 = "SELECT * FROM " + tableName + " WHERE k = ?";
+
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+conn.createStatement().execute(ddl);
+PreparedStatement stmt = conn.prepareStatement(dml);
+stmt.setInt(2, 1);
+
+stmt.setBytes(1, new byte[] { 5 });
+stmt.executeUpdate();
+stmt.setBytes(1, new byte[] { 5, 0 });
+stmt.executeUpdate();
+stmt.setBytes(1, new byte[] { 5, 1 });
+stmt.executeUpdate();
+stmt.close();
+conn.commit();
+
+stmt = conn.prepareStatement(sql2);
+stmt.setBytes(1, new byte[] { 5 });
+ResultSet rs = stmt.executeQuery();
+
+assertTrue(rs.next());
+assertArrayEquals(new byte[] {5},rs.getBytes(1));
+assertEquals(1,rs.getInt(2));
+assertFalse(rs.next());
+stmt.close();
+}
+}
+
+@Test
+public void testSaltedArrayTypeUpperBoundQuery() throws Exception {
+String tableName = generateUniqueName();
+String ddl = "CREATE TABLE " + tableName +
+" ( k TINYINT ARRAY[10] PRIMARY KEY, a INTEGER ) SALT_BUCKETS 
= 3";
+String dml = "UPSERT INTO " + tableName + " values (?, ?)";
+String sql2 = "SELECT * FROM " + tableName + " WHERE k = ?";
+
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+conn.createStatement().execute(ddl);
+PreparedStatement stmt = conn.prepareStatement(dml);
+stmt.setInt(2, 1);
+
+Byte[] byteArray1 = ArrayUtils.toObject(new byte[] {5});
+Byte[] byteArray2 = ArrayUtils.toObject(new byte[] {5, -128});
+Byte[] byteArray3 = ArrayUtils.toObject(new byte[] {5, -127});
+
+
+Array array1 = conn.createArrayOf("TINYINT", byteArray1);
+Array array2 = conn.createArrayOf("TINYINT", byteArray2);
+Array array3 = conn.createArrayOf("TINYINT", byteArray3);
+
+stmt.setArray(1,array1);
+stmt.executeUpdate();
+stmt.setArray(1,array2);
+stmt.executeUpdate();
+stmt.setArray(1,array3);
+stmt.executeUpdate();
+stmt.close();
+

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5262 Wrong Result on Salted table with some Variable Length PKs

2019-05-02 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 3f66520  PHOENIX-5262 Wrong Result on Salted table with some Variable 
Length PKs
3f66520 is described below

commit 3f66520be9d1a52f91ca4b4163c5a1b0ad9a9905
Author: Daniel 
AuthorDate: Fri Apr 26 18:13:49 2019 -0700

PHOENIX-5262 Wrong Result on Salted table with some Variable Length PKs
---
 .../salted/SaltedTableVarLengthRowKeyIT.java   |  82 +++
 .../java/org/apache/phoenix/util/ScanUtil.java |   4 +-
 .../apache/phoenix/compile/WhereCompilerTest.java  |   5 +-
 .../java/org/apache/phoenix/util/ScanUtilTest.java | 571 +++--
 4 files changed, 382 insertions(+), 280 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableVarLengthRowKeyIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableVarLengthRowKeyIT.java
index fa43876..85d518d 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableVarLengthRowKeyIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableVarLengthRowKeyIT.java
@@ -19,16 +19,20 @@
 package org.apache.phoenix.end2end.salted;
 
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertArrayEquals;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 
+import java.sql.Array;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
+import java.util.Arrays;
 import java.util.Properties;
 
+import org.apache.commons.lang.ArrayUtils;
 import org.apache.phoenix.end2end.ParallelStatsDisabledIT;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.junit.Test;
@@ -87,4 +91,82 @@ public class SaltedTableVarLengthRowKeyIT extends 
ParallelStatsDisabledIT {
 conn.close();
 }
 }
+
+@Test
+public void testSaltedVarbinaryUpperBoundQuery() throws Exception {
+String tableName = generateUniqueName();
+String ddl = "CREATE TABLE " + tableName +
+" ( k VARBINARY PRIMARY KEY, a INTEGER ) SALT_BUCKETS = 3";
+String dml = "UPSERT INTO " + tableName + " values (?, ?)";
+String sql2 = "SELECT * FROM " + tableName + " WHERE k = ?";
+
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+conn.createStatement().execute(ddl);
+PreparedStatement stmt = conn.prepareStatement(dml);
+stmt.setInt(2, 1);
+
+stmt.setBytes(1, new byte[] { 5 });
+stmt.executeUpdate();
+stmt.setBytes(1, new byte[] { 5, 0 });
+stmt.executeUpdate();
+stmt.setBytes(1, new byte[] { 5, 1 });
+stmt.executeUpdate();
+stmt.close();
+conn.commit();
+
+stmt = conn.prepareStatement(sql2);
+stmt.setBytes(1, new byte[] { 5 });
+ResultSet rs = stmt.executeQuery();
+
+assertTrue(rs.next());
+assertArrayEquals(new byte[] {5},rs.getBytes(1));
+assertEquals(1,rs.getInt(2));
+assertFalse(rs.next());
+stmt.close();
+}
+}
+
+@Test
+public void testSaltedArrayTypeUpperBoundQuery() throws Exception {
+String tableName = generateUniqueName();
+String ddl = "CREATE TABLE " + tableName +
+" ( k TINYINT ARRAY[10] PRIMARY KEY, a INTEGER ) SALT_BUCKETS 
= 3";
+String dml = "UPSERT INTO " + tableName + " values (?, ?)";
+String sql2 = "SELECT * FROM " + tableName + " WHERE k = ?";
+
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+conn.createStatement().execute(ddl);
+PreparedStatement stmt = conn.prepareStatement(dml);
+stmt.setInt(2, 1);
+
+Byte[] byteArray1 = ArrayUtils.toObject(new byte[] {5});
+Byte[] byteArray2 = ArrayUtils.toObject(new byte[] {5, -128});
+Byte[] byteArray3 = ArrayUtils.toObject(new byte[] {5, -127});
+
+
+Array array1 = conn.createArrayOf("TINYINT", byteArray1);
+Array array2 = conn.createArrayOf("TINYINT", byteArray2);
+Array array3 = conn.createArrayOf("TINYINT", byteArray3);
+
+stmt.setArray(1,array1);
+stmt.executeUpdate();
+stmt.setArray(1,array2);
+stmt.executeUpdate();
+stmt.setArray(1,array3);
+stmt.executeUpdate();
+stmt.close();
+

[phoenix] branch phoenix-stats updated: PHOENIX-5262 Wrong Result on Salted table with some Variable Length PKs

2019-05-02 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch phoenix-stats
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/phoenix-stats by this push:
 new 14b9f80  PHOENIX-5262 Wrong Result on Salted table with some Variable 
Length PKs
14b9f80 is described below

commit 14b9f80eb19f87597bb089a806cc689ecd641958
Author: Daniel 
AuthorDate: Fri Apr 26 18:13:49 2019 -0700

PHOENIX-5262 Wrong Result on Salted table with some Variable Length PKs
---
 .../salted/SaltedTableVarLengthRowKeyIT.java   |  82 +++
 .../java/org/apache/phoenix/util/ScanUtil.java |   4 +-
 .../apache/phoenix/compile/WhereCompilerTest.java  |   5 +-
 .../java/org/apache/phoenix/util/ScanUtilTest.java | 571 +++--
 4 files changed, 382 insertions(+), 280 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableVarLengthRowKeyIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableVarLengthRowKeyIT.java
index fa43876..85d518d 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableVarLengthRowKeyIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/salted/SaltedTableVarLengthRowKeyIT.java
@@ -19,16 +19,20 @@
 package org.apache.phoenix.end2end.salted;
 
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertArrayEquals;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
 
+import java.sql.Array;
 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.ResultSet;
+import java.util.Arrays;
 import java.util.Properties;
 
+import org.apache.commons.lang.ArrayUtils;
 import org.apache.phoenix.end2end.ParallelStatsDisabledIT;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.junit.Test;
@@ -87,4 +91,82 @@ public class SaltedTableVarLengthRowKeyIT extends 
ParallelStatsDisabledIT {
 conn.close();
 }
 }
+
+@Test
+public void testSaltedVarbinaryUpperBoundQuery() throws Exception {
+String tableName = generateUniqueName();
+String ddl = "CREATE TABLE " + tableName +
+" ( k VARBINARY PRIMARY KEY, a INTEGER ) SALT_BUCKETS = 3";
+String dml = "UPSERT INTO " + tableName + " values (?, ?)";
+String sql2 = "SELECT * FROM " + tableName + " WHERE k = ?";
+
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+conn.createStatement().execute(ddl);
+PreparedStatement stmt = conn.prepareStatement(dml);
+stmt.setInt(2, 1);
+
+stmt.setBytes(1, new byte[] { 5 });
+stmt.executeUpdate();
+stmt.setBytes(1, new byte[] { 5, 0 });
+stmt.executeUpdate();
+stmt.setBytes(1, new byte[] { 5, 1 });
+stmt.executeUpdate();
+stmt.close();
+conn.commit();
+
+stmt = conn.prepareStatement(sql2);
+stmt.setBytes(1, new byte[] { 5 });
+ResultSet rs = stmt.executeQuery();
+
+assertTrue(rs.next());
+assertArrayEquals(new byte[] {5},rs.getBytes(1));
+assertEquals(1,rs.getInt(2));
+assertFalse(rs.next());
+stmt.close();
+}
+}
+
+@Test
+public void testSaltedArrayTypeUpperBoundQuery() throws Exception {
+String tableName = generateUniqueName();
+String ddl = "CREATE TABLE " + tableName +
+" ( k TINYINT ARRAY[10] PRIMARY KEY, a INTEGER ) SALT_BUCKETS 
= 3";
+String dml = "UPSERT INTO " + tableName + " values (?, ?)";
+String sql2 = "SELECT * FROM " + tableName + " WHERE k = ?";
+
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+conn.createStatement().execute(ddl);
+PreparedStatement stmt = conn.prepareStatement(dml);
+stmt.setInt(2, 1);
+
+Byte[] byteArray1 = ArrayUtils.toObject(new byte[] {5});
+Byte[] byteArray2 = ArrayUtils.toObject(new byte[] {5, -128});
+Byte[] byteArray3 = ArrayUtils.toObject(new byte[] {5, -127});
+
+
+Array array1 = conn.createArrayOf("TINYINT", byteArray1);
+Array array2 = conn.createArrayOf("TINYINT", byteArray2);
+Array array3 = conn.createArrayOf("TINYINT", byteArray3);
+
+stmt.setArray(1,array1);
+stmt.executeUpdate();
+stmt.setArray(1,array2);
+stmt.executeUpdate();
+stmt.setArray(1,array3);
+stmt.executeUpdate();
+stmt.close();
+

[phoenix] branch phoenix-stats updated: Correct incorrectly formed UTs with wrong sized slotSpan.

2019-05-10 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch phoenix-stats
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/phoenix-stats by this push:
 new b744688  Correct incorrectly formed UTs with wrong sized slotSpan.
b744688 is described below

commit b744688b6123745e0b293ca0dfa2567d56acd883
Author: Daniel 
AuthorDate: Tue May 7 11:42:57 2019 -0700

Correct incorrectly formed UTs with wrong sized slotSpan.
---
 .../java/org/apache/phoenix/compile/ScanRangesIntersectTest.java| 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git 
a/phoenix-core/src/test/java/org/apache/phoenix/compile/ScanRangesIntersectTest.java
 
b/phoenix-core/src/test/java/org/apache/phoenix/compile/ScanRangesIntersectTest.java
index df1bade..faf7a7d 100644
--- 
a/phoenix-core/src/test/java/org/apache/phoenix/compile/ScanRangesIntersectTest.java
+++ 
b/phoenix-core/src/test/java/org/apache/phoenix/compile/ScanRangesIntersectTest.java
@@ -295,7 +295,6 @@ public class ScanRangesIntersectTest {
 public void getRowKeyRangesTestNotFullyQualifiedRowKeyLookUp() {
 int rowKeySchemaFields = 2;
 RowKeySchema schema = buildSimpleRowKeySchema(rowKeySchemaFields);
-int[] slotSpan = new int[rowKeySchemaFields];
 
 String keyString1 = "A";
 String keyString2 = "B";
@@ -306,6 +305,8 @@ public class ScanRangesIntersectTest {
 List<List<KeyRange>> ranges = new ArrayList<>();
 ranges.add(Lists.newArrayList(rangeKeyRange1, rangeKeyRange2));
 
+int[] slotSpan = new int[ranges.size()];
+
 ScanRanges scanRanges = ScanRanges.create(schema, ranges, slotSpan, 
null, true, -1);
 
 List<KeyRange> rowKeyRanges = scanRanges.getRowKeyRanges();
@@ -401,7 +402,6 @@ public class ScanRangesIntersectTest {
 public void getRowKeyRangesAdjacentSubRanges() {
 int rowKeySchemaFields = 2;
 RowKeySchema schema = buildSimpleRowKeySchema(rowKeySchemaFields);
-int[] slotSpan = new int[rowKeySchemaFields];
 
 List<KeyRange> keyRanges = new ArrayList<>();
 keyRanges.add(KeyRange.getKeyRange(stringToByteArray("A"), true,
@@ -416,6 +416,8 @@ public class ScanRangesIntersectTest {
 List<List<KeyRange>> ranges = new ArrayList<>();
 ranges.add(keyRanges);
 
+int[] slotSpan = new int[ranges.size()];
+
 ScanRanges scanRanges = ScanRanges.create(schema, ranges, slotSpan, 
null, true, -1);
 
 List<KeyRange> rowKeyRanges = scanRanges.getRowKeyRanges();
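
The fix above re-sizes slotSpan from the schema's field count to
ranges.size(). As the test setup suggests, slotSpan carries one entry per
slot in the ranges list, each counting the extra row-key fields that slot
spans (0 for an ordinary single-column slot), so its length must track the
slots actually used, not the schema width. A minimal sketch of the invariant
(stand-in String ranges; the real code uses List<List<KeyRange>>):

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class SlotSpanDemo {
        public static void main(String[] args) {
            // Two row-key fields in the schema, but only one slot in the query.
            int rowKeySchemaFields = 2;
            List<List<String>> ranges = new ArrayList<>();
            ranges.add(Arrays.asList("A", "B")); // one slot holding two key ranges
            int[] slotSpan = new int[ranges.size()];         // length 1: correct
            int[] oldSlotSpan = new int[rowKeySchemaFields]; // length 2: the old bug
            System.out.println(slotSpan.length + " vs " + oldSlotSpan.length);
        }
    }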



[phoenix] branch master updated: PHOENIX-5311 Fix distributed cluster test resource (hbase table) leak

2019-06-06 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 5303d29  PHOENIX-5311 Fix distributed cluster test resource (hbase 
table) leak
5303d29 is described below

commit 5303d292a4c875f8ee8f484094c897f8e7595e63
Author: István Tóth 
AuthorDate: Thu May 30 14:37:29 2019 +0200

PHOENIX-5311 Fix distributed cluster test resource (hbase table) leak

Change-Id: Ie325febbdf613198e2c2037760d5d6cdc79e997e
---
 .../phoenix/end2end/ParallelStatsDisabledIT.java   |  4 ++--
 .../phoenix/end2end/ParallelStatsEnabledIT.java|  4 ++--
 .../java/org/apache/phoenix/query/BaseTest.java| 25 --
 3 files changed, 23 insertions(+), 10 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsDisabledIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsDisabledIT.java
index 8ea8dc8..a46de49 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsDisabledIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsDisabledIT.java
@@ -62,8 +62,8 @@ public abstract class ParallelStatsDisabledIT extends 
BaseTest {
 }
 
 @AfterClass
-public static void tearDownMiniCluster() throws Exception {
-BaseTest.tearDownMiniClusterIfBeyondThreshold();
+public static void freeResources() throws Exception {
+BaseTest.freeResourcesIfBeyondThreshold();
 }
 
 protected ResultSet executeQuery(Connection conn, QueryBuilder 
queryBuilder) throws SQLException {
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsEnabledIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsEnabledIT.java
index 7028db3..a383ea1 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsEnabledIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsEnabledIT.java
@@ -48,7 +48,7 @@ public abstract class ParallelStatsEnabledIT extends BaseTest 
{
 }
 
 @AfterClass
-public static void tearDownMiniCluster() throws Exception {
-BaseTest.tearDownMiniClusterIfBeyondThreshold();
+public static void freeResources() throws Exception {
+BaseTest.freeResourcesIfBeyondThreshold();
 }
 }
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java 
b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
index e4a8e86..96992ee 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
@@ -19,6 +19,7 @@ package org.apache.phoenix.query;
 
 import static 
org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter.NUM_CONCURRENT_INDEX_WRITER_THREADS_CONF_KEY;
 import static org.apache.phoenix.query.QueryConstants.MILLIS_IN_DAY;
+import static org.apache.phoenix.query.QueryServices.DROP_METADATA_ATTRIB;
 import static org.apache.phoenix.util.PhoenixRuntime.CURRENT_SCN_ATTRIB;
 import static org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL;
 import static org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL_TERMINATOR;
@@ -444,7 +445,7 @@ public abstract class BaseTest {
 boolean isDistributedCluster = isDistributedClusterModeEnabled(conf);
 if (!isDistributedCluster) {
 return initMiniCluster(conf, overrideProps);
-   } else {
+} else {
 return initClusterDistributedMode(conf, overrideProps);
 }
 }
@@ -629,6 +630,11 @@ public abstract class BaseTest {
 private static PhoenixTestDriver newTestDriver(ReadOnlyProps props) throws 
Exception {
 PhoenixTestDriver newDriver;
 String driverClassName = props.get(DRIVER_CLASS_NAME_ATTRIB);
+if(isDistributedClusterModeEnabled(config)) {
+HashMap<String, String> distPropMap = new HashMap<>(1);
+distPropMap.put(DROP_METADATA_ATTRIB, Boolean.TRUE.toString());
+props = new ReadOnlyProps(props, 
distPropMap.entrySet().iterator());
+}
 if (driverClassName == null) {
 newDriver = new PhoenixTestDriver(props);
 } else {
@@ -767,14 +773,21 @@ public abstract class BaseTest {
 return "S" + Integer.toString(MAX_SEQ_SUFFIX_VALUE + 
nextName).substring(1);
 }
 
-public static void tearDownMiniClusterIfBeyondThreshold() throws Exception 
{
+public static void freeResourcesIfBeyondThreshold() throws Exception {
 if (TABLE_COUNTER.get() > TEARDOWN_THRESHOLD) {
 int numTables = TABLE_COUNTER.get();
 TABLE_COUNTER.set(0);
-logger.info(
-"Shutting down mini cluster because number of tables on this 
mini cluster is likely greater than "
-   

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5311 Fix distributed cluster test resource (hbase table) leak

2019-06-06 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 5c49fd5  PHOENIX-5311 Fix distributed cluster test resource (hbase 
table) leak
5c49fd5 is described below

commit 5c49fd54725a77b8f3540e610b49ba46d874916d
Author: István Tóth 
AuthorDate: Thu May 30 14:37:29 2019 +0200

PHOENIX-5311 Fix distributed cluster test resource (hbase table) leak

Change-Id: Ie325febbdf613198e2c2037760d5d6cdc79e997e
---
 .../phoenix/end2end/ParallelStatsDisabledIT.java   |  4 ++--
 .../phoenix/end2end/ParallelStatsEnabledIT.java|  4 ++--
 .../java/org/apache/phoenix/query/BaseTest.java| 25 --
 3 files changed, 23 insertions(+), 10 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsDisabledIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsDisabledIT.java
index 2fcc3ea..ca2cff9 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsDisabledIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsDisabledIT.java
@@ -52,8 +52,8 @@ public abstract class ParallelStatsDisabledIT extends 
BaseTest {
 }
 
 @AfterClass
-public static void tearDownMiniCluster() throws Exception {
-BaseTest.tearDownMiniClusterIfBeyondThreshold();
+public static void freeResources() throws Exception {
+BaseTest.freeResourcesIfBeyondThreshold();
 }
 
 protected ResultSet executeQuery(Connection conn, QueryBuilder 
queryBuilder) throws SQLException {
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsEnabledIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsEnabledIT.java
index 7028db3..a383ea1 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsEnabledIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsEnabledIT.java
@@ -48,7 +48,7 @@ public abstract class ParallelStatsEnabledIT extends BaseTest 
{
 }
 
 @AfterClass
-public static void tearDownMiniCluster() throws Exception {
-BaseTest.tearDownMiniClusterIfBeyondThreshold();
+public static void freeResources() throws Exception {
+BaseTest.freeResourcesIfBeyondThreshold();
 }
 }
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java 
b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
index 329cde2..1197659 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
@@ -19,6 +19,7 @@ package org.apache.phoenix.query;
 
 import static 
org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter.NUM_CONCURRENT_INDEX_WRITER_THREADS_CONF_KEY;
 import static org.apache.phoenix.query.QueryConstants.MILLIS_IN_DAY;
+import static org.apache.phoenix.query.QueryServices.DROP_METADATA_ATTRIB;
 import static org.apache.phoenix.util.PhoenixRuntime.CURRENT_SCN_ATTRIB;
 import static org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL;
 import static org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL_TERMINATOR;
@@ -444,7 +445,7 @@ public abstract class BaseTest {
 boolean isDistributedCluster = isDistributedClusterModeEnabled(conf);
 if (!isDistributedCluster) {
 return initMiniCluster(conf, overrideProps);
-   } else {
+} else {
 return initClusterDistributedMode(conf, overrideProps);
 }
 }
@@ -629,6 +630,11 @@ public abstract class BaseTest {
 private static PhoenixTestDriver newTestDriver(ReadOnlyProps props) throws 
Exception {
 PhoenixTestDriver newDriver;
 String driverClassName = props.get(DRIVER_CLASS_NAME_ATTRIB);
+if(isDistributedClusterModeEnabled(config)) {
+HashMap<String, String> distPropMap = new HashMap<>(1);
+distPropMap.put(DROP_METADATA_ATTRIB, Boolean.TRUE.toString());
+props = new ReadOnlyProps(props, 
distPropMap.entrySet().iterator());
+}
 if (driverClassName == null) {
 newDriver = new PhoenixTestDriver(props);
 } else {
@@ -767,14 +773,21 @@ public abstract class BaseTest {
 return "S" + Integer.toString(MAX_SEQ_SUFFIX_VALUE + 
nextName).substring(1);
 }
 
-public static void tearDownMiniClusterIfBeyondThreshold() throws Exception 
{
+public static void freeResourcesIfBeyondThreshold() throws Exception {
 if (TABLE_COUNTER.get() > TEARDOWN_THRESHOLD) {
 int numTables = TABLE_COUNTER.get();
 TABLE_COUNTER.set(0);
-logger.info(
-"Shutting down mini cluster because number of tables on this 
mini cluster is likely greater than "
-   

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5311 Fix distributed cluster test resource (hbase table) leak

2019-06-06 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 0e05027  PHOENIX-5311 Fix distributed cluster test resource (hbase 
table) leak
0e05027 is described below

commit 0e05027aefa3ac7b8410f4e935a7bce7254bf25d
Author: István Tóth 
AuthorDate: Thu May 30 14:37:29 2019 +0200

PHOENIX-5311 Fix distributed cluster test resource (hbase table) leak

Change-Id: Ie325febbdf613198e2c2037760d5d6cdc79e997e
---
 .../phoenix/end2end/ParallelStatsDisabledIT.java   |  4 ++--
 .../phoenix/end2end/ParallelStatsEnabledIT.java|  4 ++--
 .../java/org/apache/phoenix/query/BaseTest.java| 25 --
 3 files changed, 23 insertions(+), 10 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsDisabledIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsDisabledIT.java
index 8ea8dc8..a46de49 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsDisabledIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsDisabledIT.java
@@ -62,8 +62,8 @@ public abstract class ParallelStatsDisabledIT extends 
BaseTest {
 }
 
 @AfterClass
-public static void tearDownMiniCluster() throws Exception {
-BaseTest.tearDownMiniClusterIfBeyondThreshold();
+public static void freeResources() throws Exception {
+BaseTest.freeResourcesIfBeyondThreshold();
 }
 
 protected ResultSet executeQuery(Connection conn, QueryBuilder 
queryBuilder) throws SQLException {
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsEnabledIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsEnabledIT.java
index 7028db3..a383ea1 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsEnabledIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsEnabledIT.java
@@ -48,7 +48,7 @@ public abstract class ParallelStatsEnabledIT extends BaseTest 
{
 }
 
 @AfterClass
-public static void tearDownMiniCluster() throws Exception {
-BaseTest.tearDownMiniClusterIfBeyondThreshold();
+public static void freeResources() throws Exception {
+BaseTest.freeResourcesIfBeyondThreshold();
 }
 }
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java 
b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
index 305249b..26d2655 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
@@ -19,6 +19,7 @@ package org.apache.phoenix.query;
 
 import static 
org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter.NUM_CONCURRENT_INDEX_WRITER_THREADS_CONF_KEY;
 import static org.apache.phoenix.query.QueryConstants.MILLIS_IN_DAY;
+import static org.apache.phoenix.query.QueryServices.DROP_METADATA_ATTRIB;
 import static org.apache.phoenix.util.PhoenixRuntime.CURRENT_SCN_ATTRIB;
 import static org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL;
 import static org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL_TERMINATOR;
@@ -444,7 +445,7 @@ public abstract class BaseTest {
 boolean isDistributedCluster = isDistributedClusterModeEnabled(conf);
 if (!isDistributedCluster) {
 return initMiniCluster(conf, overrideProps);
-   } else {
+} else {
 return initClusterDistributedMode(conf, overrideProps);
 }
 }
@@ -629,6 +630,11 @@ public abstract class BaseTest {
 private static PhoenixTestDriver newTestDriver(ReadOnlyProps props) throws 
Exception {
 PhoenixTestDriver newDriver;
 String driverClassName = props.get(DRIVER_CLASS_NAME_ATTRIB);
+if(isDistributedClusterModeEnabled(config)) {
+HashMap<String, String> distPropMap = new HashMap<>(1);
+distPropMap.put(DROP_METADATA_ATTRIB, Boolean.TRUE.toString());
+props = new ReadOnlyProps(props, 
distPropMap.entrySet().iterator());
+}
 if (driverClassName == null) {
 newDriver = new PhoenixTestDriver(props);
 } else {
@@ -767,14 +773,21 @@ public abstract class BaseTest {
 return "S" + Integer.toString(MAX_SEQ_SUFFIX_VALUE + 
nextName).substring(1);
 }
 
-public static void tearDownMiniClusterIfBeyondThreshold() throws Exception 
{
+public static void freeResourcesIfBeyondThreshold() throws Exception {
 if (TABLE_COUNTER.get() > TEARDOWN_THRESHOLD) {
 int numTables = TABLE_COUNTER.get();
 TABLE_COUNTER.set(0);
-logger.info(
-"Shutting down mini cluster because number of tables on this 
mini cluster is likely greater than "
-   
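
For context, forcing phoenix.schema.dropMetaData to true means a DROP TABLE issued during test cleanup also removes the backing HBase table, which is what stops the leak on a real distributed cluster (a mini cluster simply gets torn down). A minimal sketch of the property-overlay pattern the diff uses, assuming only the ReadOnlyProps overlay constructor and the DROP_METADATA_ATTRIB key shown above; the helper class and method names are hypothetical:

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.phoenix.util.ReadOnlyProps;

    import static org.apache.phoenix.query.QueryServices.DROP_METADATA_ATTRIB;

    final class DistributedTestProps {
        // Hypothetical helper: layer DROP_METADATA_ATTRIB=true on top of
        // existing read-only properties without mutating them, mirroring
        // the newTestDriver() change in the diff above.
        static ReadOnlyProps withDropMetadata(ReadOnlyProps props) {
            Map<String, String> overrides = new HashMap<>(1);
            overrides.put(DROP_METADATA_ATTRIB, Boolean.TRUE.toString());
            // The overlay constructor returns a new ReadOnlyProps view with
            // the overrides applied on top of the originals.
            return new ReadOnlyProps(props, overrides.entrySet().iterator());
        }
    }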

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5311 Fix distributed cluster test resource (hbase table) leak

2019-06-06 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new ce7221a  PHOENIX-5311 Fix distributed cluster test resource (hbase 
table) leak
ce7221a is described below

commit ce7221aed1cbf90c4ebfe4e970f35f6a26e4f365
Author: István Tóth 
AuthorDate: Thu May 30 14:37:29 2019 +0200

PHOENIX-5311 Fix distributed cluster test resource (hbase table) leak

Change-Id: Ie325febbdf613198e2c2037760d5d6cdc79e997e
---
 .../phoenix/end2end/ParallelStatsDisabledIT.java   |  4 ++--
 .../phoenix/end2end/ParallelStatsEnabledIT.java|  4 ++--
 .../java/org/apache/phoenix/query/BaseTest.java| 25 --
 3 files changed, 23 insertions(+), 10 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsDisabledIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsDisabledIT.java
index 8ea8dc8..a46de49 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsDisabledIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsDisabledIT.java
@@ -62,8 +62,8 @@ public abstract class ParallelStatsDisabledIT extends 
BaseTest {
 }
 
 @AfterClass
-public static void tearDownMiniCluster() throws Exception {
-BaseTest.tearDownMiniClusterIfBeyondThreshold();
+public static void freeResources() throws Exception {
+BaseTest.freeResourcesIfBeyondThreshold();
 }
 
 protected ResultSet executeQuery(Connection conn, QueryBuilder 
queryBuilder) throws SQLException {
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsEnabledIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsEnabledIT.java
index 7028db3..a383ea1 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsEnabledIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsEnabledIT.java
@@ -48,7 +48,7 @@ public abstract class ParallelStatsEnabledIT extends BaseTest 
{
 }
 
 @AfterClass
-public static void tearDownMiniCluster() throws Exception {
-BaseTest.tearDownMiniClusterIfBeyondThreshold();
+public static void freeResources() throws Exception {
+BaseTest.freeResourcesIfBeyondThreshold();
 }
 }
diff --git a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java 
b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
index 305249b..26d2655 100644
--- a/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
+++ b/phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
@@ -19,6 +19,7 @@ package org.apache.phoenix.query;
 
 import static 
org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter.NUM_CONCURRENT_INDEX_WRITER_THREADS_CONF_KEY;
 import static org.apache.phoenix.query.QueryConstants.MILLIS_IN_DAY;
+import static org.apache.phoenix.query.QueryServices.DROP_METADATA_ATTRIB;
 import static org.apache.phoenix.util.PhoenixRuntime.CURRENT_SCN_ATTRIB;
 import static org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL;
 import static org.apache.phoenix.util.PhoenixRuntime.JDBC_PROTOCOL_TERMINATOR;
@@ -444,7 +445,7 @@ public abstract class BaseTest {
 boolean isDistributedCluster = isDistributedClusterModeEnabled(conf);
 if (!isDistributedCluster) {
 return initMiniCluster(conf, overrideProps);
-   } else {
+} else {
 return initClusterDistributedMode(conf, overrideProps);
 }
 }
@@ -629,6 +630,11 @@ public abstract class BaseTest {
 private static PhoenixTestDriver newTestDriver(ReadOnlyProps props) throws 
Exception {
 PhoenixTestDriver newDriver;
 String driverClassName = props.get(DRIVER_CLASS_NAME_ATTRIB);
+if(isDistributedClusterModeEnabled(config)) {
+HashMap<String, String> distPropMap = new HashMap<>(1);
+distPropMap.put(DROP_METADATA_ATTRIB, Boolean.TRUE.toString());
+props = new ReadOnlyProps(props, 
distPropMap.entrySet().iterator());
+}
 if (driverClassName == null) {
 newDriver = new PhoenixTestDriver(props);
 } else {
@@ -767,14 +773,21 @@ public abstract class BaseTest {
 return "S" + Integer.toString(MAX_SEQ_SUFFIX_VALUE + 
nextName).substring(1);
 }
 
-public static void tearDownMiniClusterIfBeyondThreshold() throws Exception 
{
+public static void freeResourcesIfBeyondThreshold() throws Exception {
 if (TABLE_COUNTER.get() > TEARDOWN_THRESHOLD) {
 int numTables = TABLE_COUNTER.get();
 TABLE_COUNTER.set(0);
-logger.info(
-"Shutting down mini cluster because number of tables on this 
mini cluster is likely greater than "
-   

[phoenix] branch master updated: PHOENIX-5313: All mappers grab all RegionLocations from .META

2019-06-20 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 493afe2  PHOENIX-5313: All mappers grab all RegionLocations from .META
493afe2 is described below

commit 493afe2c874e9e8f3fb52ab662ebcfa16f715ec2
Author: Chinmay Kulkarni 
AuthorDate: Wed Jun 19 13:40:27 2019 -0700

PHOENIX-5313: All mappers grab all RegionLocations from .META
---
 .../org/apache/phoenix/end2end/MapReduceIT.java| 50 ++--
 .../iterate/MapReduceParallelScanGrouper.java  |  6 +--
 .../phoenix/mapreduce/PhoenixInputFormat.java  | 41 
 .../phoenix/mapreduce/PhoenixRecordReader.java | 13 --
 .../mapreduce/util/PhoenixMapReduceUtil.java   | 20 
 .../TestingMapReduceParallelScanGrouper.java   | 54 ++
 .../mapreduce/PhoenixTestingInputFormat.java   | 46 ++
 7 files changed, 201 insertions(+), 29 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java
index fb24bb2..2460cd2 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java
@@ -25,20 +25,30 @@ import org.apache.hadoop.mapreduce.Job;
 import org.apache.hadoop.mapreduce.Mapper;
 import org.apache.hadoop.mapreduce.Reducer;
 import org.apache.hadoop.mapreduce.lib.db.DBWritable;
+import org.apache.phoenix.iterate.TestingMapReduceParallelScanGrouper;
 import org.apache.phoenix.mapreduce.PhoenixOutputFormat;
+import org.apache.phoenix.mapreduce.PhoenixTestingInputFormat;
 import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
 import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;
 import org.apache.phoenix.schema.types.PDouble;
 import org.apache.phoenix.schema.types.PhoenixArray;
 import org.apache.phoenix.util.PhoenixRuntime;
+import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
 
 import java.io.IOException;
-import java.sql.*;
+import java.sql.Array;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
 import java.util.Properties;
 
-import static org.junit.Assert.*;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
 
 /**
  * Test that our MapReduce basic tools work as expected
@@ -48,28 +58,37 @@ public class MapReduceIT extends ParallelStatsDisabledIT {
 private static final String STOCK_NAME = "STOCK_NAME";
 private static final String RECORDING_YEAR = "RECORDING_YEAR";
 private static final String RECORDINGS_QUARTER = "RECORDINGS_QUARTER";
-private  String CREATE_STOCK_TABLE = "CREATE TABLE IF NOT EXISTS %s ( " +
+
+// We pre-split the table to ensure that we have multiple mappers.
+// This is used to test scenarios with more than 1 mapper
+private static final String CREATE_STOCK_TABLE = "CREATE TABLE IF NOT 
EXISTS %s ( " +
 " STOCK_NAME VARCHAR NOT NULL , RECORDING_YEAR  INTEGER NOT  NULL, 
 RECORDINGS_QUARTER " +
-" DOUBLE array[] CONSTRAINT pk PRIMARY KEY ( STOCK_NAME, 
RECORDING_YEAR ))";
+" DOUBLE array[] CONSTRAINT pk PRIMARY KEY ( STOCK_NAME, 
RECORDING_YEAR )) "
++ "SPLIT ON ('AA')";
 
 private static final String CREATE_STOCK_VIEW = "CREATE VIEW IF NOT EXISTS 
%s (v1 VARCHAR) AS "
 + " SELECT * FROM %s WHERE RECORDING_YEAR = 2008";
 
 private static final String MAX_RECORDING = "MAX_RECORDING";
-private  String CREATE_STOCK_STATS_TABLE =
+private static final String CREATE_STOCK_STATS_TABLE =
 "CREATE TABLE IF NOT EXISTS %s(STOCK_NAME VARCHAR NOT NULL , "
 + " MAX_RECORDING DOUBLE CONSTRAINT pk PRIMARY KEY 
(STOCK_NAME ))";
 
 
-private String UPSERT = "UPSERT into %s values (?, ?, ?)";
+private static final String UPSERT = "UPSERT into %s values (?, ?, ?)";
 
-private String TENANT_ID = "1234567890";
+private static final String TENANT_ID = "1234567890";
 
 @Before
 public void setupTables() throws Exception {
 
 }
 
+@After
+public void clearCountersForScanGrouper() {
+
TestingMapReduceParallelScanGrouper.clearNumCallsToGetRegionBoundaries();
+}
+
 @Test
 public void testNoConditionsOnSelect() throws Exception {
 try (Connection conn = DriverManager.getConnection(getUrl())) {
@@ -93,7 +112,8 @@ public class MapReduceIT extends ParallelStatsDis
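
The gist of the fix: region locations for the scanned table should be fetched from .META once per MapReduce job, not once per mapper, and the new test plumbing makes that observable. The table is pre-split (SPLIT ON ('AA')) so the job runs more than one mapper, and TestingMapReduceParallelScanGrouper counts boundary lookups. A sketch of that counting idea, assuming a thread-safe static counter; the class body below is illustrative, not the committed code:

    import java.util.concurrent.atomic.AtomicInteger;

    // Sketch of a test-only scan grouper's bookkeeping (the real class
    // subclasses MapReduceParallelScanGrouper; this body is an assumption).
    class CountingScanGrouperSketch {
        private static final AtomicInteger NUM_REGION_BOUNDARY_CALLS = new AtomicInteger();

        // Invoked wherever the real grouper consults .META for region
        // boundaries, so a test can assert the lookup happens once per job
        // rather than once per mapper.
        static void recordGetRegionBoundaries() {
            NUM_REGION_BOUNDARY_CALLS.incrementAndGet();
        }

        static int getNumCallsToGetRegionBoundaries() {
            return NUM_REGION_BOUNDARY_CALLS.get();
        }

        // Reset between tests, as the @After hook in the diff above does.
        static void clearNumCallsToGetRegionBoundaries() {
            NUM_REGION_BOUNDARY_CALLS.set(0);
        }
    }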

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5313: All mappers grab all RegionLocations from .META

2019-06-20 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 8e3b6da  PHOENIX-5313: All mappers grab all RegionLocations from .META
8e3b6da is described below

commit 8e3b6daf0165305c264ddb6c7cf7ef8457c8df85
Author: Chinmay Kulkarni 
AuthorDate: Wed Jun 19 13:40:27 2019 -0700

PHOENIX-5313: All mappers grab all RegionLocations from .META
---
 .../org/apache/phoenix/end2end/MapReduceIT.java| 50 ++--
 .../iterate/MapReduceParallelScanGrouper.java  |  4 +-
 .../phoenix/mapreduce/PhoenixInputFormat.java  | 41 
 .../phoenix/mapreduce/PhoenixRecordReader.java | 12 +++--
 .../mapreduce/util/PhoenixMapReduceUtil.java   | 20 
 .../TestingMapReduceParallelScanGrouper.java   | 54 ++
 .../mapreduce/PhoenixTestingInputFormat.java   | 46 ++
 7 files changed, 201 insertions(+), 26 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java
index fb24bb2..2460cd2 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java
@@ -25,20 +25,30 @@ import org.apache.hadoop.mapreduce.Job;
 import org.apache.hadoop.mapreduce.Mapper;
 import org.apache.hadoop.mapreduce.Reducer;
 import org.apache.hadoop.mapreduce.lib.db.DBWritable;
+import org.apache.phoenix.iterate.TestingMapReduceParallelScanGrouper;
 import org.apache.phoenix.mapreduce.PhoenixOutputFormat;
+import org.apache.phoenix.mapreduce.PhoenixTestingInputFormat;
 import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
 import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;
 import org.apache.phoenix.schema.types.PDouble;
 import org.apache.phoenix.schema.types.PhoenixArray;
 import org.apache.phoenix.util.PhoenixRuntime;
+import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
 
 import java.io.IOException;
-import java.sql.*;
+import java.sql.Array;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
 import java.util.Properties;
 
-import static org.junit.Assert.*;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
 
 /**
  * Test that our MapReduce basic tools work as expected
@@ -48,28 +58,37 @@ public class MapReduceIT extends ParallelStatsDisabledIT {
 private static final String STOCK_NAME = "STOCK_NAME";
 private static final String RECORDING_YEAR = "RECORDING_YEAR";
 private static final String RECORDINGS_QUARTER = "RECORDINGS_QUARTER";
-private  String CREATE_STOCK_TABLE = "CREATE TABLE IF NOT EXISTS %s ( " +
+
+// We pre-split the table to ensure that we have multiple mappers.
+// This is used to test scenarios with more than 1 mapper
+private static final String CREATE_STOCK_TABLE = "CREATE TABLE IF NOT 
EXISTS %s ( " +
 " STOCK_NAME VARCHAR NOT NULL , RECORDING_YEAR  INTEGER NOT  NULL, 
 RECORDINGS_QUARTER " +
-" DOUBLE array[] CONSTRAINT pk PRIMARY KEY ( STOCK_NAME, 
RECORDING_YEAR ))";
+" DOUBLE array[] CONSTRAINT pk PRIMARY KEY ( STOCK_NAME, 
RECORDING_YEAR )) "
++ "SPLIT ON ('AA')";
 
 private static final String CREATE_STOCK_VIEW = "CREATE VIEW IF NOT EXISTS 
%s (v1 VARCHAR) AS "
 + " SELECT * FROM %s WHERE RECORDING_YEAR = 2008";
 
 private static final String MAX_RECORDING = "MAX_RECORDING";
-private  String CREATE_STOCK_STATS_TABLE =
+private static final String CREATE_STOCK_STATS_TABLE =
 "CREATE TABLE IF NOT EXISTS %s(STOCK_NAME VARCHAR NOT NULL , "
 + " MAX_RECORDING DOUBLE CONSTRAINT pk PRIMARY KEY 
(STOCK_NAME ))";
 
 
-private String UPSERT = "UPSERT into %s values (?, ?, ?)";
+private static final String UPSERT = "UPSERT into %s values (?, ?, ?)";
 
-private String TENANT_ID = "1234567890";
+private static final String TENANT_ID = "1234567890";
 
 @Before
 public void setupTables() throws Exception {
 
 }
 
+@After
+public void clearCountersForScanGrouper() {
+
TestingMapReduceParallelScanGrouper.clearNumCallsToGetRegionBoundaries();
+}
+
 @Test
 public void testNoConditionsOnSelect() throws Exception {
 try (Connection conn = DriverManager.getConnection(getUrl())) {
@@ -93,7 +112,8 @@ public class MapReduceIT

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5313: All mappers grab all RegionLocations from .META

2019-06-20 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new cf8afd8  PHOENIX-5313: All mappers grab all RegionLocations from .META
cf8afd8 is described below

commit cf8afd8a733768ccbfc5b71d298c0fe15c826c35
Author: Chinmay Kulkarni 
AuthorDate: Wed Jun 19 13:40:27 2019 -0700

PHOENIX-5313: All mappers grab all RegionLocations from .META
---
 .../org/apache/phoenix/end2end/MapReduceIT.java| 50 ++--
 .../iterate/MapReduceParallelScanGrouper.java  |  4 +-
 .../phoenix/mapreduce/PhoenixInputFormat.java  | 41 
 .../phoenix/mapreduce/PhoenixRecordReader.java | 12 +++--
 .../mapreduce/util/PhoenixMapReduceUtil.java   | 20 
 .../TestingMapReduceParallelScanGrouper.java   | 54 ++
 .../mapreduce/PhoenixTestingInputFormat.java   | 46 ++
 7 files changed, 201 insertions(+), 26 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java
index fb24bb2..2460cd2 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java
@@ -25,20 +25,30 @@ import org.apache.hadoop.mapreduce.Job;
 import org.apache.hadoop.mapreduce.Mapper;
 import org.apache.hadoop.mapreduce.Reducer;
 import org.apache.hadoop.mapreduce.lib.db.DBWritable;
+import org.apache.phoenix.iterate.TestingMapReduceParallelScanGrouper;
 import org.apache.phoenix.mapreduce.PhoenixOutputFormat;
+import org.apache.phoenix.mapreduce.PhoenixTestingInputFormat;
 import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
 import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;
 import org.apache.phoenix.schema.types.PDouble;
 import org.apache.phoenix.schema.types.PhoenixArray;
 import org.apache.phoenix.util.PhoenixRuntime;
+import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
 
 import java.io.IOException;
-import java.sql.*;
+import java.sql.Array;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
 import java.util.Properties;
 
-import static org.junit.Assert.*;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
 
 /**
  * Test that our MapReduce basic tools work as expected
@@ -48,28 +58,37 @@ public class MapReduceIT extends ParallelStatsDisabledIT {
 private static final String STOCK_NAME = "STOCK_NAME";
 private static final String RECORDING_YEAR = "RECORDING_YEAR";
 private static final String RECORDINGS_QUARTER = "RECORDINGS_QUARTER";
-private  String CREATE_STOCK_TABLE = "CREATE TABLE IF NOT EXISTS %s ( " +
+
+// We pre-split the table to ensure that we have multiple mappers.
+// This is used to test scenarios with more than 1 mapper
+private static final String CREATE_STOCK_TABLE = "CREATE TABLE IF NOT 
EXISTS %s ( " +
 " STOCK_NAME VARCHAR NOT NULL , RECORDING_YEAR  INTEGER NOT  NULL, 
 RECORDINGS_QUARTER " +
-" DOUBLE array[] CONSTRAINT pk PRIMARY KEY ( STOCK_NAME, 
RECORDING_YEAR ))";
+" DOUBLE array[] CONSTRAINT pk PRIMARY KEY ( STOCK_NAME, 
RECORDING_YEAR )) "
++ "SPLIT ON ('AA')";
 
 private static final String CREATE_STOCK_VIEW = "CREATE VIEW IF NOT EXISTS 
%s (v1 VARCHAR) AS "
 + " SELECT * FROM %s WHERE RECORDING_YEAR = 2008";
 
 private static final String MAX_RECORDING = "MAX_RECORDING";
-private  String CREATE_STOCK_STATS_TABLE =
+private static final String CREATE_STOCK_STATS_TABLE =
 "CREATE TABLE IF NOT EXISTS %s(STOCK_NAME VARCHAR NOT NULL , "
 + " MAX_RECORDING DOUBLE CONSTRAINT pk PRIMARY KEY 
(STOCK_NAME ))";
 
 
-private String UPSERT = "UPSERT into %s values (?, ?, ?)";
+private static final String UPSERT = "UPSERT into %s values (?, ?, ?)";
 
-private String TENANT_ID = "1234567890";
+private static final String TENANT_ID = "1234567890";
 
 @Before
 public void setupTables() throws Exception {
 
 }
 
+@After
+public void clearCountersForScanGrouper() {
+
TestingMapReduceParallelScanGrouper.clearNumCallsToGetRegionBoundaries();
+}
+
 @Test
 public void testNoConditionsOnSelect() throws Exception {
 try (Connection conn = DriverManager.getConnection(getUrl())) {
@@ -93,7 +112,8 @@ public class MapReduceIT

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5313: All mappers grab all RegionLocations from .META

2019-06-20 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new edee198  PHOENIX-5313: All mappers grab all RegionLocations from .META
edee198 is described below

commit edee198824974a55e5dd31827a7749f8a98d937c
Author: Chinmay Kulkarni 
AuthorDate: Wed Jun 19 13:40:27 2019 -0700

PHOENIX-5313: All mappers grab all RegionLocations from .META
---
 .../org/apache/phoenix/end2end/MapReduceIT.java| 50 ++--
 .../iterate/MapReduceParallelScanGrouper.java  |  4 +-
 .../phoenix/mapreduce/PhoenixInputFormat.java  | 41 
 .../phoenix/mapreduce/PhoenixRecordReader.java | 12 +++--
 .../mapreduce/util/PhoenixMapReduceUtil.java   | 20 
 .../TestingMapReduceParallelScanGrouper.java   | 54 ++
 .../mapreduce/PhoenixTestingInputFormat.java   | 46 ++
 7 files changed, 201 insertions(+), 26 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java
index fb24bb2..2460cd2 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java
@@ -25,20 +25,30 @@ import org.apache.hadoop.mapreduce.Job;
 import org.apache.hadoop.mapreduce.Mapper;
 import org.apache.hadoop.mapreduce.Reducer;
 import org.apache.hadoop.mapreduce.lib.db.DBWritable;
+import org.apache.phoenix.iterate.TestingMapReduceParallelScanGrouper;
 import org.apache.phoenix.mapreduce.PhoenixOutputFormat;
+import org.apache.phoenix.mapreduce.PhoenixTestingInputFormat;
 import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
 import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;
 import org.apache.phoenix.schema.types.PDouble;
 import org.apache.phoenix.schema.types.PhoenixArray;
 import org.apache.phoenix.util.PhoenixRuntime;
+import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
 
 import java.io.IOException;
-import java.sql.*;
+import java.sql.Array;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
 import java.util.Properties;
 
-import static org.junit.Assert.*;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
 
 /**
  * Test that our MapReduce basic tools work as expected
@@ -48,28 +58,37 @@ public class MapReduceIT extends ParallelStatsDisabledIT {
 private static final String STOCK_NAME = "STOCK_NAME";
 private static final String RECORDING_YEAR = "RECORDING_YEAR";
 private static final String RECORDINGS_QUARTER = "RECORDINGS_QUARTER";
-private  String CREATE_STOCK_TABLE = "CREATE TABLE IF NOT EXISTS %s ( " +
+
+// We pre-split the table to ensure that we have multiple mappers.
+// This is used to test scenarios with more than 1 mapper
+private static final String CREATE_STOCK_TABLE = "CREATE TABLE IF NOT 
EXISTS %s ( " +
 " STOCK_NAME VARCHAR NOT NULL , RECORDING_YEAR  INTEGER NOT  NULL, 
 RECORDINGS_QUARTER " +
-" DOUBLE array[] CONSTRAINT pk PRIMARY KEY ( STOCK_NAME, 
RECORDING_YEAR ))";
+" DOUBLE array[] CONSTRAINT pk PRIMARY KEY ( STOCK_NAME, 
RECORDING_YEAR )) "
++ "SPLIT ON ('AA')";
 
 private static final String CREATE_STOCK_VIEW = "CREATE VIEW IF NOT EXISTS 
%s (v1 VARCHAR) AS "
 + " SELECT * FROM %s WHERE RECORDING_YEAR = 2008";
 
 private static final String MAX_RECORDING = "MAX_RECORDING";
-private  String CREATE_STOCK_STATS_TABLE =
+private static final String CREATE_STOCK_STATS_TABLE =
 "CREATE TABLE IF NOT EXISTS %s(STOCK_NAME VARCHAR NOT NULL , "
 + " MAX_RECORDING DOUBLE CONSTRAINT pk PRIMARY KEY 
(STOCK_NAME ))";
 
 
-private String UPSERT = "UPSERT into %s values (?, ?, ?)";
+private static final String UPSERT = "UPSERT into %s values (?, ?, ?)";
 
-private String TENANT_ID = "1234567890";
+private static final String TENANT_ID = "1234567890";
 
 @Before
 public void setupTables() throws Exception {
 
 }
 
+@After
+public void clearCountersForScanGrouper() {
+
TestingMapReduceParallelScanGrouper.clearNumCallsToGetRegionBoundaries();
+}
+
 @Test
 public void testNoConditionsOnSelect() throws Exception {
 try (Connection conn = DriverManager.getConnection(getUrl())) {
@@ -93,7 +112,8 @@ public class MapReduceIT

[phoenix] branch 4.14-HBase-1.4 updated: PHOENIX-5313: All mappers grab all RegionLocations from .META

2019-06-20 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.14-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.4 by this push:
 new 704f938  PHOENIX-5313: All mappers grab all RegionLocations from .META
704f938 is described below

commit 704f9382036b6bfd48a555cfefa479f4b4f7f6cc
Author: Chinmay Kulkarni 
AuthorDate: Wed Jun 19 13:40:27 2019 -0700

PHOENIX-5313: All mappers grab all RegionLocations from .META
---
 .../org/apache/phoenix/end2end/MapReduceIT.java| 50 ++--
 .../iterate/MapReduceParallelScanGrouper.java  |  4 +-
 .../phoenix/mapreduce/PhoenixInputFormat.java  | 41 
 .../phoenix/mapreduce/PhoenixRecordReader.java | 12 +++--
 .../mapreduce/util/PhoenixMapReduceUtil.java   | 20 
 .../TestingMapReduceParallelScanGrouper.java   | 54 ++
 .../mapreduce/PhoenixTestingInputFormat.java   | 46 ++
 7 files changed, 201 insertions(+), 26 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java
index fb24bb2..2460cd2 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java
@@ -25,20 +25,30 @@ import org.apache.hadoop.mapreduce.Job;
 import org.apache.hadoop.mapreduce.Mapper;
 import org.apache.hadoop.mapreduce.Reducer;
 import org.apache.hadoop.mapreduce.lib.db.DBWritable;
+import org.apache.phoenix.iterate.TestingMapReduceParallelScanGrouper;
 import org.apache.phoenix.mapreduce.PhoenixOutputFormat;
+import org.apache.phoenix.mapreduce.PhoenixTestingInputFormat;
 import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
 import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;
 import org.apache.phoenix.schema.types.PDouble;
 import org.apache.phoenix.schema.types.PhoenixArray;
 import org.apache.phoenix.util.PhoenixRuntime;
+import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
 
 import java.io.IOException;
-import java.sql.*;
+import java.sql.Array;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
 import java.util.Properties;
 
-import static org.junit.Assert.*;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
 
 /**
  * Test that our MapReduce basic tools work as expected
@@ -48,28 +58,37 @@ public class MapReduceIT extends ParallelStatsDisabledIT {
 private static final String STOCK_NAME = "STOCK_NAME";
 private static final String RECORDING_YEAR = "RECORDING_YEAR";
 private static final String RECORDINGS_QUARTER = "RECORDINGS_QUARTER";
-private  String CREATE_STOCK_TABLE = "CREATE TABLE IF NOT EXISTS %s ( " +
+
+// We pre-split the table to ensure that we have multiple mappers.
+// This is used to test scenarios with more than 1 mapper
+private static final String CREATE_STOCK_TABLE = "CREATE TABLE IF NOT 
EXISTS %s ( " +
 " STOCK_NAME VARCHAR NOT NULL , RECORDING_YEAR  INTEGER NOT  NULL, 
 RECORDINGS_QUARTER " +
-" DOUBLE array[] CONSTRAINT pk PRIMARY KEY ( STOCK_NAME, 
RECORDING_YEAR ))";
+" DOUBLE array[] CONSTRAINT pk PRIMARY KEY ( STOCK_NAME, 
RECORDING_YEAR )) "
++ "SPLIT ON ('AA')";
 
 private static final String CREATE_STOCK_VIEW = "CREATE VIEW IF NOT EXISTS 
%s (v1 VARCHAR) AS "
 + " SELECT * FROM %s WHERE RECORDING_YEAR = 2008";
 
 private static final String MAX_RECORDING = "MAX_RECORDING";
-private  String CREATE_STOCK_STATS_TABLE =
+private static final String CREATE_STOCK_STATS_TABLE =
 "CREATE TABLE IF NOT EXISTS %s(STOCK_NAME VARCHAR NOT NULL , "
 + " MAX_RECORDING DOUBLE CONSTRAINT pk PRIMARY KEY 
(STOCK_NAME ))";
 
 
-private String UPSERT = "UPSERT into %s values (?, ?, ?)";
+private static final String UPSERT = "UPSERT into %s values (?, ?, ?)";
 
-private String TENANT_ID = "1234567890";
+private static final String TENANT_ID = "1234567890";
 
 @Before
 public void setupTables() throws Exception {
 
 }
 
+@After
+public void clearCountersForScanGrouper() {
+
TestingMapReduceParallelScanGrouper.clearNumCallsToGetRegionBoundaries();
+}
+
 @Test
 public void testNoConditionsOnSelect() throws Exception {
 try (Connection conn = DriverManager.getConnection(getUrl())) {
@@ -93,7 +112,8 @@ public class MapReduceIT

[phoenix] branch 4.14-HBase-1.3 updated: PHOENIX-5313: All mappers grab all RegionLocations from .META

2019-06-20 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.3 by this push:
 new ddb40b1  PHOENIX-5313: All mappers grab all RegionLocations from .META
ddb40b1 is described below

commit ddb40b1608dc4975fd0dca98a2343062bca537c7
Author: Chinmay Kulkarni 
AuthorDate: Wed Jun 19 13:40:27 2019 -0700

PHOENIX-5313: All mappers grab all RegionLocations from .META
---
 .../org/apache/phoenix/end2end/MapReduceIT.java| 50 ++--
 .../iterate/MapReduceParallelScanGrouper.java  |  4 +-
 .../phoenix/mapreduce/PhoenixInputFormat.java  | 41 
 .../phoenix/mapreduce/PhoenixRecordReader.java | 12 +++--
 .../mapreduce/util/PhoenixMapReduceUtil.java   | 20 
 .../TestingMapReduceParallelScanGrouper.java   | 54 ++
 .../mapreduce/PhoenixTestingInputFormat.java   | 46 ++
 7 files changed, 201 insertions(+), 26 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java
index fb24bb2..2460cd2 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/MapReduceIT.java
@@ -25,20 +25,30 @@ import org.apache.hadoop.mapreduce.Job;
 import org.apache.hadoop.mapreduce.Mapper;
 import org.apache.hadoop.mapreduce.Reducer;
 import org.apache.hadoop.mapreduce.lib.db.DBWritable;
+import org.apache.phoenix.iterate.TestingMapReduceParallelScanGrouper;
 import org.apache.phoenix.mapreduce.PhoenixOutputFormat;
+import org.apache.phoenix.mapreduce.PhoenixTestingInputFormat;
 import org.apache.phoenix.mapreduce.util.PhoenixConfigurationUtil;
 import org.apache.phoenix.mapreduce.util.PhoenixMapReduceUtil;
 import org.apache.phoenix.schema.types.PDouble;
 import org.apache.phoenix.schema.types.PhoenixArray;
 import org.apache.phoenix.util.PhoenixRuntime;
+import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
 
 import java.io.IOException;
-import java.sql.*;
+import java.sql.Array;
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.ResultSet;
+import java.sql.SQLException;
 import java.util.Properties;
 
-import static org.junit.Assert.*;
+import static org.junit.Assert.assertEquals;
+import static org.junit.Assert.assertFalse;
+import static org.junit.Assert.assertTrue;
 
 /**
  * Test that our MapReduce basic tools work as expected
@@ -48,28 +58,37 @@ public class MapReduceIT extends ParallelStatsDisabledIT {
 private static final String STOCK_NAME = "STOCK_NAME";
 private static final String RECORDING_YEAR = "RECORDING_YEAR";
 private static final String RECORDINGS_QUARTER = "RECORDINGS_QUARTER";
-private  String CREATE_STOCK_TABLE = "CREATE TABLE IF NOT EXISTS %s ( " +
+
+// We pre-split the table to ensure that we have multiple mappers.
+// This is used to test scenarios with more than 1 mapper
+private static final String CREATE_STOCK_TABLE = "CREATE TABLE IF NOT 
EXISTS %s ( " +
 " STOCK_NAME VARCHAR NOT NULL , RECORDING_YEAR  INTEGER NOT  NULL, 
 RECORDINGS_QUARTER " +
-" DOUBLE array[] CONSTRAINT pk PRIMARY KEY ( STOCK_NAME, 
RECORDING_YEAR ))";
+" DOUBLE array[] CONSTRAINT pk PRIMARY KEY ( STOCK_NAME, 
RECORDING_YEAR )) "
++ "SPLIT ON ('AA')";
 
 private static final String CREATE_STOCK_VIEW = "CREATE VIEW IF NOT EXISTS 
%s (v1 VARCHAR) AS "
 + " SELECT * FROM %s WHERE RECORDING_YEAR = 2008";
 
 private static final String MAX_RECORDING = "MAX_RECORDING";
-private  String CREATE_STOCK_STATS_TABLE =
+private static final String CREATE_STOCK_STATS_TABLE =
 "CREATE TABLE IF NOT EXISTS %s(STOCK_NAME VARCHAR NOT NULL , "
 + " MAX_RECORDING DOUBLE CONSTRAINT pk PRIMARY KEY 
(STOCK_NAME ))";
 
 
-private String UPSERT = "UPSERT into %s values (?, ?, ?)";
+private static final String UPSERT = "UPSERT into %s values (?, ?, ?)";
 
-private String TENANT_ID = "1234567890";
+private static final String TENANT_ID = "1234567890";
 
 @Before
 public void setupTables() throws Exception {
 
 }
 
+@After
+public void clearCountersForScanGrouper() {
+
TestingMapReduceParallelScanGrouper.clearNumCallsToGetRegionBoundaries();
+}
+
 @Test
 public void testNoConditionsOnSelect() throws Exception {
 try (Connection conn = DriverManager.getConnection(getUrl())) {
@@ -93,7 +112,8 @@ public class MapReduceIT

[phoenix] branch master updated: PHOENIX-5374: Incorrect exception thrown in some cases when client does not have Exec permissions on SYSTEM:CATALOG

2019-06-26 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 33d6b34  PHOENIX-5374: Incorrect exception thrown in some cases when 
client does not have Exec permissions on SYSTEM:CATALOG
33d6b34 is described below

commit 33d6b3414078b1a429be056f271bfd0ac8c7f158
Author: Chinmay Kulkarni 
AuthorDate: Tue Jun 25 22:36:23 2019 -0700

PHOENIX-5374: Incorrect exception thrown in some cases when client does not 
have Exec permissions on SYSTEM:CATALOG
---
 .../phoenix/end2end/PermissionNSEnabledIT.java | 49 ++
 .../phoenix/query/ConnectionQueryServicesImpl.java | 10 +++--
 2 files changed, 56 insertions(+), 3 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/PermissionNSEnabledIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/PermissionNSEnabledIT.java
index 22fc297..36fdafc 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/PermissionNSEnabledIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/PermissionNSEnabledIT.java
@@ -17,13 +17,23 @@
  */
 package org.apache.phoenix.end2end;
 
+import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.security.AccessDeniedException;
 import org.apache.hadoop.hbase.security.access.AccessControlClient;
 import org.apache.hadoop.hbase.security.access.Permission;
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.util.SchemaUtil;
 import org.junit.BeforeClass;
 import org.junit.Test;
 
 import java.security.PrivilegedExceptionAction;
+import java.sql.Connection;
+import java.sql.SQLException;
+
+import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_TABLE;
+import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_SCHEMA_NAME;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 public class PermissionNSEnabledIT extends BasePermissionsIT {
 
@@ -67,4 +77,43 @@ public class PermissionNSEnabledIT extends BasePermissionsIT 
{
 revokeAll();
 }
 }
+
+@Test
+public void testConnectionCreationFailsWhenNoExecPermsOnSystemCatalog() 
throws Throwable {
+try {
+grantSystemTableAccess();
+superUser1.runAs((PrivilegedExceptionAction<Void>) () -> {
+TableName systemCatalogTableName =
+TableName.valueOf(SchemaUtil.getPhysicalHBaseTableName(
+SYSTEM_SCHEMA_NAME, SYSTEM_CATALOG_TABLE, 
true).getString());
+try {
+// Revoke Exec permissions for SYSTEM CATALOG for the 
unprivileged user
+AccessControlClient.revoke(getUtility().getConnection(), 
systemCatalogTableName,
+unprivilegedUser.getShortName(), null, null, 
Permission.Action.EXEC);
+} catch (Throwable t) {
+if (t instanceof Exception) {
+throw (Exception)t;
+} else {
+throw new Exception(t);
+}
+}
+return null;
+});
+unprivilegedUser.runAs((PrivilegedExceptionAction<Void>) () -> {
+try (Connection ignored = getConnection()) {
+// We expect this to throw a wrapped AccessDeniedException.
+fail("Should have failed with a wrapped 
AccessDeniedException");
+} catch (Throwable ex) {
+assertTrue("Should not get an incompatible jars exception",
+ex instanceof SQLException && 
((SQLException)ex).getErrorCode() !=
+
SQLExceptionCode.INCOMPATIBLE_CLIENT_SERVER_JAR.getErrorCode());
+assertTrue("Expected a wrapped AccessDeniedException",
+ex.getCause() instanceof AccessDeniedException);
+}
+return null;
+});
+} finally {
+revokeAll();
+}
+}
 }
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index d5d3d34..e2eb079 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -1365,8 +1365,12 @@ public class ConnectionQueryServicesImpl extends 
DelegateQueryServices implement
 return MetaDataUtil.areClientAndServerCompatible(serverVersion);
 }
 
-private void checkClientServerCompatibility(byte[] metaTable) throws 
SQLException {
-StringBuilder buf = ne
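
The new test pins down the expected failure mode: when the client lacks Exec permission on SYSTEM:CATALOG, connection setup must surface a SQLException wrapping HBase's AccessDeniedException rather than the misleading incompatible client/server jar error. A sketch of that check, extracted for illustration (the class and method names are hypothetical):

    import java.sql.SQLException;

    import org.apache.hadoop.hbase.security.AccessDeniedException;
    import org.apache.phoenix.exception.SQLExceptionCode;

    final class PermissionFailureCheck {
        // Mirrors the assertions in the test above: the failure must be a
        // SQLException that is not the incompatible-jars error code and whose
        // cause is an HBase AccessDeniedException.
        static boolean isWrappedAccessDenied(Throwable ex) {
            return ex instanceof SQLException
                    && ((SQLException) ex).getErrorCode()
                            != SQLExceptionCode.INCOMPATIBLE_CLIENT_SERVER_JAR.getErrorCode()
                    && ex.getCause() instanceof AccessDeniedException;
        }
    }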

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5374: Incorrect exception thrown in some cases when client does not have Exec permissions on SYSTEM:CATALOG

2019-06-26 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new bd8e735  PHOENIX-5374: Incorrect exception thrown in some cases when 
client does not have Exec permissions on SYSTEM:CATALOG
bd8e735 is described below

commit bd8e7359d1585c73ec6bc1c2ec719cf98d25ffb2
Author: Chinmay Kulkarni 
AuthorDate: Tue Jun 25 22:36:23 2019 -0700

PHOENIX-5374: Incorrect exception thrown in some cases when client does not 
have Exec permissions on SYSTEM:CATALOG
---
 .../phoenix/end2end/PermissionNSEnabledIT.java | 57 ++
 .../phoenix/query/ConnectionQueryServicesImpl.java | 10 ++--
 2 files changed, 64 insertions(+), 3 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/PermissionNSEnabledIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/PermissionNSEnabledIT.java
index 22fc297..30f3a08 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/PermissionNSEnabledIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/PermissionNSEnabledIT.java
@@ -17,13 +17,23 @@
  */
 package org.apache.phoenix.end2end;
 
+import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.security.AccessDeniedException;
 import org.apache.hadoop.hbase.security.access.AccessControlClient;
 import org.apache.hadoop.hbase.security.access.Permission;
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.util.SchemaUtil;
 import org.junit.BeforeClass;
 import org.junit.Test;
 
 import java.security.PrivilegedExceptionAction;
+import java.sql.Connection;
+import java.sql.SQLException;
+
+import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_TABLE;
+import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_SCHEMA_NAME;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 public class PermissionNSEnabledIT extends BasePermissionsIT {
 
@@ -67,4 +77,51 @@ public class PermissionNSEnabledIT extends BasePermissionsIT 
{
 revokeAll();
 }
 }
+
+@Test
+public void testConnectionCreationFailsWhenNoExecPermsOnSystemCatalog() 
throws Throwable {
+try {
+grantSystemTableAccess();
+superUser1.runAs(new PrivilegedExceptionAction<Void>() {
+@Override
+public Void run() throws Exception {
+TableName systemCatalogTableName =
+
TableName.valueOf(SchemaUtil.getPhysicalHBaseTableName(SYSTEM_SCHEMA_NAME,
+SYSTEM_CATALOG_TABLE, true).getString());
+try {
+// Revoke Exec permissions for SYSTEM CATALOG for the 
unprivileged user
+AccessControlClient
+.revoke(getUtility().getConnection(), 
systemCatalogTableName,
+unprivilegedUser.getShortName(), null, 
null, Permission.Action.EXEC);
+} catch (Throwable t) {
+if (t instanceof Exception) {
+throw (Exception) t;
+} else {
+throw new Exception(t);
+}
+}
+return null;
+}
+});
+
+unprivilegedUser.runAs(new PrivilegedExceptionAction<Void>() {
+@Override
+public Void run() throws Exception {
+try (Connection ignored = getConnection()) {
+// We expect this to throw a wrapped 
AccessDeniedException.
+fail("Should have failed with a wrapped 
AccessDeniedException");
+} catch (Throwable ex) {
+assertTrue("Should not get an incompatible jars 
exception",
+ex instanceof SQLException && 
((SQLException)ex).getErrorCode() !=
+
SQLExceptionCode.INCOMPATIBLE_CLIENT_SERVER_JAR.getErrorCode());
+assertTrue("Expected a wrapped AccessDeniedException",
+ex.getCause() instanceof 
AccessDeniedException);
+}
+return null;
+}
+});
+} finally {
+revokeAll();
+}
+}
 }
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index bd2b975..3a6 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ 
b/phoenix-core/

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5374: Incorrect exception thrown in some cases when client does not have Exec permissions on SYSTEM:CATALOG

2019-06-26 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 2e45dce  PHOENIX-5374: Incorrect exception thrown in some cases when 
client does not have Exec permissions on SYSTEM:CATALOG
2e45dce is described below

commit 2e45dce56e3d9236b537ac3bb11f303804e496fb
Author: Chinmay Kulkarni 
AuthorDate: Tue Jun 25 22:36:23 2019 -0700

PHOENIX-5374: Incorrect exception thrown in some cases when client does not 
have Exec permissions on SYSTEM:CATALOG
---
 .../phoenix/end2end/PermissionNSEnabledIT.java | 57 ++
 .../phoenix/query/ConnectionQueryServicesImpl.java | 10 ++--
 2 files changed, 64 insertions(+), 3 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/PermissionNSEnabledIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/PermissionNSEnabledIT.java
index 22fc297..30f3a08 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/PermissionNSEnabledIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/PermissionNSEnabledIT.java
@@ -17,13 +17,23 @@
  */
 package org.apache.phoenix.end2end;
 
+import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.security.AccessDeniedException;
 import org.apache.hadoop.hbase.security.access.AccessControlClient;
 import org.apache.hadoop.hbase.security.access.Permission;
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.util.SchemaUtil;
 import org.junit.BeforeClass;
 import org.junit.Test;
 
 import java.security.PrivilegedExceptionAction;
+import java.sql.Connection;
+import java.sql.SQLException;
+
+import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_TABLE;
+import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_SCHEMA_NAME;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 public class PermissionNSEnabledIT extends BasePermissionsIT {
 
@@ -67,4 +77,51 @@ public class PermissionNSEnabledIT extends BasePermissionsIT 
{
 revokeAll();
 }
 }
+
+@Test
+public void testConnectionCreationFailsWhenNoExecPermsOnSystemCatalog() 
throws Throwable {
+try {
+grantSystemTableAccess();
+superUser1.runAs(new PrivilegedExceptionAction<Void>() {
+@Override
+public Void run() throws Exception {
+TableName systemCatalogTableName =
+
TableName.valueOf(SchemaUtil.getPhysicalHBaseTableName(SYSTEM_SCHEMA_NAME,
+SYSTEM_CATALOG_TABLE, true).getString());
+try {
+// Revoke Exec permissions for SYSTEM CATALOG for the 
unprivileged user
+AccessControlClient
+.revoke(getUtility().getConnection(), 
systemCatalogTableName,
+unprivilegedUser.getShortName(), null, 
null, Permission.Action.EXEC);
+} catch (Throwable t) {
+if (t instanceof Exception) {
+throw (Exception) t;
+} else {
+throw new Exception(t);
+}
+}
+return null;
+}
+});
+
+unprivilegedUser.runAs(new PrivilegedExceptionAction<Void>() {
+@Override
+public Void run() throws Exception {
+try (Connection ignored = getConnection()) {
+// We expect this to throw a wrapped 
AccessDeniedException.
+fail("Should have failed with a wrapped 
AccessDeniedException");
+} catch (Throwable ex) {
+assertTrue("Should not get an incompatible jars 
exception",
+ex instanceof SQLException && 
((SQLException)ex).getErrorCode() !=
+
SQLExceptionCode.INCOMPATIBLE_CLIENT_SERVER_JAR.getErrorCode());
+assertTrue("Expected a wrapped AccessDeniedException",
+ex.getCause() instanceof 
AccessDeniedException);
+}
+return null;
+}
+});
+} finally {
+revokeAll();
+}
+}
 }
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index bd2b975..3a6 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ 
b/phoenix-core/

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5374: Incorrect exception thrown in some cases when client does not have Exec permissions on SYSTEM:CATALOG

2019-06-26 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 2d8096f  PHOENIX-5374: Incorrect exception thrown in some cases when 
client does not have Exec permissions on SYSTEM:CATALOG
2d8096f is described below

commit 2d8096ffb30d6cd0fbc8c6355d1c1617e8f0f9ac
Author: Chinmay Kulkarni 
AuthorDate: Tue Jun 25 22:36:23 2019 -0700

PHOENIX-5374: Incorrect exception thrown in some cases when client does not 
have Exec permissions on SYSTEM:CATALOG
---
 .../phoenix/end2end/PermissionNSEnabledIT.java | 57 ++
 .../phoenix/query/ConnectionQueryServicesImpl.java | 10 ++--
 2 files changed, 64 insertions(+), 3 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/PermissionNSEnabledIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/PermissionNSEnabledIT.java
index 22fc297..30f3a08 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/PermissionNSEnabledIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/PermissionNSEnabledIT.java
@@ -17,13 +17,23 @@
  */
 package org.apache.phoenix.end2end;
 
+import org.apache.hadoop.hbase.TableName;
 import org.apache.hadoop.hbase.security.AccessDeniedException;
 import org.apache.hadoop.hbase.security.access.AccessControlClient;
 import org.apache.hadoop.hbase.security.access.Permission;
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.util.SchemaUtil;
 import org.junit.BeforeClass;
 import org.junit.Test;
 
 import java.security.PrivilegedExceptionAction;
+import java.sql.Connection;
+import java.sql.SQLException;
+
+import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_TABLE;
+import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_SCHEMA_NAME;
+import static org.junit.Assert.assertTrue;
+import static org.junit.Assert.fail;
 
 public class PermissionNSEnabledIT extends BasePermissionsIT {
 
@@ -67,4 +77,51 @@ public class PermissionNSEnabledIT extends BasePermissionsIT 
{
 revokeAll();
 }
 }
+
+@Test
+public void testConnectionCreationFailsWhenNoExecPermsOnSystemCatalog() 
throws Throwable {
+try {
+grantSystemTableAccess();
+superUser1.runAs(new PrivilegedExceptionAction<Void>() {
+@Override
+public Void run() throws Exception {
+TableName systemCatalogTableName =
+
TableName.valueOf(SchemaUtil.getPhysicalHBaseTableName(SYSTEM_SCHEMA_NAME,
+SYSTEM_CATALOG_TABLE, true).getString());
+try {
+// Revoke Exec permissions for SYSTEM CATALOG for the 
unprivileged user
+AccessControlClient
+.revoke(getUtility().getConnection(), 
systemCatalogTableName,
+unprivilegedUser.getShortName(), null, 
null, Permission.Action.EXEC);
+} catch (Throwable t) {
+if (t instanceof Exception) {
+throw (Exception) t;
+} else {
+throw new Exception(t);
+}
+}
+return null;
+}
+});
+
+unprivilegedUser.runAs(new PrivilegedExceptionAction<Void>() {
+@Override
+public Void run() throws Exception {
+try (Connection ignored = getConnection()) {
+// We expect this to throw a wrapped 
AccessDeniedException.
+fail("Should have failed with a wrapped 
AccessDeniedException");
+} catch (Throwable ex) {
+assertTrue("Should not get an incompatible jars 
exception",
+ex instanceof SQLException && 
((SQLException)ex).getErrorCode() !=
+
SQLExceptionCode.INCOMPATIBLE_CLIENT_SERVER_JAR.getErrorCode());
+assertTrue("Expected a wrapped AccessDeniedException",
+ex.getCause() instanceof 
AccessDeniedException);
+}
+return null;
+}
+});
+} finally {
+revokeAll();
+}
+}
 }
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index bd2b975..3a6 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ 
b/phoenix-core/

[phoenix] branch master updated: PHOENIX-5386: Disallow creating views on top of SYSTEM tables

2019-07-08 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 5425298  PHOENIX-5386: Disallow creating views on top of SYSTEM tables
5425298 is described below

commit 5425298e66e231bed55b700c4e5724c330944a6d
Author: Chinmay Kulkarni 
AuthorDate: Sat Jul 6 01:00:36 2019 -0700

PHOENIX-5386: Disallow creating views on top of SYSTEM tables
---
 .../it/java/org/apache/phoenix/end2end/ViewIT.java   | 20 +---
 .../apache/phoenix/compile/CreateTableCompiler.java  |  6 ++
 .../apache/phoenix/exception/SQLExceptionCode.java   |  3 +++
 3 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
index b9ef3ac..a65440b 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
@@ -55,10 +55,7 @@ import java.util.concurrent.TimeUnit;
 
 import org.apache.curator.shaded.com.google.common.collect.Lists;
 import org.apache.hadoop.hbase.DoNotRetryIOException;
-import org.apache.hadoop.hbase.HBaseTestingUtility;
-import org.apache.hadoop.hbase.MiniHBaseCluster;
 import org.apache.hadoop.hbase.TableName;
-import org.apache.hadoop.hbase.client.Admin;
 import org.apache.hadoop.hbase.client.Scan;
 import org.apache.hadoop.hbase.coprocessor.ObserverContext;
 import org.apache.hadoop.hbase.util.Pair;
@@ -68,6 +65,7 @@ import 
org.apache.phoenix.coprocessor.MetaDataEndpointObserver;
 import org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost;
 import 
org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.PhoenixMetaDataControllerEnvironment;
 import org.apache.phoenix.exception.PhoenixIOException;
+import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixStatement;
 import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.query.QueryServices;
@@ -969,6 +967,22 @@ public class ViewIT extends SplitSystemCatalogIT {
 }
 }
 
+@Test
+public void testDisallowCreatingViewsOnSystemTable() throws SQLException {
+try (Connection conn = DriverManager.getConnection(getUrl())) {
+String viewDDL = "CREATE VIEW " + generateUniqueName() + " AS 
SELECT * FROM " +
+"SYSTEM.CATALOG";
+try {
+conn.createStatement().execute(viewDDL);
+fail("Should have thrown an exception");
+} catch (SQLException sqlE) {
+assertEquals("Expected a different Error code",
+
SQLExceptionCode.CANNOT_CREATE_VIEWS_ON_SYSTEM_TABLES.getErrorCode(),
+sqlE.getErrorCode());
+}
+}
+}
+
private class CreateViewRunnable implements Callable<Exception> {
 private final String fullTableName;
 private final String fullViewName;
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java
index 6329467..776859e 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java
@@ -125,6 +125,12 @@ public class CreateTableCompiler {
 // Used to track column references in a view
 ExpressionCompiler expressionCompiler = new 
ColumnTrackingExpressionCompiler(context, isViewColumnReferencedToBe);
 parentToBe = tableRef.getTable();
+// Disallow creating views on top of SYSTEM tables. See 
PHOENIX-5386
+if (parentToBe.getType() == PTableType.SYSTEM) {
+throw new SQLExceptionInfo
+
.Builder(SQLExceptionCode.CANNOT_CREATE_VIEWS_ON_SYSTEM_TABLES)
+.build().buildException();
+}
 viewTypeToBe = parentToBe.getViewType() == ViewType.MAPPED ? 
ViewType.MAPPED : ViewType.UPDATABLE;
 if (whereNode == null) {
 viewStatementToBe = parentToBe.getViewStatement();
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java 
b/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java
index b773649..fa9625c 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java
@@ -35,6 +35,7 @@ import 
org.apache.phoenix.schema.ConcurrentTableMutationException;
 import org.apache.phoenix.schema.FunctionAlreadyExistsException;
 import org.apache.phoenix.schema.FunctionNotFoundException;
 impor
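
After this change, a CREATE VIEW over any SYSTEM table is rejected at compile time with the new error code. A sketch of the client-visible behavior, assuming a placeholder JDBC URL and view name:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    final class SystemViewGuardDemo {
        // Attempts to create a view over SYSTEM.CATALOG; per the
        // CreateTableCompiler guard in the diff above, this now fails with
        // CANNOT_CREATE_VIEWS_ON_SYSTEM_TABLES.
        static void tryCreateViewOnSystemTable(String jdbcUrl) {
            try (Connection conn = DriverManager.getConnection(jdbcUrl)) {
                conn.createStatement().execute(
                        "CREATE VIEW MY_VIEW AS SELECT * FROM SYSTEM.CATALOG");
            } catch (SQLException e) {
                // Expected: the new error code from SQLExceptionCode.
                System.out.println("Rejected with error code " + e.getErrorCode());
            }
        }
    }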

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5386: Disallow creating views on top of SYSTEM tables

2019-07-08 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new efd6a52  PHOENIX-5386: Disallow creating views on top of SYSTEM tables
efd6a52 is described below

commit efd6a52a4fc049b5b964be4dac6d6a1eb015de49
Author: Chinmay Kulkarni 
AuthorDate: Mon Jul 8 13:44:15 2019 -0700

PHOENIX-5386: Disallow creating views on top of SYSTEM tables
---
 .../src/it/java/org/apache/phoenix/end2end/ViewIT.java  | 17 +
 .../org/apache/phoenix/compile/CreateTableCompiler.java |  6 ++
 .../org/apache/phoenix/exception/SQLExceptionCode.java  |  3 +++
 3 files changed, 26 insertions(+)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
index f2f7834..9ab7bb3 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
@@ -63,6 +63,7 @@ import 
org.apache.phoenix.coprocessor.BaseMetaDataEndpointObserver;
 import org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost;
 import 
org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.PhoenixMetaDataControllerEnvironment;
 import org.apache.phoenix.exception.PhoenixIOException;
+import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixStatement;
 import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.query.QueryServices;
@@ -956,6 +957,22 @@ public class ViewIT extends SplitSystemCatalogIT {
 }
 }
 
+@Test
+public void testDisallowCreatingViewsOnSystemTable() throws SQLException {
+    try (Connection conn = DriverManager.getConnection(getUrl())) {
+        String viewDDL = "CREATE VIEW " + generateUniqueName() + " AS SELECT * FROM " +
+                "SYSTEM.CATALOG";
+        try {
+            conn.createStatement().execute(viewDDL);
+            fail("Should have thrown an exception");
+        } catch (SQLException sqlE) {
+            assertEquals("Expected a different Error code",
+                    SQLExceptionCode.CANNOT_CREATE_VIEWS_ON_SYSTEM_TABLES.getErrorCode(),
+                    sqlE.getErrorCode());
+        }
+    }
+}
+
 private class CreateViewRunnable implements Callable {
 private final String fullTableName;
 private final String fullViewName;
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java
index 39d8d18..e806d9c 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java
@@ -125,6 +125,12 @@ public class CreateTableCompiler {
 // Used to track column references in a view
 ExpressionCompiler expressionCompiler = new ColumnTrackingExpressionCompiler(context, isViewColumnReferencedToBe);
 parentToBe = tableRef.getTable();
+// Disallow creating views on top of SYSTEM tables. See PHOENIX-5386
+if (parentToBe.getType() == PTableType.SYSTEM) {
+    throw new SQLExceptionInfo
+            .Builder(SQLExceptionCode.CANNOT_CREATE_VIEWS_ON_SYSTEM_TABLES)
+            .build().buildException();
+}
 viewTypeToBe = parentToBe.getViewType() == ViewType.MAPPED ? ViewType.MAPPED : ViewType.UPDATABLE;
 if (whereNode == null) {
 viewStatementToBe = parentToBe.getViewStatement();
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java 
b/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java
index f88de6b..07e9a9d 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java
@@ -35,6 +35,7 @@ import 
org.apache.phoenix.schema.ConcurrentTableMutationException;
 import org.apache.phoenix.schema.FunctionAlreadyExistsException;
 import org.apache.phoenix.schema.FunctionNotFoundException;
 import org.apache.phoenix.schema.IndexNotFoundException;
+import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.ReadOnlyTableException;
 import org.apache.phoenix.schema.SchemaAlreadyExistsException;
 import org.apache.phoenix.schema.SchemaNotFoundException;
@@ -406,6 +407,8 @@ public enum SQLExceptionCode {
 INVALID_IMMUTABLE_STORAGE_SCHEME_AND_COLUMN_QUALIFIER_BYTES(1137, "XCL37", 
"If IMMUTABLE_STORAGE_SCHEME property is not set to ONE_CELL_PER_COLUMN 
COLUMN_ENCODED_
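
For readers following PHOENIX-5386 above, here is a minimal client-side sketch of how the new check surfaces. The JDBC URL and class name are placeholders and not part of the commit; on a patched server the caught exception carries the error code of the CANNOT_CREATE_VIEWS_ON_SYSTEM_TABLES entry added to SQLExceptionCode.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class CreateViewOnSystemTableDemo {
        public static void main(String[] args) throws SQLException {
            // Placeholder URL; point it at any cluster running this patch.
            try (Connection conn =
                    DriverManager.getConnection("jdbc:phoenix:localhost:2181")) {
                String ddl = "CREATE VIEW SYSCAT_VIEW AS SELECT * FROM SYSTEM.CATALOG";
                try {
                    conn.createStatement().execute(ddl);
                    System.out.println("View created (server predates PHOENIX-5386?)");
                } catch (SQLException e) {
                    // On a patched server this prints the error code of
                    // CANNOT_CREATE_VIEWS_ON_SYSTEM_TABLES.
                    System.out.println("Rejected, error code = " + e.getErrorCode());
                }
            }
        }
    }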

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5386: Disallow creating views on top of SYSTEM tables

2019-07-08 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new d031442  PHOENIX-5386: Disallow creating views on top of SYSTEM tables
d031442 is described below

commit d031442f4b9152916f238a35e686322de7ffd08e
Author: Chinmay Kulkarni 
AuthorDate: Mon Jul 8 13:44:15 2019 -0700

PHOENIX-5386: Disallow creating views on top of SYSTEM tables
---
 .../src/it/java/org/apache/phoenix/end2end/ViewIT.java  | 17 +
 .../org/apache/phoenix/compile/CreateTableCompiler.java |  6 ++
 .../org/apache/phoenix/exception/SQLExceptionCode.java  |  3 +++
 3 files changed, 26 insertions(+)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
index f2f7834..9ab7bb3 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
@@ -63,6 +63,7 @@ import 
org.apache.phoenix.coprocessor.BaseMetaDataEndpointObserver;
 import org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost;
 import 
org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.PhoenixMetaDataControllerEnvironment;
 import org.apache.phoenix.exception.PhoenixIOException;
+import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixStatement;
 import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.query.QueryServices;
@@ -956,6 +957,22 @@ public class ViewIT extends SplitSystemCatalogIT {
 }
 }
 
+@Test
+public void testDisallowCreatingViewsOnSystemTable() throws SQLException {
+    try (Connection conn = DriverManager.getConnection(getUrl())) {
+        String viewDDL = "CREATE VIEW " + generateUniqueName() + " AS SELECT * FROM " +
+                "SYSTEM.CATALOG";
+        try {
+            conn.createStatement().execute(viewDDL);
+            fail("Should have thrown an exception");
+        } catch (SQLException sqlE) {
+            assertEquals("Expected a different Error code",
+                    SQLExceptionCode.CANNOT_CREATE_VIEWS_ON_SYSTEM_TABLES.getErrorCode(),
+                    sqlE.getErrorCode());
+        }
+    }
+}
+
 private class CreateViewRunnable implements Callable {
 private final String fullTableName;
 private final String fullViewName;
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java
index 39d8d18..e806d9c 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java
@@ -125,6 +125,12 @@ public class CreateTableCompiler {
 // Used to track column references in a view
 ExpressionCompiler expressionCompiler = new ColumnTrackingExpressionCompiler(context, isViewColumnReferencedToBe);
 parentToBe = tableRef.getTable();
+// Disallow creating views on top of SYSTEM tables. See PHOENIX-5386
+if (parentToBe.getType() == PTableType.SYSTEM) {
+    throw new SQLExceptionInfo
+            .Builder(SQLExceptionCode.CANNOT_CREATE_VIEWS_ON_SYSTEM_TABLES)
+            .build().buildException();
+}
 viewTypeToBe = parentToBe.getViewType() == ViewType.MAPPED ? ViewType.MAPPED : ViewType.UPDATABLE;
 if (whereNode == null) {
 viewStatementToBe = parentToBe.getViewStatement();
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java 
b/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java
index 9a6985c..4c59fc6 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java
@@ -35,6 +35,7 @@ import 
org.apache.phoenix.schema.ConcurrentTableMutationException;
 import org.apache.phoenix.schema.FunctionAlreadyExistsException;
 import org.apache.phoenix.schema.FunctionNotFoundException;
 import org.apache.phoenix.schema.IndexNotFoundException;
+import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.ReadOnlyTableException;
 import org.apache.phoenix.schema.SchemaAlreadyExistsException;
 import org.apache.phoenix.schema.SchemaNotFoundException;
@@ -406,6 +407,8 @@ public enum SQLExceptionCode {
 INVALID_IMMUTABLE_STORAGE_SCHEME_AND_COLUMN_QUALIFIER_BYTES(1137, "XCL37", 
"If IMMUTABLE_STORAGE_SCHEME property is not set to ONE_CELL_PER_COLUMN 
COLUMN_ENCODED_

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5386: Disallow creating views on top of SYSTEM tables

2019-07-08 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 6ef6d2a  PHOENIX-5386: Disallow creating views on top of SYSTEM tables
6ef6d2a is described below

commit 6ef6d2a2f80e381dd7d1d7a2d93eca43df48931f
Author: Chinmay Kulkarni 
AuthorDate: Mon Jul 8 13:44:15 2019 -0700

PHOENIX-5386: Disallow creating views on top of SYSTEM tables
---
 .../src/it/java/org/apache/phoenix/end2end/ViewIT.java  | 17 +
 .../org/apache/phoenix/compile/CreateTableCompiler.java |  6 ++
 .../org/apache/phoenix/exception/SQLExceptionCode.java  |  3 +++
 3 files changed, 26 insertions(+)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
index f2f7834..9ab7bb3 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ViewIT.java
@@ -63,6 +63,7 @@ import 
org.apache.phoenix.coprocessor.BaseMetaDataEndpointObserver;
 import org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost;
 import 
org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.PhoenixMetaDataControllerEnvironment;
 import org.apache.phoenix.exception.PhoenixIOException;
+import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixStatement;
 import org.apache.phoenix.query.KeyRange;
 import org.apache.phoenix.query.QueryServices;
@@ -956,6 +957,22 @@ public class ViewIT extends SplitSystemCatalogIT {
 }
 }
 
+@Test
+public void testDisallowCreatingViewsOnSystemTable() throws SQLException {
+    try (Connection conn = DriverManager.getConnection(getUrl())) {
+        String viewDDL = "CREATE VIEW " + generateUniqueName() + " AS SELECT * FROM " +
+                "SYSTEM.CATALOG";
+        try {
+            conn.createStatement().execute(viewDDL);
+            fail("Should have thrown an exception");
+        } catch (SQLException sqlE) {
+            assertEquals("Expected a different Error code",
+                    SQLExceptionCode.CANNOT_CREATE_VIEWS_ON_SYSTEM_TABLES.getErrorCode(),
+                    sqlE.getErrorCode());
+        }
+    }
+}
+
 private class CreateViewRunnable implements Callable {
 private final String fullTableName;
 private final String fullViewName;
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java
index 39d8d18..e806d9c 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/CreateTableCompiler.java
@@ -125,6 +125,12 @@ public class CreateTableCompiler {
 // Used to track column references in a view
 ExpressionCompiler expressionCompiler = new ColumnTrackingExpressionCompiler(context, isViewColumnReferencedToBe);
 parentToBe = tableRef.getTable();
+// Disallow creating views on top of SYSTEM tables. See PHOENIX-5386
+if (parentToBe.getType() == PTableType.SYSTEM) {
+    throw new SQLExceptionInfo
+            .Builder(SQLExceptionCode.CANNOT_CREATE_VIEWS_ON_SYSTEM_TABLES)
+            .build().buildException();
+}
 viewTypeToBe = parentToBe.getViewType() == ViewType.MAPPED ? ViewType.MAPPED : ViewType.UPDATABLE;
 if (whereNode == null) {
 viewStatementToBe = parentToBe.getViewStatement();
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java 
b/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java
index 9a6985c..4c59fc6 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java
@@ -35,6 +35,7 @@ import 
org.apache.phoenix.schema.ConcurrentTableMutationException;
 import org.apache.phoenix.schema.FunctionAlreadyExistsException;
 import org.apache.phoenix.schema.FunctionNotFoundException;
 import org.apache.phoenix.schema.IndexNotFoundException;
+import org.apache.phoenix.schema.PTableType;
 import org.apache.phoenix.schema.ReadOnlyTableException;
 import org.apache.phoenix.schema.SchemaAlreadyExistsException;
 import org.apache.phoenix.schema.SchemaNotFoundException;
@@ -406,6 +407,8 @@ public enum SQLExceptionCode {
 INVALID_IMMUTABLE_STORAGE_SCHEME_AND_COLUMN_QUALIFIER_BYTES(1137, "XCL37", 
"If IMMUTABLE_STORAGE_SCHEME property is not set to ONE_CELL_PER_COLUMN 
COLUMN_ENCODED_

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5103: Can't create/drop table using 4.14 client against 4.15 server

2019-07-08 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 7e1300e  PHOENIX-5103: Can't create/drop table using 4.14 client 
against 4.15 server
7e1300e is described below

commit 7e1300e2bc2d6ff272429d014fb79955d80ac369
Author: Chinmay Kulkarni 
AuthorDate: Sat Jul 6 00:14:10 2019 -0700

PHOENIX-5103: Can't create/drop table using 4.14 client against 4.15 server
---
 .../phoenix/coprocessor/MetaDataEndpointImpl.java  |  12 +-
 .../phoenix/query/ConnectionQueryServicesImpl.java | 343 +
 .../java/org/apache/phoenix/util/UpgradeUtil.java  |  10 +-
 3 files changed, 230 insertions(+), 135 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 37f0c5e..a059b54 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -2075,7 +2075,9 @@ public class MetaDataEndpointImpl extends 
MetaDataProtocol implements Coprocesso
 table = loadTable(env, tableKey, cacheKey, clientTimeStamp, 
HConstants.LATEST_TIMESTAMP,
 clientVersion);
 } catch (ParentTableNotFoundException e) {
-dropChildViews(env, e.getParentTenantId(), e.getParentSchemaName(), e.getParentTableName());
+if (clientVersion >= MIN_SPLITTABLE_SYSTEM_CATALOG) {
+    dropChildViews(env, e.getParentTenantId(), e.getParentSchemaName(), e.getParentTableName());
+}
 }
 if (table != null) {
 if (table.getTimeStamp() < clientTimeStamp) {
@@ -2098,7 +2100,9 @@ public class MetaDataEndpointImpl extends 
MetaDataProtocol implements Coprocesso
 }
 
 // check if the table was dropped, but had child views that have not yet been cleaned up
-if (!Bytes.toString(schemaName).equals(QueryConstants.SYSTEM_SCHEMA_NAME)) {
+// We don't need to do this for older clients
+if (!Bytes.toString(schemaName).equals(QueryConstants.SYSTEM_SCHEMA_NAME) &&
+        clientVersion >= MIN_SPLITTABLE_SYSTEM_CATALOG) {
 dropChildViews(env, tenantIdBytes, schemaName, tableName);
 }
 
@@ -3443,7 +3447,7 @@ public class MetaDataEndpointImpl extends 
MetaDataProtocol implements Coprocesso
 getParentPhysicalTableName(table),type);
 
 List<Mutation> additionalTableMetadataMutations = Lists.newArrayListWithExpectedSize(2);
-if (type == PTableType.TABLE || type == PTableType.SYSTEM) {
+if (type == PTableType.TABLE) {
 TableViewFinderResult childViewsResult = new TableViewFinderResult();
 findAllChildViews(tenantId, table.getSchemaName().getBytes(), table.getTableName().getBytes(), childViewsResult);
 if (childViewsResult.hasLinks()) {
@@ -3833,7 +3837,7 @@ public class MetaDataEndpointImpl extends 
MetaDataProtocol implements Coprocesso
 
 List<Mutation> additionalTableMetaData = Lists.newArrayList();
 PTableType type = table.getType();
-if (type == PTableType.TABLE || type == PTableType.SYSTEM) {
+if (type == PTableType.TABLE) {
 TableViewFinderResult childViewsResult = new TableViewFinderResult();
 findAllChildViews(tenantId, table.getSchemaName().getBytes(), table.getTableName().getBytes(), childViewsResult);
 if (childViewsResult.hasLinks()) {
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index ad22ad5..04034ca 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -21,6 +21,7 @@ import static org.apache.hadoop.hbase.HColumnDescriptor.TTL;
 import static org.apache.hadoop.hbase.HColumnDescriptor.REPLICATION_SCOPE;
 import static org.apache.hadoop.hbase.HColumnDescriptor.KEEP_DELETED_CELLS;
 import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP;
+import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_15_0;
 import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.PHOEN
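
A sketch of the version-gating idiom PHOENIX-5103 applies in MetaDataEndpointImpl above: the server compares the caller's encoded client version against MIN_SPLITTABLE_SYSTEM_CATALOG and skips work that only 4.15+ clients expect. The real constant and version encoding live in MetaDataProtocol; the encode() scheme below is a made-up stand-in that only demonstrates the comparison.

    public class ClientVersionGateDemo {
        // Hypothetical stand-in for MetaDataProtocol.MIN_SPLITTABLE_SYSTEM_CATALOG.
        static final long MIN_SPLITTABLE_SYSTEM_CATALOG = encode(4, 15, 0);

        // Toy encoding: one long that orders like (major, minor, patch).
        static long encode(int major, int minor, int patch) {
            return major * 1_000_000L + minor * 1_000L + patch;
        }

        // Mirrors the gate added above: only clients that understand the
        // splittable SYSTEM.CATALOG trigger the child-view cleanup.
        static void maybeDropChildViews(long clientVersion) {
            if (clientVersion >= MIN_SPLITTABLE_SYSTEM_CATALOG) {
                System.out.println("4.15+ client: running dropChildViews()");
            } else {
                System.out.println("pre-4.15 client: skipping dropChildViews()");
            }
        }

        public static void main(String[] args) {
            maybeDropChildViews(encode(4, 14, 3)); // skipped
            maybeDropChildViews(encode(4, 15, 0)); // runs
        }
    }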

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5103: Can't create/drop table using 4.14 client against 4.15 server

2019-07-08 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 4992d8f  PHOENIX-5103: Can't create/drop table using 4.14 client 
against 4.15 server
4992d8f is described below

commit 4992d8fd98fe2e0afe3155100170c855712852d8
Author: Chinmay Kulkarni 
AuthorDate: Sat Jul 6 00:14:10 2019 -0700

PHOENIX-5103: Can't create/drop table using 4.14 client against 4.15 server
---
 .../phoenix/coprocessor/MetaDataEndpointImpl.java  |  12 +-
 .../phoenix/query/ConnectionQueryServicesImpl.java | 343 +
 .../java/org/apache/phoenix/util/UpgradeUtil.java  |  10 +-
 3 files changed, 230 insertions(+), 135 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 37f0c5e..a059b54 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -2075,7 +2075,9 @@ public class MetaDataEndpointImpl extends 
MetaDataProtocol implements Coprocesso
 table = loadTable(env, tableKey, cacheKey, clientTimeStamp, 
HConstants.LATEST_TIMESTAMP,
 clientVersion);
 } catch (ParentTableNotFoundException e) {
-dropChildViews(env, e.getParentTenantId(), e.getParentSchemaName(), e.getParentTableName());
+if (clientVersion >= MIN_SPLITTABLE_SYSTEM_CATALOG) {
+    dropChildViews(env, e.getParentTenantId(), e.getParentSchemaName(), e.getParentTableName());
+}
 }
 if (table != null) {
 if (table.getTimeStamp() < clientTimeStamp) {
@@ -2098,7 +2100,9 @@ public class MetaDataEndpointImpl extends 
MetaDataProtocol implements Coprocesso
 }
 
 // check if the table was dropped, but had child views that have not yet been cleaned up
-if (!Bytes.toString(schemaName).equals(QueryConstants.SYSTEM_SCHEMA_NAME)) {
+// We don't need to do this for older clients
+if (!Bytes.toString(schemaName).equals(QueryConstants.SYSTEM_SCHEMA_NAME) &&
+        clientVersion >= MIN_SPLITTABLE_SYSTEM_CATALOG) {
 dropChildViews(env, tenantIdBytes, schemaName, tableName);
 }
 
@@ -3443,7 +3447,7 @@ public class MetaDataEndpointImpl extends 
MetaDataProtocol implements Coprocesso
 getParentPhysicalTableName(table),type);
 
 List<Mutation> additionalTableMetadataMutations = Lists.newArrayListWithExpectedSize(2);
-if (type == PTableType.TABLE || type == PTableType.SYSTEM) {
+if (type == PTableType.TABLE) {
 TableViewFinderResult childViewsResult = new TableViewFinderResult();
 findAllChildViews(tenantId, table.getSchemaName().getBytes(), table.getTableName().getBytes(), childViewsResult);
 if (childViewsResult.hasLinks()) {
@@ -3833,7 +3837,7 @@ public class MetaDataEndpointImpl extends 
MetaDataProtocol implements Coprocesso
 
 List<Mutation> additionalTableMetaData = Lists.newArrayList();
 PTableType type = table.getType();
-if (type == PTableType.TABLE || type == PTableType.SYSTEM) {
+if (type == PTableType.TABLE) {
 TableViewFinderResult childViewsResult = new TableViewFinderResult();
 findAllChildViews(tenantId, table.getSchemaName().getBytes(), table.getTableName().getBytes(), childViewsResult);
 if (childViewsResult.hasLinks()) {
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index ad22ad5..04034ca 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -21,6 +21,7 @@ import static org.apache.hadoop.hbase.HColumnDescriptor.TTL;
 import static org.apache.hadoop.hbase.HColumnDescriptor.REPLICATION_SCOPE;
 import static org.apache.hadoop.hbase.HColumnDescriptor.KEEP_DELETED_CELLS;
 import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP;
+import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_15_0;
 import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.PHOEN

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5103: Can't create/drop table using 4.14 client against 4.15 server

2019-07-08 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 723c45d  PHOENIX-5103: Can't create/drop table using 4.14 client 
against 4.15 server
723c45d is described below

commit 723c45dd2eb4d98115901936fe2ca5ca4a3be8cc
Author: Chinmay Kulkarni 
AuthorDate: Sat Jul 6 00:14:10 2019 -0700

PHOENIX-5103: Can't create/drop table using 4.14 client against 4.15 server
---
 .../phoenix/coprocessor/MetaDataEndpointImpl.java  |  12 +-
 .../phoenix/query/ConnectionQueryServicesImpl.java | 343 +
 .../java/org/apache/phoenix/util/UpgradeUtil.java  |  10 +-
 3 files changed, 230 insertions(+), 135 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 37f0c5e..a059b54 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -2075,7 +2075,9 @@ public class MetaDataEndpointImpl extends 
MetaDataProtocol implements Coprocesso
 table = loadTable(env, tableKey, cacheKey, clientTimeStamp, 
HConstants.LATEST_TIMESTAMP,
 clientVersion);
 } catch (ParentTableNotFoundException e) {
-dropChildViews(env, e.getParentTenantId(), e.getParentSchemaName(), e.getParentTableName());
+if (clientVersion >= MIN_SPLITTABLE_SYSTEM_CATALOG) {
+    dropChildViews(env, e.getParentTenantId(), e.getParentSchemaName(), e.getParentTableName());
+}
 }
 if (table != null) {
 if (table.getTimeStamp() < clientTimeStamp) {
@@ -2098,7 +2100,9 @@ public class MetaDataEndpointImpl extends 
MetaDataProtocol implements Coprocesso
 }
 
 // check if the table was dropped, but had child views that have not yet been cleaned up
-if (!Bytes.toString(schemaName).equals(QueryConstants.SYSTEM_SCHEMA_NAME)) {
+// We don't need to do this for older clients
+if (!Bytes.toString(schemaName).equals(QueryConstants.SYSTEM_SCHEMA_NAME) &&
+        clientVersion >= MIN_SPLITTABLE_SYSTEM_CATALOG) {
 dropChildViews(env, tenantIdBytes, schemaName, tableName);
 }
 
@@ -3443,7 +3447,7 @@ public class MetaDataEndpointImpl extends 
MetaDataProtocol implements Coprocesso
 getParentPhysicalTableName(table),type);
 
 List<Mutation> additionalTableMetadataMutations = Lists.newArrayListWithExpectedSize(2);
-if (type == PTableType.TABLE || type == PTableType.SYSTEM) {
+if (type == PTableType.TABLE) {
 TableViewFinderResult childViewsResult = new TableViewFinderResult();
 findAllChildViews(tenantId, table.getSchemaName().getBytes(), table.getTableName().getBytes(), childViewsResult);
 if (childViewsResult.hasLinks()) {
@@ -3833,7 +3837,7 @@ public class MetaDataEndpointImpl extends 
MetaDataProtocol implements Coprocesso
 
 List<Mutation> additionalTableMetaData = Lists.newArrayList();
 PTableType type = table.getType();
-if (type == PTableType.TABLE || type == PTableType.SYSTEM) {
+if (type == PTableType.TABLE) {
 TableViewFinderResult childViewsResult = new TableViewFinderResult();
 findAllChildViews(tenantId, table.getSchemaName().getBytes(), table.getTableName().getBytes(), childViewsResult);
 if (childViewsResult.hasLinks()) {
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index ad22ad5..04034ca 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -21,6 +21,7 @@ import static org.apache.hadoop.hbase.HColumnDescriptor.TTL;
 import static org.apache.hadoop.hbase.HColumnDescriptor.REPLICATION_SCOPE;
 import static org.apache.hadoop.hbase.HColumnDescriptor.KEEP_DELETED_CELLS;
 import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP;
+import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_15_0;
 import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.PHOEN

[phoenix] branch master updated: PHOENIX-5103: Can't create/drop table using 4.14 client against 4.15 server

2019-07-08 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 3fea4c3  PHOENIX-5103: Can't create/drop table using 4.14 client 
against 4.15 server
3fea4c3 is described below

commit 3fea4c3ebc69edaff3de19db0fcdb392a90360f4
Author: Chinmay Kulkarni 
AuthorDate: Sat Jul 6 00:14:10 2019 -0700

PHOENIX-5103: Can't create/drop table using 4.14 client against 4.15 server
---
 .../phoenix/coprocessor/MetaDataEndpointImpl.java  |  12 +-
 .../phoenix/query/ConnectionQueryServicesImpl.java | 344 +
 .../java/org/apache/phoenix/util/UpgradeUtil.java  |  10 +-
 3 files changed, 231 insertions(+), 135 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index cf8217a..68cdcfe 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -2084,7 +2084,9 @@ public class MetaDataEndpointImpl extends 
MetaDataProtocol implements RegionCopr
 table = loadTable(env, tableKey, cacheKey, clientTimeStamp, 
HConstants.LATEST_TIMESTAMP,
 clientVersion);
 } catch (ParentTableNotFoundException e) {
-dropChildViews(env, e.getParentTenantId(), e.getParentSchemaName(), e.getParentTableName());
+if (clientVersion >= MIN_SPLITTABLE_SYSTEM_CATALOG) {
+    dropChildViews(env, e.getParentTenantId(), e.getParentSchemaName(), e.getParentTableName());
+}
 }
 if (table != null) {
 if (table.getTimeStamp() < clientTimeStamp) {
@@ -2107,7 +2109,9 @@ public class MetaDataEndpointImpl extends 
MetaDataProtocol implements RegionCopr
 }
 
 // check if the table was dropped, but had child views that have not yet been cleaned up
-if (!Bytes.toString(schemaName).equals(QueryConstants.SYSTEM_SCHEMA_NAME)) {
+// We don't need to do this for older clients
+if (!Bytes.toString(schemaName).equals(QueryConstants.SYSTEM_SCHEMA_NAME) &&
+        clientVersion >= MIN_SPLITTABLE_SYSTEM_CATALOG) {
 dropChildViews(env, tenantIdBytes, schemaName, tableName);
 }
 
@@ -3463,7 +3467,7 @@ public class MetaDataEndpointImpl extends 
MetaDataProtocol implements RegionCopr
 getParentPhysicalTableName(table),type);
 
 List<Mutation> additionalTableMetadataMutations = Lists.newArrayListWithExpectedSize(2);
-if (type == PTableType.TABLE || type == PTableType.SYSTEM) {
+if (type == PTableType.TABLE) {
 TableViewFinderResult childViewsResult = new TableViewFinderResult();
 findAllChildViews(tenantId, table.getSchemaName().getBytes(), table.getTableName().getBytes(), childViewsResult);
 if (childViewsResult.hasLinks()) {
@@ -3855,7 +3859,7 @@ public class MetaDataEndpointImpl extends 
MetaDataProtocol implements RegionCopr
 
 List<Mutation> additionalTableMetaData = Lists.newArrayList();
 PTableType type = table.getType();
-if (type == PTableType.TABLE || type == PTableType.SYSTEM) {
+if (type == PTableType.TABLE) {
 TableViewFinderResult childViewsResult = new TableViewFinderResult();
 findAllChildViews(tenantId, table.getSchemaName().getBytes(), table.getTableName().getBytes(), childViewsResult);
 if (childViewsResult.hasLinks()) {
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index a45448f..efbed64 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -22,6 +22,7 @@ import static 
org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder.TTL;
 import static 
org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder.REPLICATION_SCOPE;
 import static 
org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder.KEEP_DELETED_CELLS;
 import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP;
+import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_15_0;
 import static 
org.apache.phoenix.coprocessor.MetaDataProtocol.PHOEN

[phoenix] branch master updated: PHOENIX-5382 : Improved performance with Bulk operations over iterations

2019-07-09 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 3b31923  PHOENIX-5382 : Improved performance with Bulk operations over iterations
3b31923 is described below

commit 3b31923d4c698ee2a38f5b9db104892a194e2131
Author: Viraj Jasani 
AuthorDate: Sat Jun 29 23:41:22 2019 +0530

PHOENIX-5382 : Improved performance with Bulk operations over iterations

Signed-off-by: Chinmay Kulkarni 
---
 .../main/java/org/apache/phoenix/compile/FromCompiler.java|  5 +++--
 .../org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java  |  7 +++
 .../phoenix/mapreduce/index/PhoenixIndexImportMapper.java |  6 ++
 .../org/apache/phoenix/query/ConnectionQueryServicesImpl.java |  5 +++--
 .../main/java/org/apache/phoenix/util/CSVCommonsLoader.java   | 11 ++-
 .../src/main/java/org/apache/phoenix/util/Closeables.java |  5 ++---
 .../src/main/java/org/apache/phoenix/util/SQLCloseables.java  | 11 ++-
 7 files changed, 21 insertions(+), 29 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
index 9f7bc8e..943c0d4 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
@@ -734,10 +734,11 @@ public class FromCompiler {
 protected PTable addDynamicColumns(List<ColumnDef> dynColumns, PTable theTable)
         throws SQLException {
     if (!dynColumns.isEmpty()) {
-        List<PColumn> allcolumns = new ArrayList<PColumn>();
         List<PColumn> existingColumns = theTable.getColumns();
         // Need to skip the salting column, as it's handled in the PTable builder call below
-        allcolumns.addAll(theTable.getBucketNum() == null ? existingColumns : existingColumns.subList(1, existingColumns.size()));
+        List<PColumn> allcolumns = new ArrayList<>(
+                theTable.getBucketNum() == null ? existingColumns :
+                        existingColumns.subList(1, existingColumns.size()));
         // Position is still based on the list with the salting columns
         int position = existingColumns.size();
         PName defaultFamilyName = PNameFactory.newName(SchemaUtil.getEmptyColumnFamily(theTable));
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index 68cdcfe..7dca023 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -749,13 +749,12 @@ public class MetaDataEndpointImpl extends 
MetaDataProtocol implements RegionCopr
 findAncestorViewsOfIndex(tenantId, schemaName, tableName, 
viewFinderResult,
 table.isNamespaceMapped());
 }
-if (viewFinderResult.getLinks().isEmpty()) {
+List<TableInfo> tableViewInfoList = viewFinderResult.getLinks();
+if (tableViewInfoList.isEmpty()) {
     // no need to combine columns for local indexes on regular tables
     return table;
 }
-for (TableInfo viewInfo : viewFinderResult.getLinks()) {
-    ancestorList.add(viewInfo);
-}
+ancestorList.addAll(tableViewInfoList);
 List<PColumn> allColumns = Lists.newArrayList();
 List<PColumn> excludedColumns = Lists.newArrayList();
 // add my own columns first in reverse order
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java
index 567a642..74cc3dd 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java
@@ -140,13 +140,11 @@ public class PhoenixIndexImportMapper extends Mapper
 for (List<Cell> cellList : mutation.getFamilyCellMap().values()) {
     List<Cell> keyValueList = preUpdateProcessor.preUpsert(mutation.getRow(), cellList);
-    for (Cell keyValue : keyValueList) {
-        keyValues.add(keyValue);
-    }
+    keyValues.addAll(keyValueList);
 }
 }
 }
-Collections.sort(keyValues, pconn.getKeyValueBuilder().getKeyValueComparator());
+keyValues.sort(pconn.getKeyValueBuilder().getKeyValueComparator());
 for (Cell kv 
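
The pattern behind PHOENIX-5382 is the same in every hunk above: seed a collection in one bulk call instead of an element-by-element loop, and sort through the List interface instead of Collections.sort. A self-contained sketch of the idiom in plain Java; the names are illustrative and nothing here is Phoenix API.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class BulkOpsDemo {
        public static void main(String[] args) {
            List<String> source = Arrays.asList("b", "c", "a");

            // Old shape: new ArrayList<>(), add() in a loop, Collections.sort().
            // New shape, as in the patch:
            List<String> copy = new ArrayList<>(source); // replaces the add() loop
            copy.sort(String::compareTo);                // replaces Collections.sort()

            System.out.println(copy); // prints [a, b, c]
        }
    }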

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5382 : Improved performance with Bulk operations over iterations

2019-07-11 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 5bcb8c3  PHOENIX-5382 : Improved performance with Bulk operations over iterations
5bcb8c3 is described below

commit 5bcb8c3fea8080e81b558a0c2a90c9675877957c
Author: Viraj Jasani 
AuthorDate: Wed Jul 10 16:20:21 2019 +0530

PHOENIX-5382 : Improved performance with Bulk operations over iterations

Signed-off-by: Chinmay Kulkarni 
---
 .../main/java/org/apache/phoenix/compile/FromCompiler.java|  5 +++--
 .../org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java  |  7 +++
 .../phoenix/mapreduce/index/PhoenixIndexImportMapper.java |  6 ++
 .../org/apache/phoenix/query/ConnectionQueryServicesImpl.java |  5 +++--
 .../main/java/org/apache/phoenix/util/CSVCommonsLoader.java   | 11 ++-
 .../src/main/java/org/apache/phoenix/util/Closeables.java |  5 ++---
 .../src/main/java/org/apache/phoenix/util/SQLCloseables.java  | 11 ++-
 7 files changed, 21 insertions(+), 29 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
index 9ed206e..3bc15fd 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
@@ -734,10 +734,11 @@ public class FromCompiler {
 protected PTable addDynamicColumns(List<ColumnDef> dynColumns, PTable theTable)
         throws SQLException {
     if (!dynColumns.isEmpty()) {
-        List<PColumn> allcolumns = new ArrayList<PColumn>();
         List<PColumn> existingColumns = theTable.getColumns();
         // Need to skip the salting column, as it's handled in the PTable builder call below
-        allcolumns.addAll(theTable.getBucketNum() == null ? existingColumns : existingColumns.subList(1, existingColumns.size()));
+        List<PColumn> allcolumns = new ArrayList<>(
+                theTable.getBucketNum() == null ? existingColumns :
+                        existingColumns.subList(1, existingColumns.size()));
         // Position is still based on the list with the salting columns
         int position = existingColumns.size();
         PName defaultFamilyName = PNameFactory.newName(SchemaUtil.getEmptyColumnFamily(theTable));
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index a059b54..cc24511 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -744,13 +744,12 @@ public class MetaDataEndpointImpl extends 
MetaDataProtocol implements Coprocesso
 findAncestorViewsOfIndex(tenantId, schemaName, tableName, 
viewFinderResult,
 table.isNamespaceMapped());
 }
-if (viewFinderResult.getLinks().isEmpty()) {
+List<TableInfo> tableViewInfoList = viewFinderResult.getLinks();
+if (tableViewInfoList.isEmpty()) {
     // no need to combine columns for local indexes on regular tables
     return table;
 }
-for (TableInfo viewInfo : viewFinderResult.getLinks()) {
-    ancestorList.add(viewInfo);
-}
+ancestorList.addAll(tableViewInfoList);
 List<PColumn> allColumns = Lists.newArrayList();
 List<PColumn> excludedColumns = Lists.newArrayList();
 // add my own columns first in reverse order
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java
index b1a14b4..14ffe73 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java
@@ -140,13 +140,11 @@ public class PhoenixIndexImportMapper extends Mapper
 for (List<Cell> cellList : mutation.getFamilyCellMap().values()) {
     List<KeyValue> keyValueList = preUpdateProcessor.preUpsert(mutation.getRow(), KeyValueUtil.ensureKeyValues(cellList));
-    for (KeyValue keyValue : keyValueList) {
-        keyValues.add(keyValue);
-    }
+    keyValues.addAll(keyValueList);
 }
 }
 }
-Collections.sort(keyValues, pconn.getKeyValueBuilder().getKeyValueComparator());
+keyValues.sort(pconn.getKeyValueBuilder().getKeyValueComparator());

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5382 : Improved performance with Bulk operations over iterations

2019-07-11 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new dcae102  PHOENIX-5382 : Improved performance with Bulk operations over iterations
dcae102 is described below

commit dcae102aa56a009663ea0dcb9ba86f84052a46ab
Author: Viraj Jasani 
AuthorDate: Wed Jul 10 16:28:06 2019 +0530

PHOENIX-5382 : Improved performance with Bulk operations over iterations

Signed-off-by: Chinmay Kulkarni 
---
 .../main/java/org/apache/phoenix/compile/FromCompiler.java|  5 +++--
 .../org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java  |  7 +++
 .../phoenix/mapreduce/index/PhoenixIndexImportMapper.java |  6 ++
 .../org/apache/phoenix/query/ConnectionQueryServicesImpl.java |  5 +++--
 .../main/java/org/apache/phoenix/util/CSVCommonsLoader.java   | 11 ++-
 .../src/main/java/org/apache/phoenix/util/Closeables.java |  5 ++---
 .../src/main/java/org/apache/phoenix/util/SQLCloseables.java  | 11 ++-
 7 files changed, 21 insertions(+), 29 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
index 3e249ac..2ced6a6 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
@@ -738,10 +738,11 @@ public class FromCompiler {
 protected PTable addDynamicColumns(List<ColumnDef> dynColumns, PTable theTable)
         throws SQLException {
     if (!dynColumns.isEmpty()) {
-        List<PColumn> allcolumns = new ArrayList<PColumn>();
         List<PColumn> existingColumns = theTable.getColumns();
         // Need to skip the salting column, as it's handled in the PTable builder call below
-        allcolumns.addAll(theTable.getBucketNum() == null ? existingColumns : existingColumns.subList(1, existingColumns.size()));
+        List<PColumn> allcolumns = new ArrayList<>(
+                theTable.getBucketNum() == null ? existingColumns :
+                        existingColumns.subList(1, existingColumns.size()));
         // Position is still based on the list with the salting columns
         int position = existingColumns.size();
         PName defaultFamilyName = PNameFactory.newName(SchemaUtil.getEmptyColumnFamily(theTable));
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index a059b54..cc24511 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -744,13 +744,12 @@ public class MetaDataEndpointImpl extends 
MetaDataProtocol implements Coprocesso
 findAncestorViewsOfIndex(tenantId, schemaName, tableName, 
viewFinderResult,
 table.isNamespaceMapped());
 }
-if (viewFinderResult.getLinks().isEmpty()) {
+List<TableInfo> tableViewInfoList = viewFinderResult.getLinks();
+if (tableViewInfoList.isEmpty()) {
     // no need to combine columns for local indexes on regular tables
     return table;
 }
-for (TableInfo viewInfo : viewFinderResult.getLinks()) {
-    ancestorList.add(viewInfo);
-}
+ancestorList.addAll(tableViewInfoList);
 List<PColumn> allColumns = Lists.newArrayList();
 List<PColumn> excludedColumns = Lists.newArrayList();
 // add my own columns first in reverse order
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java
index b1a14b4..14ffe73 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java
@@ -140,13 +140,11 @@ public class PhoenixIndexImportMapper extends Mapper
 for (List<Cell> cellList : mutation.getFamilyCellMap().values()) {
     List<KeyValue> keyValueList = preUpdateProcessor.preUpsert(mutation.getRow(), KeyValueUtil.ensureKeyValues(cellList));
-    for (KeyValue keyValue : keyValueList) {
-        keyValues.add(keyValue);
-    }
+    keyValues.addAll(keyValueList);
 }
 }
 }
-Collections.sort(keyValues, pconn.getKeyValueBuilder().getKeyValueComparator());
+keyValues.sort(pconn.getKeyValueBuilder().getKeyValueComparator());

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5382 : Improved performance with Bulk operations over iterations

2019-07-11 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 4a6b48c  PHOENIX-5382 : Improved performance with Bulk operations over iterations
4a6b48c is described below

commit 4a6b48c2b46b60ecade32bad6823d77ff9ca8112
Author: Viraj Jasani 
AuthorDate: Wed Jul 10 16:34:33 2019 +0530

PHOENIX-5382 : Improved performance with Bulk operations over iterations

Signed-off-by: Chinmay Kulkarni 
---
 .../main/java/org/apache/phoenix/compile/FromCompiler.java|  5 +++--
 .../org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java  |  7 +++
 .../phoenix/mapreduce/index/PhoenixIndexImportMapper.java |  6 ++
 .../org/apache/phoenix/query/ConnectionQueryServicesImpl.java |  5 +++--
 .../main/java/org/apache/phoenix/util/CSVCommonsLoader.java   | 11 ++-
 .../src/main/java/org/apache/phoenix/util/Closeables.java |  5 ++---
 .../src/main/java/org/apache/phoenix/util/SQLCloseables.java  | 11 ++-
 7 files changed, 21 insertions(+), 29 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
index 9ed206e..3bc15fd 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/FromCompiler.java
@@ -734,10 +734,11 @@ public class FromCompiler {
 protected PTable addDynamicColumns(List<ColumnDef> dynColumns, PTable theTable)
         throws SQLException {
     if (!dynColumns.isEmpty()) {
-        List<PColumn> allcolumns = new ArrayList<PColumn>();
         List<PColumn> existingColumns = theTable.getColumns();
         // Need to skip the salting column, as it's handled in the PTable builder call below
-        allcolumns.addAll(theTable.getBucketNum() == null ? existingColumns : existingColumns.subList(1, existingColumns.size()));
+        List<PColumn> allcolumns = new ArrayList<>(
+                theTable.getBucketNum() == null ? existingColumns :
+                        existingColumns.subList(1, existingColumns.size()));
         // Position is still based on the list with the salting columns
         int position = existingColumns.size();
         PName defaultFamilyName = PNameFactory.newName(SchemaUtil.getEmptyColumnFamily(theTable));
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
index a059b54..cc24511 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java
@@ -744,13 +744,12 @@ public class MetaDataEndpointImpl extends 
MetaDataProtocol implements Coprocesso
 findAncestorViewsOfIndex(tenantId, schemaName, tableName, 
viewFinderResult,
 table.isNamespaceMapped());
 }
-if (viewFinderResult.getLinks().isEmpty()) {
+List<TableInfo> tableViewInfoList = viewFinderResult.getLinks();
+if (tableViewInfoList.isEmpty()) {
     // no need to combine columns for local indexes on regular tables
     return table;
 }
-for (TableInfo viewInfo : viewFinderResult.getLinks()) {
-    ancestorList.add(viewInfo);
-}
+ancestorList.addAll(tableViewInfoList);
 List<PColumn> allColumns = Lists.newArrayList();
 List<PColumn> excludedColumns = Lists.newArrayList();
 // add my own columns first in reverse order
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java
index b1a14b4..14ffe73 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java
@@ -140,13 +140,11 @@ public class PhoenixIndexImportMapper extends Mapper
 for (List<Cell> cellList : mutation.getFamilyCellMap().values()) {
     List<KeyValue> keyValueList = preUpdateProcessor.preUpsert(mutation.getRow(), KeyValueUtil.ensureKeyValues(cellList));
-    for (KeyValue keyValue : keyValueList) {
-        keyValues.add(keyValue);
-    }
+    keyValues.addAll(keyValueList);
 }
 }
 }
-Collections.sort(keyValues, pconn.getKeyValueBuilder().getKeyValueComparator());
+keyValues.sort(pconn.getKeyValueBuilder().getKeyValueComparator());

[phoenix] branch master updated: PHOENIX-5228 use slf4j for logging in phoenix project (addendum)

2019-07-11 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 02304e6  PHOENIX-5228 use slf4j for logging in phoenix project 
(addendum)
02304e6 is described below

commit 02304e6390bcba908af21da2dd124f188b9fc1e4
Author: Xinyi 
AuthorDate: Sun Jun 16 16:34:11 2019 -0700

PHOENIX-5228 use slf4j for logging in phoenix project (addendum)

Signed-off-by: Chinmay Kulkarni 
---
 .../phoenix/end2end/index/MutableIndexIT.java  |   4 +-
 .../hbase/ipc/PhoenixRpcSchedulerFactory.java  |   8 +-
 .../java/org/apache/phoenix/cache/GlobalCache.java |  11 +--
 .../apache/phoenix/cache/ServerCacheClient.java|  19 ++--
 .../org/apache/phoenix/cache/TenantCacheImpl.java  |   6 +-
 .../cache/aggcache/SpillableGroupByCache.java  |   8 +-
 .../org/apache/phoenix/compile/FromCompiler.java   |  11 ++-
 .../GroupedAggregateRegionObserver.java|  16 ++--
 .../phoenix/coprocessor/MetaDataEndpointImpl.java  |   6 +-
 .../coprocessor/MetaDataRegionObserver.java| 100 +++--
 .../coprocessor/PhoenixAccessController.java   |   6 +-
 .../phoenix/coprocessor/TaskRegionObserver.java|  24 ++---
 .../coprocessor/tasks/DropChildViewsTask.java  |   3 +-
 .../coprocessor/tasks/IndexRebuildTask.java|   3 +-
 .../org/apache/phoenix/execute/BaseQueryPlan.java  |   6 +-
 .../org/apache/phoenix/execute/HashJoinPlan.java   |   6 +-
 .../expression/function/CollationKeyFunction.java  |   6 +-
 .../org/apache/phoenix/hbase/index/Indexer.java|  18 ++--
 .../hbase/index/util/IndexManagementUtil.java  |   3 +-
 .../index/write/ParallelWriterIndexCommitter.java  |   1 -
 .../hbase/index/write/RecoveryIndexWriter.java |   3 +-
 .../phoenix/index/PhoenixIndexFailurePolicy.java   |  19 ++--
 .../apache/phoenix/jdbc/PhoenixEmbeddedDriver.java |   5 +-
 .../org/apache/phoenix/jdbc/PhoenixStatement.java  |  12 ++-
 .../apache/phoenix/log/QueryLoggerDisruptor.java   |   5 +-
 .../apache/phoenix/mapreduce/OrphanViewTool.java   |   3 +-
 .../phoenix/mapreduce/PhoenixRecordReader.java |   9 +-
 .../apache/phoenix/mapreduce/index/IndexTool.java  |   7 +-
 .../index/PhoenixIndexImportDirectReducer.java |   3 +-
 .../index/PhoenixIndexPartialBuildMapper.java  |   3 +-
 .../index/PhoenixServerBuildIndexMapper.java   |   4 -
 .../index/automation/PhoenixMRJobSubmitter.java|   3 +-
 .../monitoring/GlobalMetricRegistriesAdapter.java  |   6 +-
 .../phoenix/query/ConnectionQueryServicesImpl.java |  33 ---
 .../schema/stats/DefaultStatisticsCollector.java   |   3 +-
 .../phoenix/schema/stats/StatisticsScanner.java|  12 ++-
 .../java/org/apache/phoenix/trace/TraceReader.java |   4 +-
 .../phoenix/util/EquiDepthStreamHistogram.java |   3 +-
 38 files changed, 236 insertions(+), 166 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
index 43526a2..2f7b1c9 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexIT.java
@@ -50,12 +50,14 @@ import org.junit.Test;
 import org.junit.runner.RunWith;
 import org.junit.runners.Parameterized;
 import org.junit.runners.Parameterized.Parameters;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
 
 import com.google.common.primitives.Doubles;
 
 @RunWith(Parameterized.class)
 public class MutableIndexIT extends ParallelStatsDisabledIT {
-
+private static final Logger LOGGER = LoggerFactory.getLogger(MutableIndexIT.class);
 protected final boolean localIndex;
 private final String tableDDLOptions;
 
diff --git 
a/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcSchedulerFactory.java
 
b/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcSchedulerFactory.java
index fbec7b8..0d15b63 100644
--- 
a/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcSchedulerFactory.java
+++ 
b/phoenix-core/src/main/java/org/apache/hadoop/hbase/ipc/PhoenixRpcSchedulerFactory.java
@@ -26,8 +26,6 @@ import org.apache.phoenix.query.QueryServices;
 import org.apache.phoenix.query.QueryServicesOptions;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
-import org.slf4j.Marker;
-import org.slf4j.MarkerFactory;
 
 import com.google.common.base.Preconditions;
 
@@ -37,8 +35,8 @@ import com.google.common.base.Preconditions;
  */
 public class PhoenixRpcSchedulerFactory implements RpcSchedulerFactory {
 
-private static final Logger LOGGER = LoggerFactory.getLogger(PhoenixRpcSchedulerFactory.class);
-private static final Marker fatal = MarkerFactory.getMarker("FATAL");
+private static final Log
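
The conversion in this commit converges on the slf4j pattern sketched below. Parameterized {} logging defers message construction until the level is known to be enabled; the class and messages here are illustrative, not taken from the patch.

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class Slf4jPatternDemo {
        private static final Logger LOGGER =
                LoggerFactory.getLogger(Slf4jPatternDemo.class);

        public static void main(String[] args) {
            String table = "SYSTEM.CATALOG";
            // No message string is built unless INFO is enabled.
            LOGGER.info("Loaded table {}", table);
            // A trailing Throwable is logged with its stack trace.
            LOGGER.error("Failed to load table {}", table,
                    new RuntimeException("boom"));
        }
    }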

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5302: Different isNamespaceMappingEnabled for server / client causes TableNotFoundException

2019-07-18 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 8805111  PHOENIX-5302: Different isNamespaceMappingEnabled for server 
/ client causes TableNotFoundException
8805111 is described below

commit 8805111f1fd10372b738d4387dcbd78f5b47bc6d
Author: Chinmay Kulkarni 
AuthorDate: Tue Jul 16 16:24:30 2019 -0700

PHOENIX-5302: Different isNamespaceMappingEnabled for server / client 
causes TableNotFoundException
---
 .../SystemCatalogCreationOnConnectionIT.java   | 117 +
 .../phoenix/query/ConnectionQueryServicesImpl.java |  70 +---
 2 files changed, 109 insertions(+), 78 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
index 9f12d39..d42ea28 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
@@ -17,6 +17,7 @@
  */
 package org.apache.phoenix.end2end;
 
+import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_SCHEMA;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
@@ -26,6 +27,7 @@ import static 
org.apache.phoenix.query.BaseTest.generateUniqueName;
 import java.io.IOException;
 import java.sql.Connection;
 import java.sql.DriverManager;
+import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.HashMap;
@@ -39,6 +41,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseTestingUtility;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.NamespaceNotFoundException;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.phoenix.coprocessor.MetaDataProtocol;
 import org.apache.phoenix.exception.SQLExceptionCode;
@@ -89,7 +92,7 @@ public class SystemCatalogCreationOnConnectionIT {
 
 private static class PhoenixSysCatCreationServices extends 
ConnectionQueryServicesImpl {
 
-public PhoenixSysCatCreationServices(QueryServices services, PhoenixEmbeddedDriver.ConnectionInfo connectionInfo, Properties info) {
+PhoenixSysCatCreationServices(QueryServices services, PhoenixEmbeddedDriver.ConnectionInfo connectionInfo, Properties info) {
 super(services, connectionInfo, info);
 }
 
@@ -123,7 +126,7 @@ public class SystemCatalogCreationOnConnectionIT {
 private ConnectionQueryServices cqs;
 private final ReadOnlyProps overrideProps;
 
-public PhoenixSysCatCreationTestingDriver(ReadOnlyProps props) {
+PhoenixSysCatCreationTestingDriver(ReadOnlyProps props) {
 overrideProps = props;
 }
 
@@ -140,7 +143,7 @@ public class SystemCatalogCreationOnConnectionIT {
 // used ConnectionQueryServices instance. This is used only in cases 
where we need to test server-side
 // changes and don't care about client-side properties set from the 
init method.
 // Reset the Connection Query Services instance so we can create a new 
connection to the cluster
-public void resetCQS() {
+void resetCQS() {
 cqs = null;
 }
 }
@@ -180,7 +183,7 @@ public class SystemCatalogCreationOnConnectionIT {
 driver.getConnectionQueryServices(getJdbcUrl(), 
propsDoNotUpgradePropSet);
 hbaseTables = getHBaseTables();
 assertFalse(hbaseTables.contains(PHOENIX_SYSTEM_CATALOG) || 
hbaseTables.contains(PHOENIX_NAMESPACE_MAPPED_SYSTEM_CATALOG));
-assertTrue(hbaseTables.size() == 0);
+assertEquals(0, hbaseTables.size());
 assertEquals(1, countUpgradeAttempts);
 }
 
@@ -282,36 +285,25 @@ public class SystemCatalogCreationOnConnectionIT {
 assertEquals(0, countUpgradeAttempts);
 }
 
-// Conditions: server-side namespace mapping is enabled, the first 
connection to the server will create only SYSTEM.CATALOG,
-// the second connection has client-side namespace mapping disabled
-// Expected: Throw Inconsistent namespace mapping exception when you check 
client-server compatibility
-//
-// A third connection has client-side namespace mapping enabled
-// Expected: We will migrate SYSTEM.CATALOG to SYSTEM namespace and create 
all other SYSTEM:.* tables
+// Conditions: server-side namespace mapping is enabled, the first 
connection to the server will not create any
+// SYSTEM tables. The second connection has client-side namespace mapping 
enabled
+// Expected: We create SYSTEM:.* tables
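
A note for readers of this archived diff: the testing driver above caches a single
ConnectionQueryServices instance and exposes resetCQS() to drop it between scenarios.
A minimal sketch of that cache-and-reset pattern follows; every name in it is a
hypothetical stand-in, not Phoenix's real class.

import java.sql.SQLException;

public class CachingTestDriver {
    // Stand-in for ConnectionQueryServices: something expensive to initialize.
    static class ClusterServices {
        private final String url;
        ClusterServices(String url) { this.url = url; }
        void init() throws SQLException { /* connect, create SYSTEM tables, ... */ }
    }

    private ClusterServices cqs;  // cached across calls, like the driver's field

    // Lazily create the services on first use and reuse them afterwards, so
    // repeated connections in one test observe the same client-side state.
    synchronized ClusterServices getServices(String url) throws SQLException {
        if (cqs == null) {
            cqs = new ClusterServices(url);
            cqs.init();
        }
        return cqs;
    }

    // Drop the cached instance so the next connection re-runs client-side
    // initialization against the (possibly reconfigured) cluster.
    synchronized void resetCQS() {
        cqs = null;
    }

    public static void main(String[] args) throws SQLException {
        CachingTestDriver driver = new CachingTestDriver();
        ClusterServices first = driver.getServices("jdbc:phoenix:localhost");
        driver.resetCQS();
        ClusterServices second = driver.getServices("jdbc:phoenix:localhost");
        System.out.println(first != second);  // true: reset forced re-initialization
    }
}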

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5302: Different isNamespaceMappingEnabled for server / client causes TableNotFoundException

2019-07-18 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new d5becfa  PHOENIX-5302: Different isNamespaceMappingEnabled for server 
/ client causes TableNotFoundException
d5becfa is described below

commit d5becfa987a3b9ff2dc401fcfce6ae593bca42d9
Author: Chinmay Kulkarni 
AuthorDate: Tue Jul 16 16:24:30 2019 -0700

PHOENIX-5302: Different isNamespaceMappingEnabled for server / client 
causes TableNotFoundException
---
 .../SystemCatalogCreationOnConnectionIT.java   | 117 +
 .../phoenix/query/ConnectionQueryServicesImpl.java |  70 +---
 2 files changed, 109 insertions(+), 78 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
index 9f12d39..d42ea28 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
@@ -17,6 +17,7 @@
  */
 package org.apache.phoenix.end2end;
 
+import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_SCHEMA;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
@@ -26,6 +27,7 @@ import static 
org.apache.phoenix.query.BaseTest.generateUniqueName;
 import java.io.IOException;
 import java.sql.Connection;
 import java.sql.DriverManager;
+import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.HashMap;
@@ -39,6 +41,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseTestingUtility;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.NamespaceNotFoundException;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.phoenix.coprocessor.MetaDataProtocol;
 import org.apache.phoenix.exception.SQLExceptionCode;
@@ -89,7 +92,7 @@ public class SystemCatalogCreationOnConnectionIT {
 
 private static class PhoenixSysCatCreationServices extends 
ConnectionQueryServicesImpl {
 
-public PhoenixSysCatCreationServices(QueryServices services, 
PhoenixEmbeddedDriver.ConnectionInfo connectionInfo, Properties info) {
+PhoenixSysCatCreationServices(QueryServices services, 
PhoenixEmbeddedDriver.ConnectionInfo connectionInfo, Properties info) {
 super(services, connectionInfo, info);
 }
 
@@ -123,7 +126,7 @@ public class SystemCatalogCreationOnConnectionIT {
 private ConnectionQueryServices cqs;
 private final ReadOnlyProps overrideProps;
 
-public PhoenixSysCatCreationTestingDriver(ReadOnlyProps props) {
+PhoenixSysCatCreationTestingDriver(ReadOnlyProps props) {
 overrideProps = props;
 }
 
@@ -140,7 +143,7 @@ public class SystemCatalogCreationOnConnectionIT {
 // used ConnectionQueryServices instance. This is used only in cases 
where we need to test server-side
 // changes and don't care about client-side properties set from the 
init method.
 // Reset the Connection Query Services instance so we can create a new 
connection to the cluster
-public void resetCQS() {
+void resetCQS() {
 cqs = null;
 }
 }
@@ -180,7 +183,7 @@ public class SystemCatalogCreationOnConnectionIT {
 driver.getConnectionQueryServices(getJdbcUrl(), 
propsDoNotUpgradePropSet);
 hbaseTables = getHBaseTables();
 assertFalse(hbaseTables.contains(PHOENIX_SYSTEM_CATALOG) || 
hbaseTables.contains(PHOENIX_NAMESPACE_MAPPED_SYSTEM_CATALOG));
-assertTrue(hbaseTables.size() == 0);
+assertEquals(0, hbaseTables.size());
 assertEquals(1, countUpgradeAttempts);
 }
 
@@ -282,36 +285,25 @@ public class SystemCatalogCreationOnConnectionIT {
 assertEquals(0, countUpgradeAttempts);
 }
 
-// Conditions: server-side namespace mapping is enabled, the first 
connection to the server will create only SYSTEM.CATALOG,
-// the second connection has client-side namespace mapping disabled
-// Expected: Throw Inconsistent namespace mapping exception when you check 
client-server compatibility
-//
-// A third connection has client-side namespace mapping enabled
-// Expected: We will migrate SYSTEM.CATALOG to SYSTEM namespace and create 
all other SYSTEM:.* tables
+// Conditions: server-side namespace mapping is enabled, the first 
connection to the server will not create any
+// SYSTEM tables. The second connection has client-side namespace mapping 
enabled
+// Expected: We create SYSTEM:.* tables

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5302: Different isNamespaceMappingEnabled for server / client causes TableNotFoundException

2019-07-18 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 719d44e  PHOENIX-5302: Different isNamespaceMappingEnabled for server 
/ client causes TableNotFoundException
719d44e is described below

commit 719d44e6ff8834d70ebd48752db737acdcdd8cea
Author: Chinmay Kulkarni 
AuthorDate: Tue Jul 16 16:24:30 2019 -0700

PHOENIX-5302: Different isNamespaceMappingEnabled for server / client 
causes TableNotFoundException
---
 .../SystemCatalogCreationOnConnectionIT.java   | 117 +
 .../phoenix/query/ConnectionQueryServicesImpl.java |  70 +---
 2 files changed, 109 insertions(+), 78 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
index 9f12d39..d42ea28 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
@@ -17,6 +17,7 @@
  */
 package org.apache.phoenix.end2end;
 
+import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_SCHEMA;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
@@ -26,6 +27,7 @@ import static 
org.apache.phoenix.query.BaseTest.generateUniqueName;
 import java.io.IOException;
 import java.sql.Connection;
 import java.sql.DriverManager;
+import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.HashMap;
@@ -39,6 +41,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseTestingUtility;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.NamespaceNotFoundException;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.phoenix.coprocessor.MetaDataProtocol;
 import org.apache.phoenix.exception.SQLExceptionCode;
@@ -89,7 +92,7 @@ public class SystemCatalogCreationOnConnectionIT {
 
 private static class PhoenixSysCatCreationServices extends 
ConnectionQueryServicesImpl {
 
-public PhoenixSysCatCreationServices(QueryServices services, 
PhoenixEmbeddedDriver.ConnectionInfo connectionInfo, Properties info) {
+PhoenixSysCatCreationServices(QueryServices services, 
PhoenixEmbeddedDriver.ConnectionInfo connectionInfo, Properties info) {
 super(services, connectionInfo, info);
 }
 
@@ -123,7 +126,7 @@ public class SystemCatalogCreationOnConnectionIT {
 private ConnectionQueryServices cqs;
 private final ReadOnlyProps overrideProps;
 
-public PhoenixSysCatCreationTestingDriver(ReadOnlyProps props) {
+PhoenixSysCatCreationTestingDriver(ReadOnlyProps props) {
 overrideProps = props;
 }
 
@@ -140,7 +143,7 @@ public class SystemCatalogCreationOnConnectionIT {
 // used ConnectionQueryServices instance. This is used only in cases 
where we need to test server-side
 // changes and don't care about client-side properties set from the 
init method.
 // Reset the Connection Query Services instance so we can create a new 
connection to the cluster
-public void resetCQS() {
+void resetCQS() {
 cqs = null;
 }
 }
@@ -180,7 +183,7 @@ public class SystemCatalogCreationOnConnectionIT {
 driver.getConnectionQueryServices(getJdbcUrl(), 
propsDoNotUpgradePropSet);
 hbaseTables = getHBaseTables();
 assertFalse(hbaseTables.contains(PHOENIX_SYSTEM_CATALOG) || 
hbaseTables.contains(PHOENIX_NAMESPACE_MAPPED_SYSTEM_CATALOG));
-assertTrue(hbaseTables.size() == 0);
+assertEquals(0, hbaseTables.size());
 assertEquals(1, countUpgradeAttempts);
 }
 
@@ -282,36 +285,25 @@ public class SystemCatalogCreationOnConnectionIT {
 assertEquals(0, countUpgradeAttempts);
 }
 
-// Conditions: server-side namespace mapping is enabled, the first 
connection to the server will create only SYSTEM.CATALOG,
-// the second connection has client-side namespace mapping disabled
-// Expected: Throw Inconsistent namespace mapping exception when you check 
client-server compatibility
-//
-// A third connection has client-side namespace mapping enabled
-// Expected: We will migrate SYSTEM.CATALOG to SYSTEM namespace and create 
all other SYSTEM:.* tables
+// Conditions: server-side namespace mapping is enabled, the first 
connection to the server will not create any
+// SYSTEM tables. The second connection has client-side namespace mapping 
enabled
+// Expected: We create SYSTEM:.* t

[phoenix] branch master updated: PHOENIX-5302: Different isNamespaceMappingEnabled for server / client causes TableNotFoundException

2019-07-18 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 62a7927  PHOENIX-5302: Different isNamespaceMappingEnabled for server 
/ client causes TableNotFoundException
62a7927 is described below

commit 62a7927f0af57520abba56105a249f2f85b70ff0
Author: Chinmay Kulkarni 
AuthorDate: Thu Jul 18 12:36:51 2019 -0700

PHOENIX-5302: Different isNamespaceMappingEnabled for server / client 
causes TableNotFoundException
---
 .../SystemCatalogCreationOnConnectionIT.java   | 117 +
 .../phoenix/query/ConnectionQueryServicesImpl.java |  70 +---
 2 files changed, 109 insertions(+), 78 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
index 9ffd2d2..de047a3 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
@@ -17,6 +17,7 @@
  */
 package org.apache.phoenix.end2end;
 
+import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_SCHEMA;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
@@ -26,6 +27,7 @@ import static 
org.apache.phoenix.query.BaseTest.generateUniqueName;
 import java.io.IOException;
 import java.sql.Connection;
 import java.sql.DriverManager;
+import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.HashMap;
@@ -39,6 +41,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseTestingUtility;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.NamespaceNotFoundException;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.phoenix.coprocessor.MetaDataProtocol;
 import org.apache.phoenix.exception.SQLExceptionCode;
@@ -89,7 +92,7 @@ public class SystemCatalogCreationOnConnectionIT {
 
 private static class PhoenixSysCatCreationServices extends 
ConnectionQueryServicesImpl {
 
-public PhoenixSysCatCreationServices(QueryServices services, 
PhoenixEmbeddedDriver.ConnectionInfo connectionInfo, Properties info) {
+PhoenixSysCatCreationServices(QueryServices services, 
PhoenixEmbeddedDriver.ConnectionInfo connectionInfo, Properties info) {
 super(services, connectionInfo, info);
 }
 
@@ -123,7 +126,7 @@ public class SystemCatalogCreationOnConnectionIT {
 private ConnectionQueryServices cqs;
 private final ReadOnlyProps overrideProps;
 
-public PhoenixSysCatCreationTestingDriver(ReadOnlyProps props) {
+PhoenixSysCatCreationTestingDriver(ReadOnlyProps props) {
 overrideProps = props;
 }
 
@@ -140,7 +143,7 @@ public class SystemCatalogCreationOnConnectionIT {
 // used ConnectionQueryServices instance. This is used only in cases 
where we need to test server-side
 // changes and don't care about client-side properties set from the 
init method.
 // Reset the Connection Query Services instance so we can create a new 
connection to the cluster
-public void resetCQS() {
+void resetCQS() {
 cqs = null;
 }
 }
@@ -180,7 +183,7 @@ public class SystemCatalogCreationOnConnectionIT {
 driver.getConnectionQueryServices(getJdbcUrl(), 
propsDoNotUpgradePropSet);
 hbaseTables = getHBaseTables();
 assertFalse(hbaseTables.contains(PHOENIX_SYSTEM_CATALOG) || 
hbaseTables.contains(PHOENIX_NAMESPACE_MAPPED_SYSTEM_CATALOG));
-assertTrue(hbaseTables.size() == 0);
+assertEquals(0, hbaseTables.size());
 assertEquals(1, countUpgradeAttempts);
 }
 
@@ -282,36 +285,25 @@ public class SystemCatalogCreationOnConnectionIT {
 assertEquals(0, countUpgradeAttempts);
 }
 
-// Conditions: server-side namespace mapping is enabled, the first 
connection to the server will create only SYSTEM.CATALOG,
-// the second connection has client-side namespace mapping disabled
-// Expected: Throw Inconsistent namespace mapping exception when you check 
client-server compatibility
-//
-// A third connection has client-side namespace mapping enabled
-// Expected: We will migrate SYSTEM.CATALOG to SYSTEM namespace and create 
all other SYSTEM:.* tables
+// Conditions: server-side namespace mapping is enabled, the first 
connection to the server will not create any
+// SYSTEM tables. The second connection has client-side namespace mapping 
enabled
+// Expected: We create SYSTEM:.* tables
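
The rewritten comments above capture the crux of PHOENIX-5302: client and server must
agree on namespace mapping, because the same logical Phoenix table resolves to a
different physical HBase table depending on the setting. A simplified illustration
using HBase's TableName API; the helper below is a stand-in for Phoenix's SchemaUtil
logic, not the real method.

import org.apache.hadoop.hbase.TableName;

public final class NamespaceMappingSketch {
    // With namespace mapping on, schema SYSTEM / table CATALOG lives in the HBase
    // namespace SYSTEM as SYSTEM:CATALOG; with it off, the physical table is the
    // dot-separated name SYSTEM.CATALOG in HBase's default namespace.
    static TableName physicalName(String schema, String table, boolean namespaceMapped) {
        return namespaceMapped
                ? TableName.valueOf(schema, table)          // -> SYSTEM:CATALOG
                : TableName.valueOf(schema + "." + table);  // -> SYSTEM.CATALOG
    }

    public static void main(String[] args) {
        System.out.println(physicalName("SYSTEM", "CATALOG", true));   // SYSTEM:CATALOG
        System.out.println(physicalName("SYSTEM", "CATALOG", false));  // SYSTEM.CATALOG
    }
}

A client that builds one form while the server created the other looks up a table
that does not exist, hence the TableNotFoundException in the issue title.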
 

[phoenix] branch master updated: PHOENIX-5360 Cleanup anonymous inner classes in WhereOptimizer

2019-07-22 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new aafe9bb  PHOENIX-5360 Cleanup anonymous inner classes in WhereOptimizer
aafe9bb is described below

commit aafe9bb7d98c3c01f0689ad7e4e0a1ec50c4aa7b
Author: Xinyi 
AuthorDate: Wed Jun 19 14:45:40 2019 -0700

PHOENIX-5360 Cleanup anonymous inner classes in WhereOptimizer

Signed-off-by: Chinmay Kulkarni 
---
 .../org/apache/phoenix/compile/WhereOptimizer.java | 310 -
 1 file changed, 180 insertions(+), 130 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
index 0964d9d..9ca2056 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
@@ -29,6 +29,7 @@ import java.util.Map;
 import java.util.NoSuchElementException;
 import java.util.Set;
 
+import org.apache.hadoop.hbase.filter.CompareFilter;
 import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.util.Bytes;
@@ -656,47 +657,8 @@ public class WhereOptimizer {
 final List<Expression> extractNodes = 
 Collections.singletonList(node);
 final KeyPart childPart = slot.getKeyPart();
 final ImmutableBytesWritable ptr = context.getTempPtr();
-return new SingleKeySlot(new KeyPart() {
-
-@Override
-public KeyRange getKeyRange(CompareOp op, Expression rhs) {
-KeyRange range = childPart.getKeyRange(op, rhs);
-byte[] lower = range.getLowerRange();
-if (!range.lowerUnbound()) {
-ptr.set(lower);
-// Do the reverse translation so we can optimize out 
the coerce expression
-// For the actual type of the coerceBytes call, we use 
the node type instead of the rhs type, because
-// for IN, the rhs type will be VARBINARY and no 
coerce will be done in that case (and we need it to
-// be done).
-node.getChild().getDataType().coerceBytes(ptr, 
node.getDataType(), rhs.getSortOrder(), SortOrder.ASC);
-lower = ByteUtil.copyKeyBytesIfNecessary(ptr);
-}
-byte[] upper = range.getUpperRange();
-if (!range.upperUnbound()) {
-ptr.set(upper);
-// Do the reverse translation so we can optimize out 
the coerce expression
-node.getChild().getDataType().coerceBytes(ptr, 
node.getDataType(), rhs.getSortOrder(), SortOrder.ASC);
-upper = ByteUtil.copyKeyBytesIfNecessary(ptr);
-}
-range = KeyRange.getKeyRange(lower, 
range.isLowerInclusive(), upper, range.isUpperInclusive());
-return range;
-}
-
-@Override
-public List<Expression> getExtractNodes() {
-return extractNodes;
-}
-
-@Override
-public PColumn getColumn() {
-return childPart.getColumn();
-}
-
-@Override
-public PTable getTable() {
-return childPart.getTable();
-}
-}, slot.getPKPosition(), slot.getKeyRanges());
+return new SingleKeySlot(new CoerceKeySlot(
+childPart, ptr, node, extractNodes), slot.getPKPosition(), 
slot.getKeyRanges());
 }
 
 /**
@@ -1929,7 +1891,67 @@ public class WhereOptimizer {
 return table;
 }
 }
-
+
+private static class CoerceKeySlot implements KeyPart {
+
+private final KeyPart childPart;
+private final ImmutableBytesWritable ptr;
+private final CoerceExpression node;
+private final List<Expression> extractNodes;
+
+public CoerceKeySlot(KeyPart childPart, ImmutableBytesWritable ptr,
+ CoerceExpression node, List<Expression> 
extractNodes) {
+this.childPart = childPart;
+this.ptr = ptr;
+this.node = node;
+this.extractNodes = extractNodes;
+}
+
+@Override
+public KeyRange getKeyRange(CompareOp op, Expression rhs) {
+KeyRange range = childPart.getKeyRange(op, rhs);
+byte[] lower = range.getLowerRange();
+if (!range.lowerUnbound()) {
+ptr.set(lower);
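
Stepping back from the truncated diff: the change is a mechanical refactor from an
anonymous KeyPart implementation capturing final locals to the named static nested
class CoerceKeySlot, which receives those captures through its constructor. A small
self-contained sketch of the pattern, with hypothetical names in place of KeyPart
and CoerceKeySlot:

public class AnonymousToNamed {
    interface Greeter { String greet(); }

    // Before: an anonymous class capturing 'name' from the enclosing scope.
    static Greeter anonymous(final String name) {
        return new Greeter() {
            @Override public String greet() { return "hello, " + name; }
        };
    }

    // After: the capture becomes an explicit constructor argument, which keeps
    // the enclosing method short and makes the class testable in isolation.
    private static final class NamedGreeter implements Greeter {
        private final String name;
        NamedGreeter(String name) { this.name = name; }
        @Override public String greet() { return "hello, " + name; }
    }

    static Greeter named(String name) { return new NamedGreeter(name); }

    public static void main(String[] args) {
        System.out.println(anonymous("phoenix").greet());
        System.out.println(named("phoenix").greet());
    }
}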

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5360 Cleanup anonymous inner classes in WhereOptimizer

2019-07-22 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new ddfa105  PHOENIX-5360 Cleanup anonymous inner classes in WhereOptimizer
ddfa105 is described below

commit ddfa1051ed86051d176077050a75fad91ae43451
Author: Xinyi 
AuthorDate: Wed Jun 19 14:45:40 2019 -0700

PHOENIX-5360 Cleanup anonymous inner classes in WhereOptimizer

Signed-off-by: Chinmay Kulkarni 
---
 .../org/apache/phoenix/compile/WhereOptimizer.java | 310 -
 1 file changed, 180 insertions(+), 130 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
index 0964d9d..9ca2056 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
@@ -29,6 +29,7 @@ import java.util.Map;
 import java.util.NoSuchElementException;
 import java.util.Set;
 
+import org.apache.hadoop.hbase.filter.CompareFilter;
 import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.util.Bytes;
@@ -656,47 +657,8 @@ public class WhereOptimizer {
 final List<Expression> extractNodes = 
 Collections.singletonList(node);
 final KeyPart childPart = slot.getKeyPart();
 final ImmutableBytesWritable ptr = context.getTempPtr();
-return new SingleKeySlot(new KeyPart() {
-
-@Override
-public KeyRange getKeyRange(CompareOp op, Expression rhs) {
-KeyRange range = childPart.getKeyRange(op, rhs);
-byte[] lower = range.getLowerRange();
-if (!range.lowerUnbound()) {
-ptr.set(lower);
-// Do the reverse translation so we can optimize out 
the coerce expression
-// For the actual type of the coerceBytes call, we use 
the node type instead of the rhs type, because
-// for IN, the rhs type will be VARBINARY and no 
coerce will be done in that case (and we need it to
-// be done).
-node.getChild().getDataType().coerceBytes(ptr, 
node.getDataType(), rhs.getSortOrder(), SortOrder.ASC);
-lower = ByteUtil.copyKeyBytesIfNecessary(ptr);
-}
-byte[] upper = range.getUpperRange();
-if (!range.upperUnbound()) {
-ptr.set(upper);
-// Do the reverse translation so we can optimize out 
the coerce expression
-node.getChild().getDataType().coerceBytes(ptr, 
node.getDataType(), rhs.getSortOrder(), SortOrder.ASC);
-upper = ByteUtil.copyKeyBytesIfNecessary(ptr);
-}
-range = KeyRange.getKeyRange(lower, 
range.isLowerInclusive(), upper, range.isUpperInclusive());
-return range;
-}
-
-@Override
-public List<Expression> getExtractNodes() {
-return extractNodes;
-}
-
-@Override
-public PColumn getColumn() {
-return childPart.getColumn();
-}
-
-@Override
-public PTable getTable() {
-return childPart.getTable();
-}
-}, slot.getPKPosition(), slot.getKeyRanges());
+return new SingleKeySlot(new CoerceKeySlot(
+childPart, ptr, node, extractNodes), slot.getPKPosition(), 
slot.getKeyRanges());
 }
 
 /**
@@ -1929,7 +1891,67 @@ public class WhereOptimizer {
 return table;
 }
 }
-
+
+private static class CoerceKeySlot implements KeyPart {
+
+private final KeyPart childPart;
+private final ImmutableBytesWritable ptr;
+private final CoerceExpression node;
+private final List<Expression> extractNodes;
+
+public CoerceKeySlot(KeyPart childPart, ImmutableBytesWritable ptr,
+ CoerceExpression node, List<Expression> 
extractNodes) {
+this.childPart = childPart;
+this.ptr = ptr;
+this.node = node;
+this.extractNodes = extractNodes;
+}
+
+@Override
+public KeyRange getKeyRange(CompareOp op, Expression rhs) {
+KeyRange range = childPart.getKeyRange(op, rhs);
+byte[] lower = range.getLowerRange();
+if (!range.lowerUnbound()) {

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5360 Cleanup anonymous inner classes in WhereOptimizer

2019-07-22 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 2a670b0  PHOENIX-5360 Cleanup anonymous inner classes in WhereOptimizer
2a670b0 is described below

commit 2a670b018ae2f4b6e59c8a17cd03a3cbb4dc546e
Author: Xinyi 
AuthorDate: Wed Jun 19 14:45:40 2019 -0700

PHOENIX-5360 Cleanup anonymous inner classes in WhereOptimizer

Signed-off-by: Chinmay Kulkarni 
---
 .../org/apache/phoenix/compile/WhereOptimizer.java | 310 -
 1 file changed, 180 insertions(+), 130 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
index 0964d9d..9ca2056 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
@@ -29,6 +29,7 @@ import java.util.Map;
 import java.util.NoSuchElementException;
 import java.util.Set;
 
+import org.apache.hadoop.hbase.filter.CompareFilter;
 import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.util.Bytes;
@@ -656,47 +657,8 @@ public class WhereOptimizer {
 final List<Expression> extractNodes = 
 Collections.singletonList(node);
 final KeyPart childPart = slot.getKeyPart();
 final ImmutableBytesWritable ptr = context.getTempPtr();
-return new SingleKeySlot(new KeyPart() {
-
-@Override
-public KeyRange getKeyRange(CompareOp op, Expression rhs) {
-KeyRange range = childPart.getKeyRange(op, rhs);
-byte[] lower = range.getLowerRange();
-if (!range.lowerUnbound()) {
-ptr.set(lower);
-// Do the reverse translation so we can optimize out 
the coerce expression
-// For the actual type of the coerceBytes call, we use 
the node type instead of the rhs type, because
-// for IN, the rhs type will be VARBINARY and no 
coerce will be done in that case (and we need it to
-// be done).
-node.getChild().getDataType().coerceBytes(ptr, 
node.getDataType(), rhs.getSortOrder(), SortOrder.ASC);
-lower = ByteUtil.copyKeyBytesIfNecessary(ptr);
-}
-byte[] upper = range.getUpperRange();
-if (!range.upperUnbound()) {
-ptr.set(upper);
-// Do the reverse translation so we can optimize out 
the coerce expression
-node.getChild().getDataType().coerceBytes(ptr, 
node.getDataType(), rhs.getSortOrder(), SortOrder.ASC);
-upper = ByteUtil.copyKeyBytesIfNecessary(ptr);
-}
-range = KeyRange.getKeyRange(lower, 
range.isLowerInclusive(), upper, range.isUpperInclusive());
-return range;
-}
-
-@Override
-public List<Expression> getExtractNodes() {
-return extractNodes;
-}
-
-@Override
-public PColumn getColumn() {
-return childPart.getColumn();
-}
-
-@Override
-public PTable getTable() {
-return childPart.getTable();
-}
-}, slot.getPKPosition(), slot.getKeyRanges());
+return new SingleKeySlot(new CoerceKeySlot(
+childPart, ptr, node, extractNodes), slot.getPKPosition(), 
slot.getKeyRanges());
 }
 
 /**
@@ -1929,7 +1891,67 @@ public class WhereOptimizer {
 return table;
 }
 }
-
+
+private static class CoerceKeySlot implements KeyPart {
+
+private final KeyPart childPart;
+private final ImmutableBytesWritable ptr;
+private final CoerceExpression node;
+private final List<Expression> extractNodes;
+
+public CoerceKeySlot(KeyPart childPart, ImmutableBytesWritable ptr,
+ CoerceExpression node, List<Expression> 
extractNodes) {
+this.childPart = childPart;
+this.ptr = ptr;
+this.node = node;
+this.extractNodes = extractNodes;
+}
+
+@Override
+public KeyRange getKeyRange(CompareOp op, Expression rhs) {
+KeyRange range = childPart.getKeyRange(op, rhs);
+byte[] lower = range.getLowerRange();
+if (!range.lowerUnbound()) {

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5360 Cleanup anonymous inner classes in WhereOptimizer

2019-07-22 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 7942245  PHOENIX-5360 Cleanup anonymous inner classes in WhereOptimizer
7942245 is described below

commit 79422452fc5be88f939c06bdcee0974cd83b3bd2
Author: Xinyi 
AuthorDate: Wed Jun 19 14:45:40 2019 -0700

PHOENIX-5360 Cleanup anonymous inner classes in WhereOptimizer

Signed-off-by: Chinmay Kulkarni 
---
 .../org/apache/phoenix/compile/WhereOptimizer.java | 310 -
 1 file changed, 180 insertions(+), 130 deletions(-)

diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java 
b/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
index 0964d9d..9ca2056 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java
@@ -29,6 +29,7 @@ import java.util.Map;
 import java.util.NoSuchElementException;
 import java.util.Set;
 
+import org.apache.hadoop.hbase.filter.CompareFilter;
 import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
 import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
 import org.apache.hadoop.hbase.util.Bytes;
@@ -656,47 +657,8 @@ public class WhereOptimizer {
 final List<Expression> extractNodes = 
 Collections.singletonList(node);
 final KeyPart childPart = slot.getKeyPart();
 final ImmutableBytesWritable ptr = context.getTempPtr();
-return new SingleKeySlot(new KeyPart() {
-
-@Override
-public KeyRange getKeyRange(CompareOp op, Expression rhs) {
-KeyRange range = childPart.getKeyRange(op, rhs);
-byte[] lower = range.getLowerRange();
-if (!range.lowerUnbound()) {
-ptr.set(lower);
-// Do the reverse translation so we can optimize out 
the coerce expression
-// For the actual type of the coerceBytes call, we use 
the node type instead of the rhs type, because
-// for IN, the rhs type will be VARBINARY and no 
coerce will be done in that case (and we need it to
-// be done).
-node.getChild().getDataType().coerceBytes(ptr, 
node.getDataType(), rhs.getSortOrder(), SortOrder.ASC);
-lower = ByteUtil.copyKeyBytesIfNecessary(ptr);
-}
-byte[] upper = range.getUpperRange();
-if (!range.upperUnbound()) {
-ptr.set(upper);
-// Do the reverse translation so we can optimize out 
the coerce expression
-node.getChild().getDataType().coerceBytes(ptr, 
node.getDataType(), rhs.getSortOrder(), SortOrder.ASC);
-upper = ByteUtil.copyKeyBytesIfNecessary(ptr);
-}
-range = KeyRange.getKeyRange(lower, 
range.isLowerInclusive(), upper, range.isUpperInclusive());
-return range;
-}
-
-@Override
-public List<Expression> getExtractNodes() {
-return extractNodes;
-}
-
-@Override
-public PColumn getColumn() {
-return childPart.getColumn();
-}
-
-@Override
-public PTable getTable() {
-return childPart.getTable();
-}
-}, slot.getPKPosition(), slot.getKeyRanges());
+return new SingleKeySlot(new CoerceKeySlot(
+childPart, ptr, node, extractNodes), slot.getPKPosition(), 
slot.getKeyRanges());
 }
 
 /**
@@ -1929,7 +1891,67 @@ public class WhereOptimizer {
 return table;
 }
 }
-
+
+private static class CoerceKeySlot implements KeyPart {
+
+private final KeyPart childPart;
+private final ImmutableBytesWritable ptr;
+private final CoerceExpression node;
+private final List<Expression> extractNodes;
+
+public CoerceKeySlot(KeyPart childPart, ImmutableBytesWritable ptr,
+ CoerceExpression node, List<Expression> 
extractNodes) {
+this.childPart = childPart;
+this.ptr = ptr;
+this.node = node;
+this.extractNodes = extractNodes;
+}
+
+@Override
+public KeyRange getKeyRange(CompareOp op, Expression rhs) {
+KeyRange range = childPart.getKeyRange(op, rhs);
+byte[] lower = range.getLowerRange();
+if (!range.lowerUnbound()) {

[phoenix] branch 4.14-HBase-1.3 updated: PHOENIX-5302: Different isNamespaceMappingEnabled for server / client causes TableNotFoundException

2019-07-24 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.3 by this push:
 new 3aee060  PHOENIX-5302: Different isNamespaceMappingEnabled for server 
/ client causes TableNotFoundException
3aee060 is described below

commit 3aee060cfed5a3182c13e19962db2f64a96eae3b
Author: Chinmay Kulkarni 
AuthorDate: Tue Jul 16 16:24:30 2019 -0700

PHOENIX-5302: Different isNamespaceMappingEnabled for server / client 
causes TableNotFoundException
---
 .../SystemCatalogCreationOnConnectionIT.java   | 168 -
 .../phoenix/query/ConnectionQueryServicesImpl.java |  70 +++--
 2 files changed, 121 insertions(+), 117 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
index 59af533..99f1216 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
@@ -17,6 +17,7 @@
  */
 package org.apache.phoenix.end2end;
 
+import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_SCHEMA;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
@@ -26,6 +27,7 @@ import static 
org.apache.phoenix.query.BaseTest.generateUniqueName;
 import java.io.IOException;
 import java.sql.Connection;
 import java.sql.DriverManager;
+import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.HashMap;
@@ -39,6 +41,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseTestingUtility;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.NamespaceNotFoundException;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.phoenix.coprocessor.MetaDataProtocol;
 import org.apache.phoenix.exception.SQLExceptionCode;
@@ -47,7 +50,11 @@ import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixDriver;
 import org.apache.phoenix.jdbc.PhoenixEmbeddedDriver;
 import org.apache.phoenix.jdbc.PhoenixTestDriver;
-import org.apache.phoenix.query.*;
+import org.apache.phoenix.query.ConnectionQueryServices;
+import org.apache.phoenix.query.ConnectionQueryServicesImpl;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.query.QueryServicesTestImpl;
 import org.apache.phoenix.util.ReadOnlyProps;
 import org.apache.phoenix.util.UpgradeUtil;
 import org.junit.After;
@@ -85,7 +92,7 @@ public class SystemCatalogCreationOnConnectionIT {
 
 private static class PhoenixSysCatCreationServices extends 
ConnectionQueryServicesImpl {
 
-public PhoenixSysCatCreationServices(QueryServices services, 
PhoenixEmbeddedDriver.ConnectionInfo connectionInfo, Properties info) {
+PhoenixSysCatCreationServices(QueryServices services, 
PhoenixEmbeddedDriver.ConnectionInfo connectionInfo, Properties info) {
 super(services, connectionInfo, info);
 }
 
@@ -119,7 +126,7 @@ public class SystemCatalogCreationOnConnectionIT {
 private ConnectionQueryServices cqs;
 private final ReadOnlyProps overrideProps;
 
-public PhoenixSysCatCreationTestingDriver(ReadOnlyProps props) {
+PhoenixSysCatCreationTestingDriver(ReadOnlyProps props) {
 overrideProps = props;
 }
 
@@ -136,7 +143,7 @@ public class SystemCatalogCreationOnConnectionIT {
 // used ConnectionQueryServices instance. This is used only in cases 
where we need to test server-side
 // changes and don't care about client-side properties set from the 
init method.
 // Reset the Connection Query Services instance so we can create a new 
connection to the cluster
-public void resetCQS() {
+void resetCQS() {
 cqs = null;
 }
 }
@@ -176,7 +183,7 @@ public class SystemCatalogCreationOnConnectionIT {
 driver.getConnectionQueryServices(getJdbcUrl(), 
propsDoNotUpgradePropSet);
 hbaseTables = getHBaseTables();
 assertFalse(hbaseTables.contains(PHOENIX_SYSTEM_CATALOG) || 
hbaseTables.contains(PHOENIX_NAMESPACE_MAPPED_SYSTEM_CATALOG));
-assertTrue(hbaseTables.size() == 0);
+assertEquals(0, hbaseTables.size());
 assertEquals(1, countUpgradeAttempts);
 }
 
@@ -184,23 +191,6 @@ public class SystemCatalogCreationOnConnectionIT {
 /* Testing SYSTEM.CATALOG/SYSTEM:CATALOG 
creation/upgrade behavior for subsequent connec

[phoenix] branch 4.14-HBase-1.4 updated: PHOENIX-5302: Different isNamespaceMappingEnabled for server / client causes TableNotFoundException

2019-07-24 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.14-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.4 by this push:
 new e7d9654  PHOENIX-5302: Different isNamespaceMappingEnabled for server 
/ client causes TableNotFoundException
e7d9654 is described below

commit e7d965401ae093317f3b8f2d3ff912e4a6abd392
Author: Chinmay Kulkarni 
AuthorDate: Tue Jul 16 16:24:30 2019 -0700

PHOENIX-5302: Different isNamespaceMappingEnabled for server / client 
causes TableNotFoundException
---
 .../SystemCatalogCreationOnConnectionIT.java   | 168 -
 .../phoenix/query/ConnectionQueryServicesImpl.java |  70 +++--
 2 files changed, 121 insertions(+), 117 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
index 59af533..99f1216 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/SystemCatalogCreationOnConnectionIT.java
@@ -17,6 +17,7 @@
  */
 package org.apache.phoenix.end2end;
 
+import static 
org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.SYSTEM_CATALOG_SCHEMA;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertTrue;
@@ -26,6 +27,7 @@ import static 
org.apache.phoenix.query.BaseTest.generateUniqueName;
 import java.io.IOException;
 import java.sql.Connection;
 import java.sql.DriverManager;
+import java.sql.ResultSet;
 import java.sql.SQLException;
 import java.util.Arrays;
 import java.util.HashMap;
@@ -39,6 +41,7 @@ import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.hbase.HBaseTestingUtility;
 import org.apache.hadoop.hbase.HConstants;
 import org.apache.hadoop.hbase.HTableDescriptor;
+import org.apache.hadoop.hbase.NamespaceNotFoundException;
 import org.apache.hadoop.hbase.TableName;
 import org.apache.phoenix.coprocessor.MetaDataProtocol;
 import org.apache.phoenix.exception.SQLExceptionCode;
@@ -47,7 +50,11 @@ import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixDriver;
 import org.apache.phoenix.jdbc.PhoenixEmbeddedDriver;
 import org.apache.phoenix.jdbc.PhoenixTestDriver;
-import org.apache.phoenix.query.*;
+import org.apache.phoenix.query.ConnectionQueryServices;
+import org.apache.phoenix.query.ConnectionQueryServicesImpl;
+import org.apache.phoenix.query.QueryConstants;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.query.QueryServicesTestImpl;
 import org.apache.phoenix.util.ReadOnlyProps;
 import org.apache.phoenix.util.UpgradeUtil;
 import org.junit.After;
@@ -85,7 +92,7 @@ public class SystemCatalogCreationOnConnectionIT {
 
 private static class PhoenixSysCatCreationServices extends 
ConnectionQueryServicesImpl {
 
-public PhoenixSysCatCreationServices(QueryServices services, 
PhoenixEmbeddedDriver.ConnectionInfo connectionInfo, Properties info) {
+PhoenixSysCatCreationServices(QueryServices services, 
PhoenixEmbeddedDriver.ConnectionInfo connectionInfo, Properties info) {
 super(services, connectionInfo, info);
 }
 
@@ -119,7 +126,7 @@ public class SystemCatalogCreationOnConnectionIT {
 private ConnectionQueryServices cqs;
 private final ReadOnlyProps overrideProps;
 
-public PhoenixSysCatCreationTestingDriver(ReadOnlyProps props) {
+PhoenixSysCatCreationTestingDriver(ReadOnlyProps props) {
 overrideProps = props;
 }
 
@@ -136,7 +143,7 @@ public class SystemCatalogCreationOnConnectionIT {
 // used ConnectionQueryServices instance. This is used only in cases 
where we need to test server-side
 // changes and don't care about client-side properties set from the 
init method.
 // Reset the Connection Query Services instance so we can create a new 
connection to the cluster
-public void resetCQS() {
+void resetCQS() {
 cqs = null;
 }
 }
@@ -176,7 +183,7 @@ public class SystemCatalogCreationOnConnectionIT {
 driver.getConnectionQueryServices(getJdbcUrl(), 
propsDoNotUpgradePropSet);
 hbaseTables = getHBaseTables();
 assertFalse(hbaseTables.contains(PHOENIX_SYSTEM_CATALOG) || 
hbaseTables.contains(PHOENIX_NAMESPACE_MAPPED_SYSTEM_CATALOG));
-assertTrue(hbaseTables.size() == 0);
+assertEquals(0, hbaseTables.size());
 assertEquals(1, countUpgradeAttempts);
 }
 
@@ -184,23 +191,6 @@ public class SystemCatalogCreationOnConnectionIT {
 /* Testing SYSTEM.CATALOG/SYSTEM:CATALOG 
creation/upgrade behavior for subsequent connections

[phoenix] branch master updated: PHOENIX-5391 : MetadataClient - TenantId Map is not correctly updated with list of Table Refs

2019-07-25 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 601e4c8  PHOENIX-5391 : MetadataClient - TenantId Map is not correctly 
updated with list of Table Refs
601e4c8 is described below

commit 601e4c82f66aaab180a98132155b2131fe479cc9
Author: Viraj Jasani 
AuthorDate: Sun Jul 14 18:24:18 2019 +0530

PHOENIX-5391 : MetadataClient - TenantId Map is not correctly updated with 
list of Table Refs

Signed-off-by: Chinmay Kulkarni 
---
 .../apache/phoenix/end2end/DropIndexedColsIT.java  | 267 +
 .../org/apache/phoenix/schema/MetaDataClient.java  |  14 +-
 2 files changed, 275 insertions(+), 6 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropIndexedColsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropIndexedColsIT.java
new file mode 100644
index 000..f1b23f9
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropIndexedColsIT.java
@@ -0,0 +1,267 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.end2end;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.SQLException;
+import java.util.Arrays;
+import java.util.Collection;
+
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.schema.ColumnNotFoundException;
+import org.apache.phoenix.schema.PName;
+import org.apache.phoenix.schema.PNameFactory;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableKey;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.Assert;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+
+@RunWith(Parameterized.class)
+public class DropIndexedColsIT extends SplitSystemCatalogIT {
+
+  private static final String CREATE_TABLE_COL_QUERY = " (%s k VARCHAR NOT 
NULL, v1 VARCHAR, " +
+  "v2 VARCHAR, v3 VARCHAR, v4 VARCHAR, v5 VARCHAR CONSTRAINT PK PRIMARY 
KEY(%s k))%s";
+  private static final String CREATE_TABLE = "CREATE TABLE ";
+  private static final String CREATE_VIEW = "CREATE VIEW ";
+  private static final String CREATE_INDEX = "CREATE INDEX ";
+  private static final String SELECT_ALL_FROM = "SELECT * FROM ";
+  private static final String UPSERT_INTO = "UPSERT INTO ";
+  private static final String ALTER_TABLE = "ALTER TABLE ";
+
+  private final boolean salted;
+  private final String TENANT_SPECIFIC_URL = getUrl() + ';' + TENANT_ID_ATTRIB 
+ "=" + TENANT1;
+
+  public DropIndexedColsIT(boolean salted) {
+this.salted = salted;
+  }
+
+  @Parameterized.Parameters(name = "DropIndexedColsIT_salted={0}")
+  public static Collection<Boolean> data() {
+return Arrays.asList(false, true);
+  }
+
+  @Test
+  public void testDropIndexedColsMultiTables() throws Exception {
+try (Connection conn = DriverManager.getConnection(getUrl());
+ Connection viewConn = 
DriverManager.getConnection(TENANT_SPECIFIC_URL)) {
+  String tableWithView1 = SchemaUtil.getTableName(SCHEMA1, 
generateUniqueName());
+  String tableWithView2 = SchemaUtil.getTableName(SCHEMA3, 
generateUniqueName());
+
+  String viewOfTable1 = SchemaUtil.getTableName(SCHEMA2, 
generateUniqueName());
+  String viewOfTable2 = SchemaUtil.getTableName(SCHEMA4, 
generateUniqueName());
+
+  String viewSchemaName1 = 
SchemaUtil.getSchemaNameFromFullName(viewOfTable1);
+  String viewSchemaName2 = 
SchemaUtil.getSchemaNameFromFullName(viewOfTable2);
+
+  String viewIndex1 = generateUniqueName();
+  String viewIndex2 = generateUniqueName();
+  String viewIndex3 = generateUniqueName();
+  String viewIndex4 = generateUniqueName();
+  String viewIndex5 = generateUniqueName();
+
+  String fullNameViewIndex1 = SchemaUtil.getTableName(viewSchemaName1, 
viewIndex1);
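
The MetaDataClient fix referenced in the subject is about keeping a per-tenant map of
table references complete. A minimal sketch of that bookkeeping hazard and its fix,
with hypothetical names (the real code uses Phoenix types such as PName and TableRef):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TenantTableRefs {
    private final Map<String, List<String>> tablesByTenant = new HashMap<>();

    // The easy bug: calling put(tenantId, newListWithOneRef) on every add silently
    // replaces the previous list, so later per-tenant operations see only the last
    // table. Instead, create the list once and append to it.
    void addTableRef(String tenantId, String tableRef) {
        List<String> refs = tablesByTenant.get(tenantId);
        if (refs == null) {
            refs = new ArrayList<>();
            tablesByTenant.put(tenantId, refs);
        }
        refs.add(tableRef);
    }

    List<String> tablesFor(String tenantId) {
        List<String> refs = tablesByTenant.get(tenantId);
        return refs == null ? new ArrayList<String>() : refs;
    }

    public static void main(String[] args) {
        TenantTableRefs m = new TenantTableRefs();
        m.addTableRef("tenant1", "S1.T1");
        m.addTableRef("tenant1", "S2.T2");
        System.out.println(m.tablesFor("tenant1"));  // [S1.T1, S2.T2], not just S2.T2
    }
}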

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5391 : MetadataClient - TenantId Map is not correctly updated with list of Table Refs

2019-07-25 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 8eca53d  PHOENIX-5391 : MetadataClient - TenantId Map is not correctly 
updated with list of Table Refs
8eca53d is described below

commit 8eca53da0db123457d10bd607538a91cabf2a85d
Author: Viraj Jasani 
AuthorDate: Sun Jul 14 18:24:18 2019 +0530

PHOENIX-5391 : MetadataClient - TenantId Map is not correctly updated with 
list of Table Refs

Signed-off-by: Chinmay Kulkarni 
---
 .../apache/phoenix/end2end/DropIndexedColsIT.java  | 267 +
 .../org/apache/phoenix/schema/MetaDataClient.java  |  14 +-
 2 files changed, 275 insertions(+), 6 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropIndexedColsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropIndexedColsIT.java
new file mode 100644
index 000..f1b23f9
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropIndexedColsIT.java
@@ -0,0 +1,267 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.end2end;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.SQLException;
+import java.util.Arrays;
+import java.util.Collection;
+
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.schema.ColumnNotFoundException;
+import org.apache.phoenix.schema.PName;
+import org.apache.phoenix.schema.PNameFactory;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableKey;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.Assert;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+
+@RunWith(Parameterized.class)
+public class DropIndexedColsIT extends SplitSystemCatalogIT {
+
+  private static final String CREATE_TABLE_COL_QUERY = " (%s k VARCHAR NOT 
NULL, v1 VARCHAR, " +
+  "v2 VARCHAR, v3 VARCHAR, v4 VARCHAR, v5 VARCHAR CONSTRAINT PK PRIMARY 
KEY(%s k))%s";
+  private static final String CREATE_TABLE = "CREATE TABLE ";
+  private static final String CREATE_VIEW = "CREATE VIEW ";
+  private static final String CREATE_INDEX = "CREATE INDEX ";
+  private static final String SELECT_ALL_FROM = "SELECT * FROM ";
+  private static final String UPSERT_INTO = "UPSERT INTO ";
+  private static final String ALTER_TABLE = "ALTER TABLE ";
+
+  private final boolean salted;
+  private final String TENANT_SPECIFIC_URL = getUrl() + ';' + TENANT_ID_ATTRIB 
+ "=" + TENANT1;
+
+  public DropIndexedColsIT(boolean salted) {
+this.salted = salted;
+  }
+
+  @Parameterized.Parameters(name = "DropIndexedColsIT_salted={0}")
+  public static Collection<Boolean> data() {
+return Arrays.asList(false, true);
+  }
+
+  @Test
+  public void testDropIndexedColsMultiTables() throws Exception {
+try (Connection conn = DriverManager.getConnection(getUrl());
+ Connection viewConn = 
DriverManager.getConnection(TENANT_SPECIFIC_URL)) {
+  String tableWithView1 = SchemaUtil.getTableName(SCHEMA1, 
generateUniqueName());
+  String tableWithView2 = SchemaUtil.getTableName(SCHEMA3, 
generateUniqueName());
+
+  String viewOfTable1 = SchemaUtil.getTableName(SCHEMA2, 
generateUniqueName());
+  String viewOfTable2 = SchemaUtil.getTableName(SCHEMA4, 
generateUniqueName());
+
+  String viewSchemaName1 = 
SchemaUtil.getSchemaNameFromFullName(viewOfTable1);
+  String viewSchemaName2 = 
SchemaUtil.getSchemaNameFromFullName(viewOfTable2);
+
+  String viewIndex1 = generateUniqueName();
+  String viewIndex2 = generateUniqueName();
+  String viewIndex3 = generateUniqueName();
+  String viewIndex4 = generateUniqueName();
+  String viewIndex5 = generateUniqueName();
+
+  String fullNameViewIndex1 = SchemaUtil.getTableName(viewSchemaName1, 
viewIndex1);

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5391 : MetadataClient - TenantId Map is not correctly updated with list of Table Refs

2019-07-25 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 34d9675  PHOENIX-5391 : MetadataClient - TenantId Map is not correctly 
updated with list of Table Refs
34d9675 is described below

commit 34d9675a8e63ee0e1a2c0dbf9c001d61d59f7564
Author: Viraj Jasani 
AuthorDate: Sun Jul 14 18:24:18 2019 +0530

PHOENIX-5391 : MetadataClient - TenantId Map is not correctly updated with 
list of Table Refs

Signed-off-by: Chinmay Kulkarni 
---
 .../apache/phoenix/end2end/DropIndexedColsIT.java  | 267 +
 .../org/apache/phoenix/schema/MetaDataClient.java  |  14 +-
 2 files changed, 275 insertions(+), 6 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropIndexedColsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropIndexedColsIT.java
new file mode 100644
index 000..f1b23f9
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropIndexedColsIT.java
@@ -0,0 +1,267 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.end2end;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.SQLException;
+import java.util.Arrays;
+import java.util.Collection;
+
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.schema.ColumnNotFoundException;
+import org.apache.phoenix.schema.PName;
+import org.apache.phoenix.schema.PNameFactory;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableKey;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.Assert;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+
+@RunWith(Parameterized.class)
+public class DropIndexedColsIT extends SplitSystemCatalogIT {
+
+  private static final String CREATE_TABLE_COL_QUERY = " (%s k VARCHAR NOT 
NULL, v1 VARCHAR, " +
+  "v2 VARCHAR, v3 VARCHAR, v4 VARCHAR, v5 VARCHAR CONSTRAINT PK PRIMARY 
KEY(%s k))%s";
+  private static final String CREATE_TABLE = "CREATE TABLE ";
+  private static final String CREATE_VIEW = "CREATE VIEW ";
+  private static final String CREATE_INDEX = "CREATE INDEX ";
+  private static final String SELECT_ALL_FROM = "SELECT * FROM ";
+  private static final String UPSERT_INTO = "UPSERT INTO ";
+  private static final String ALTER_TABLE = "ALTER TABLE ";
+
+  private final boolean salted;
+  private final String TENANT_SPECIFIC_URL = getUrl() + ';' + TENANT_ID_ATTRIB 
+ "=" + TENANT1;
+
+  public DropIndexedColsIT(boolean salted) {
+this.salted = salted;
+  }
+
+  @Parameterized.Parameters(name = "DropIndexedColsIT_salted={0}")
+  public static Collection<Boolean> data() {
+return Arrays.asList(false, true);
+  }
+
+  @Test
+  public void testDropIndexedColsMultiTables() throws Exception {
+try (Connection conn = DriverManager.getConnection(getUrl());
+ Connection viewConn = 
DriverManager.getConnection(TENANT_SPECIFIC_URL)) {
+  String tableWithView1 = SchemaUtil.getTableName(SCHEMA1, 
generateUniqueName());
+  String tableWithView2 = SchemaUtil.getTableName(SCHEMA3, 
generateUniqueName());
+
+  String viewOfTable1 = SchemaUtil.getTableName(SCHEMA2, 
generateUniqueName());
+  String viewOfTable2 = SchemaUtil.getTableName(SCHEMA4, 
generateUniqueName());
+
+  String viewSchemaName1 = 
SchemaUtil.getSchemaNameFromFullName(viewOfTable1);
+  String viewSchemaName2 = 
SchemaUtil.getSchemaNameFromFullName(viewOfTable2);
+
+  String viewIndex1 = generateUniqueName();
+  String viewIndex2 = generateUniqueName();
+  String viewIndex3 = generateUniqueName();
+  String viewIndex4 = generateUniqueName();
+  String viewIndex5 = generateUniqueName();
+
+  String fullNameViewIndex1 = SchemaUtil.getTableName(viewSchemaName1, 
viewIndex1);

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5391 : MetadataClient - TenantId Map is not correctly updated with list of Table Refs

2019-07-25 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 2bd08cd  PHOENIX-5391 : MetadataClient - TenantId Map is not correctly 
updated with list of Table Refs
2bd08cd is described below

commit 2bd08cdc7b573e464ce967298faf643d478f8a64
Author: Viraj Jasani 
AuthorDate: Sun Jul 14 18:24:18 2019 +0530

PHOENIX-5391 : MetadataClient - TenantId Map is not correctly updated with 
list of Table Refs

Signed-off-by: Chinmay Kulkarni 
---
 .../apache/phoenix/end2end/DropIndexedColsIT.java  | 267 +
 .../org/apache/phoenix/schema/MetaDataClient.java  |  14 +-
 2 files changed, 275 insertions(+), 6 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropIndexedColsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropIndexedColsIT.java
new file mode 100644
index 000..f1b23f9
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropIndexedColsIT.java
@@ -0,0 +1,267 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.end2end;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.SQLException;
+import java.util.Arrays;
+import java.util.Collection;
+
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.schema.ColumnNotFoundException;
+import org.apache.phoenix.schema.PName;
+import org.apache.phoenix.schema.PNameFactory;
+import org.apache.phoenix.schema.PTable;
+import org.apache.phoenix.schema.PTableKey;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.Assert;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+
+@RunWith(Parameterized.class)
+public class DropIndexedColsIT extends SplitSystemCatalogIT {
+
+  private static final String CREATE_TABLE_COL_QUERY = " (%s k VARCHAR NOT NULL, v1 VARCHAR, " +
+  "v2 VARCHAR, v3 VARCHAR, v4 VARCHAR, v5 VARCHAR CONSTRAINT PK PRIMARY KEY(%s k))%s";
+  private static final String CREATE_TABLE = "CREATE TABLE ";
+  private static final String CREATE_VIEW = "CREATE VIEW ";
+  private static final String CREATE_INDEX = "CREATE INDEX ";
+  private static final String SELECT_ALL_FROM = "SELECT * FROM ";
+  private static final String UPSERT_INTO = "UPSERT INTO ";
+  private static final String ALTER_TABLE = "ALTER TABLE ";
+
+  private final boolean salted;
+  private final String TENANT_SPECIFIC_URL = getUrl() + ';' + TENANT_ID_ATTRIB + "=" + TENANT1;
+
+  public DropIndexedColsIT(boolean salted) {
+this.salted = salted;
+  }
+
+  @Parameterized.Parameters(name = "DropIndexedColsIT_salted={0}")
+  public static Collection data() {
+return Arrays.asList(false, true);
+  }
+
+  @Test
+  public void testDropIndexedColsMultiTables() throws Exception {
+try (Connection conn = DriverManager.getConnection(getUrl());
+ Connection viewConn = DriverManager.getConnection(TENANT_SPECIFIC_URL)) {
+  String tableWithView1 = SchemaUtil.getTableName(SCHEMA1, generateUniqueName());
+  String tableWithView2 = SchemaUtil.getTableName(SCHEMA3, generateUniqueName());
+
+  String viewOfTable1 = SchemaUtil.getTableName(SCHEMA2, generateUniqueName());
+  String viewOfTable2 = SchemaUtil.getTableName(SCHEMA4, generateUniqueName());
+
+  String viewSchemaName1 = SchemaUtil.getSchemaNameFromFullName(viewOfTable1);
+  String viewSchemaName2 = SchemaUtil.getSchemaNameFromFullName(viewOfTable2);
+
+  String viewIndex1 = generateUniqueName();
+  String viewIndex2 = generateUniqueName();
+  String viewIndex3 = generateUniqueName();
+  String viewIndex4 = generateUniqueName();
+  String viewIndex5 = generateUniqueName();
+
+  String fullNameViewIndex1 = SchemaUtil.getTableName(view
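
The CREATE_TABLE_COL_QUERY format string above is expanded via String.format and appended to the CREATE TABLE / CREATE VIEW prefixes; the trailing %s is where the salt clause lands for the salted parameterization. A hedged sketch of the expansion, in which the empty column placeholders, the table name S1.T1, and SALT_BUCKETS = 4 are illustrative assumptions rather than values from the test:

    String colQuery = " (%s k VARCHAR NOT NULL, v1 VARCHAR, "
            + "v2 VARCHAR, v3 VARCHAR, v4 VARCHAR, v5 VARCHAR CONSTRAINT PK PRIMARY KEY(%s k))%s";
    boolean salted = true;
    String ddl = "CREATE TABLE " + "S1.T1"
            + String.format(colQuery, "", "", salted ? " SALT_BUCKETS = 4" : "");
    // salted == true  -> "... CONSTRAINT PK PRIMARY KEY( k)) SALT_BUCKETS = 4"
    // salted == false -> "... CONSTRAINT PK PRIMARY KEY( k))"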

[phoenix] branch 4.14-HBase-1.3 updated: PHOENIX-5391 : MetadataClient - TenantId Map is not correctly updated with list of Table Refs

2019-07-25 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.3 by this push:
 new fddcd55  PHOENIX-5391 : MetadataClient - TenantId Map is not correctly 
updated with list of Table Refs
fddcd55 is described below

commit fddcd55a02b75fe32ecf01192e32d892436871ea
Author: Viraj Jasani 
AuthorDate: Sun Jul 14 18:24:18 2019 +0530

PHOENIX-5391 : MetadataClient - TenantId Map is not correctly updated with 
list of Table Refs

Signed-off-by: Chinmay Kulkarni 
---
 .../apache/phoenix/end2end/DropIndexedColsIT.java  | 261 +
 .../org/apache/phoenix/schema/MetaDataClient.java  |  14 +-
 2 files changed, 269 insertions(+), 6 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropIndexedColsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropIndexedColsIT.java
new file mode 100644
index 000..1e1a31b
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropIndexedColsIT.java
@@ -0,0 +1,261 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.end2end;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.SQLException;
+import java.util.Arrays;
+import java.util.Collection;
+
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.schema.ColumnNotFoundException;
+import org.apache.phoenix.schema.PName;
+import org.apache.phoenix.schema.PNameFactory;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.Assert;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+
+@RunWith(Parameterized.class)
+public class DropIndexedColsIT extends ParallelStatsDisabledIT {
+
+  private static final String CREATE_TABLE_COL_QUERY = " (%s k VARCHAR NOT NULL, v1 VARCHAR, " +
+  "v2 VARCHAR, v3 VARCHAR, v4 VARCHAR, v5 VARCHAR CONSTRAINT PK PRIMARY KEY(%s k))%s";
+  private static final String CREATE_TABLE = "CREATE TABLE ";
+  private static final String CREATE_VIEW = "CREATE VIEW ";
+  private static final String CREATE_INDEX = "CREATE INDEX ";
+  private static final String SELECT_ALL_FROM = "SELECT * FROM ";
+  private static final String UPSERT_INTO = "UPSERT INTO ";
+  private static final String ALTER_TABLE = "ALTER TABLE ";
+  private static final String TENANT1 = "tenant1";
+  private static final String SCHEMA1 = "schema1";
+  private static final String SCHEMA2 = "schema2";
+  private static final String SCHEMA3 = "schema3";
+  private static final String SCHEMA4 = "schema4";
+
+  private final boolean salted;
+  private final String TENANT_SPECIFIC_URL = getUrl() + ';' + TENANT_ID_ATTRIB + "=" + TENANT1;
+
+  public DropIndexedColsIT(boolean salted) {
+this.salted = salted;
+  }
+
+  @Parameterized.Parameters(name = "DropIndexedColsIT_salted={0}")
+  public static Collection data() {
+return Arrays.asList(false, true);
+  }
+
+  @Test
+  public void testDropIndexedColsMultiTables() throws Exception {
+try (Connection conn = DriverManager.getConnection(getUrl());
+ Connection viewConn = DriverManager.getConnection(TENANT_SPECIFIC_URL)) {
+  String tableWithView1 = SchemaUtil.getTableName(SCHEMA1, generateUniqueName());
+  String tableWithView2 = SchemaUtil.getTableName(SCHEMA3, generateUniqueName());
+
+  String viewOfTable1 = SchemaUtil.getTableName(SCHEMA2, generateUniqueName());
+  String viewOfTable2 = SchemaUtil.getTableName(SCHEMA4, generateUniqueName());
+
+  String viewSchemaName1 = SchemaUtil.getSchemaNameFromFullName(viewOfTable1);
+  String viewSchemaName2 = SchemaUtil.getSchemaNameFromFullName(viewOfTable2);
+
+  String viewIndex1 = generateUniqueName();
+  String viewIndex2 = gene

[phoenix] branch 4.14-HBase-1.4 updated: PHOENIX-5391 : MetadataClient - TenantId Map is not correctly updated with list of Table Refs

2019-07-25 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.14-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.4 by this push:
 new d9e2d0c  PHOENIX-5391 : MetadataClient - TenantId Map is not correctly 
updated with list of Table Refs
d9e2d0c is described below

commit d9e2d0c8523c608f7c64578c2f2a7f13a2851b68
Author: Viraj Jasani 
AuthorDate: Sun Jul 14 18:24:18 2019 +0530

PHOENIX-5391 : MetadataClient - TenantId Map is not correctly updated with 
list of Table Refs

Signed-off-by: Chinmay Kulkarni 
---
 .../apache/phoenix/end2end/DropIndexedColsIT.java  | 261 +
 .../org/apache/phoenix/schema/MetaDataClient.java  |  14 +-
 2 files changed, 269 insertions(+), 6 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropIndexedColsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropIndexedColsIT.java
new file mode 100644
index 000..1e1a31b
--- /dev/null
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropIndexedColsIT.java
@@ -0,0 +1,261 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.phoenix.end2end;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.PreparedStatement;
+import java.sql.SQLException;
+import java.util.Arrays;
+import java.util.Collection;
+
+import org.apache.phoenix.jdbc.PhoenixConnection;
+import org.apache.phoenix.schema.ColumnNotFoundException;
+import org.apache.phoenix.schema.PName;
+import org.apache.phoenix.schema.PNameFactory;
+import org.apache.phoenix.util.SchemaUtil;
+import org.junit.Assert;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+
+import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
+
+@RunWith(Parameterized.class)
+public class DropIndexedColsIT extends ParallelStatsDisabledIT {
+
+  private static final String CREATE_TABLE_COL_QUERY = " (%s k VARCHAR NOT NULL, v1 VARCHAR, " +
+  "v2 VARCHAR, v3 VARCHAR, v4 VARCHAR, v5 VARCHAR CONSTRAINT PK PRIMARY KEY(%s k))%s";
+  private static final String CREATE_TABLE = "CREATE TABLE ";
+  private static final String CREATE_VIEW = "CREATE VIEW ";
+  private static final String CREATE_INDEX = "CREATE INDEX ";
+  private static final String SELECT_ALL_FROM = "SELECT * FROM ";
+  private static final String UPSERT_INTO = "UPSERT INTO ";
+  private static final String ALTER_TABLE = "ALTER TABLE ";
+  private static final String TENANT1 = "tenant1";
+  private static final String SCHEMA1 = "schema1";
+  private static final String SCHEMA2 = "schema2";
+  private static final String SCHEMA3 = "schema3";
+  private static final String SCHEMA4 = "schema4";
+
+  private final boolean salted;
+  private final String TENANT_SPECIFIC_URL = getUrl() + ';' + TENANT_ID_ATTRIB + "=" + TENANT1;
+
+  public DropIndexedColsIT(boolean salted) {
+this.salted = salted;
+  }
+
+  @Parameterized.Parameters(name = "DropIndexedColsIT_salted={0}")
+  public static Collection data() {
+return Arrays.asList(false, true);
+  }
+
+  @Test
+  public void testDropIndexedColsMultiTables() throws Exception {
+try (Connection conn = DriverManager.getConnection(getUrl());
+ Connection viewConn = DriverManager.getConnection(TENANT_SPECIFIC_URL)) {
+  String tableWithView1 = SchemaUtil.getTableName(SCHEMA1, generateUniqueName());
+  String tableWithView2 = SchemaUtil.getTableName(SCHEMA3, generateUniqueName());
+
+  String viewOfTable1 = SchemaUtil.getTableName(SCHEMA2, generateUniqueName());
+  String viewOfTable2 = SchemaUtil.getTableName(SCHEMA4, generateUniqueName());
+
+  String viewSchemaName1 = SchemaUtil.getSchemaNameFromFullName(viewOfTable1);
+  String viewSchemaName2 = SchemaUtil.getSchemaNameFromFullName(viewOfTable2);
+
+  String viewIndex1 = generateUniqueName();
+  String viewIndex2 = gene
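
SchemaUtil.getTableName and SchemaUtil.getSchemaNameFromFullName, used heavily in this test, simply compose and decompose schema-qualified names. A tiny sketch with illustrative names:

    import org.apache.phoenix.util.SchemaUtil;

    String full = SchemaUtil.getTableName("SCHEMA2", "V000042");  // "SCHEMA2.V000042"
    String schema = SchemaUtil.getSchemaNameFromFullName(full);   // "SCHEMA2"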

[phoenix-connectors] branch master updated: PHOENIX-5410 Phoenix spark to hbase connector takes long time persist data

2019-08-02 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix-connectors.git


The following commit(s) were added to refs/heads/master by this push:
 new 4a4308a  PHOENIX-5410 Phoenix spark to hbase connector takes long time 
persist data
4a4308a is described below

commit 4a4308a86c2224a2cf0bd9efb0f35df2680b556b
Author: Manohar Chamaraju 
AuthorDate: Fri Aug 2 15:43:30 2019 +0530

PHOENIX-5410 Phoenix spark to hbase connector takes long time persist data

Signed-off-by: Chinmay Kulkarni 
---
 .../spark/datasource/v2/writer/PhoenixDataWriter.java  | 18 --
 .../sql/execution/datasources/jdbc/SparkJdbcUtil.scala |  4 ++--
 2 files changed, 18 insertions(+), 4 deletions(-)

diff --git 
a/phoenix-spark/src/main/java/org/apache/phoenix/spark/datasource/v2/writer/PhoenixDataWriter.java
 
b/phoenix-spark/src/main/java/org/apache/phoenix/spark/datasource/v2/writer/PhoenixDataWriter.java
index 04670d5..f67695c 100644
--- 
a/phoenix-spark/src/main/java/org/apache/phoenix/spark/datasource/v2/writer/PhoenixDataWriter.java
+++ 
b/phoenix-spark/src/main/java/org/apache/phoenix/spark/datasource/v2/writer/PhoenixDataWriter.java
@@ -22,6 +22,7 @@ import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.PreparedStatement;
 import java.sql.SQLException;
+import java.util.ArrayList;
 import java.util.List;
 import java.util.Properties;
 import java.util.stream.Collectors;
@@ -32,6 +33,8 @@ import org.apache.phoenix.util.QueryUtil;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.spark.sql.Row;
 import org.apache.spark.sql.catalyst.InternalRow;
+import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder;
+import org.apache.spark.sql.catalyst.encoders.RowEncoder$;
 import org.apache.spark.sql.execution.datasources.SparkJdbcUtil;
 import org.apache.spark.sql.execution.datasources.jdbc.PhoenixJdbcDialect$;
 import org.apache.spark.sql.sources.v2.writer.DataWriter;
@@ -39,6 +42,9 @@ import 
org.apache.spark.sql.sources.v2.writer.WriterCommitMessage;
 import org.apache.spark.sql.types.DataType;
 import org.apache.spark.sql.types.StructField;
 import org.apache.spark.sql.types.StructType;
+import org.apache.spark.sql.catalyst.analysis.SimpleAnalyzer$;
+import org.apache.spark.sql.catalyst.expressions.AttributeReference;
+import org.apache.spark.sql.catalyst.expressions.Attribute;
 
 import com.google.common.collect.Lists;
 
@@ -55,6 +61,7 @@ public class PhoenixDataWriter implements DataWriter<InternalRow> {
 private final PreparedStatement statement;
 private final long batchSize;
 private long numRecords = 0;
+private ExpressionEncoder<Row> encoder = null;
 
 PhoenixDataWriter(PhoenixDataSourceWriteOptions options) {
 String scn = options.getScn();
@@ -68,6 +75,13 @@ public class PhoenixDataWriter implements DataWriter<InternalRow> {
 overridingProps.put(PhoenixRuntime.TENANT_ID_ATTRIB, tenantId);
 }
 this.schema = options.getSchema();
+
+List<Attribute> attrs = new ArrayList<>();
+
+for (AttributeReference ref : scala.collection.JavaConverters.seqAsJavaListConverter(schema.toAttributes()).asJava()) {
+ attrs.add(ref.toAttribute());
+}
+encoder = RowEncoder$.MODULE$.apply(schema).resolveAndBind(scala.collection.JavaConverters.asScalaIteratorConverter(attrs.iterator()).asScala().toSeq(), SimpleAnalyzer$.MODULE$);
 try {
 this.conn = DriverManager.getConnection(JDBC_PROTOCOL + JDBC_PROTOCOL_SEPARATOR + zkUrl,
 overridingProps);
@@ -92,14 +106,14 @@ public class PhoenixDataWriter implements DataWriter<InternalRow> {
 public void write(InternalRow internalRow) throws IOException {
 try {
 int i=0;
+Row row = SparkJdbcUtil.toRow(encoder, internalRow);
 for (StructField field : schema.fields()) {
 DataType dataType = field.dataType();
 if (internalRow.isNullAt(i)) {
 statement.setNull(i + 1, SparkJdbcUtil.getJdbcType(dataType, PhoenixJdbcDialect$.MODULE$).jdbcNullType());
 } else {
-Row row = SparkJdbcUtil.toRow(schema, internalRow);
-SparkJdbcUtil.makeSetter(conn, PhoenixJdbcDialect$.MODULE$, dataType).apply(statement, row, i);
+   SparkJdbcUtil.makeSetter(conn, PhoenixJdbcDialect$.MODULE$, dataType).apply(statement, row, i);
 }
 ++i;
 }
diff --git 
a/phoenix-spark/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/SparkJdbcUtil.scala
 
b/phoenix-spark/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/SparkJdbcUtil.scala
index 50cdbf5..97b0525 100644
--- 
a/phoenix-spark/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/SparkJdbcUtil.scala
+++ 
b/p
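
The core of this change: the resolved ExpressionEncoder used to turn each Spark InternalRow into a Row is now built once in the PhoenixDataWriter constructor and reused in write(), instead of the conversion being re-derived for every record. A self-contained Java sketch of that hoisting pattern, with a hypothetical Converter function standing in for the encoder:

    import java.util.List;
    import java.util.function.Function;

    public class HoistingSketch {
        public static void main(String[] args) {
            List<int[]> records = List.of(new int[] {1, 2}, new int[] {3, 4});
            // Built once, before the loop (cf. the encoder moved into the
            // PhoenixDataWriter constructor in this commit).
            Function<int[], String> converter = r -> r[0] + "," + r[1];
            for (int[] record : records) {
                // Per-record work is now just applying the prebuilt converter,
                // matching SparkJdbcUtil.toRow(encoder, internalRow) in write().
                System.out.println(converter.apply(record));
            }
        }
    }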

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5416: Fix Array2IT testArrayRefToLiteral

2019-08-06 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new f88dbf0  PHOENIX-5416: Fix Array2IT testArrayRefToLiteral
f88dbf0 is described below

commit f88dbf008b0c3154e3282af46ccbd2fe5f23916e
Author: Chinmay Kulkarni 
AuthorDate: Tue Aug 6 14:39:44 2019 -0700

PHOENIX-5416: Fix Array2IT testArrayRefToLiteral
---
 .../java/org/apache/phoenix/end2end/Array2IT.java  | 46 +++---
 .../phoenix/expression/LiteralExpression.java  | 11 ++
 2 files changed, 52 insertions(+), 5 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/Array2IT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/Array2IT.java
index 0cb60c2..9386cde 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/Array2IT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/Array2IT.java
@@ -18,6 +18,7 @@
 package org.apache.phoenix.end2end;
 
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertArrayEquals;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNull;
@@ -37,10 +38,12 @@ import org.apache.phoenix.schema.types.PhoenixArray;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.StringUtil;
-import org.junit.Ignore;
 import org.junit.Test;
 
 public class Array2IT extends ArrayIT {
+
+private static final String TEST_QUERY = "select ?[2] from \"SYSTEM\".\"CATALOG\" limit 1";
+
 @Test
 public void testFixedWidthCharArray() throws Exception {
 Connection conn;
@@ -670,12 +673,12 @@ public class Array2IT extends ArrayIT {
 
 }
 
-@Test // see PHOENIX-5416
-@Ignore
-public void testArrayRefToLiteral() throws Exception {
+@Test
+public void testArrayRefToLiteralCharArraySameLengths() throws Exception {
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
 try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
-PreparedStatement stmt = conn.prepareStatement("select ?[2] from 
\"SYSTEM\".\"CATALOG\" limit 1");
+PreparedStatement stmt = conn.prepareStatement(TEST_QUERY);
+// Test with each element of the char array having same lengths
 Array array = conn.createArrayOf("CHAR", new String[] 
{"a","b","c"});
 stmt.setArray(1, array);
 ResultSet rs = stmt.executeQuery();
@@ -684,6 +687,39 @@ public class Array2IT extends ArrayIT {
 assertFalse(rs.next());
 }
 }
+
+@Test
+public void testArrayRefToLiteralCharArrayDiffLengths() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+PreparedStatement stmt = conn.prepareStatement(TEST_QUERY);
+// Test with each element of the char array having different lengths
+Array array = conn.createArrayOf("CHAR", new String[] {"a","bb","ccc"});
+stmt.setArray(1, array);
+ResultSet rs = stmt.executeQuery();
+assertTrue(rs.next());
+assertEquals("bb", rs.getString(1));
+assertFalse(rs.next());
+}
+}
+
+@Test
+public void testArrayRefToLiteralBinaryArray() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+PreparedStatement stmt = conn.prepareStatement(TEST_QUERY);
+// Test with each element of the binary array having different lengths
+byte[][] bytes = {{0,0,1}, {0,0,2,0}, {0,0,0,3,4}};
+Array array = conn.createArrayOf("BINARY", bytes);
+stmt.setArray(1, array);
+ResultSet rs = stmt.executeQuery();
+assertTrue(rs.next());
+// Note that all elements are padded to be of the same length
+// as the longest element of the byte array
+assertArrayEquals(new byte[] {0,0,2,0,0}, rs.getBytes(1));
+assertFalse(rs.next());
+}
+}
 
 @Test
 public void testArrayConstructorWithMultipleRows1() throws Exception {
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/LiteralExpression.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/expression/LiteralExpression.java
index 110177a..de15164 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/LiteralEx
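
All of the re-enabled tests share one shape: bind a java.sql.Array literal to a parameter and index into it with ?[2] (Phoenix arrays are 1-based, so this selects the second element). A condensed sketch of that shape, assuming it runs inside one of these ITs where getUrl() and TEST_PROPERTIES are in scope:

    Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
    try (Connection conn = DriverManager.getConnection(getUrl(), props);
         PreparedStatement stmt =
                 conn.prepareStatement("select ?[2] from \"SYSTEM\".\"CATALOG\" limit 1")) {
        Array array = conn.createArrayOf("CHAR", new String[] {"a", "bb", "ccc"});
        stmt.setArray(1, array);
        try (ResultSet rs = stmt.executeQuery()) {
            if (rs.next()) {
                System.out.println(rs.getString(1)); // prints "bb"
            }
        }
    }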

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5416: Fix Array2IT testArrayRefToLiteral

2019-08-06 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new a15757d  PHOENIX-5416: Fix Array2IT testArrayRefToLiteral
a15757d is described below

commit a15757dae90d2648587619b008ef78922fe61a4f
Author: Chinmay Kulkarni 
AuthorDate: Tue Aug 6 14:39:44 2019 -0700

PHOENIX-5416: Fix Array2IT testArrayRefToLiteral
---
 .../java/org/apache/phoenix/end2end/Array2IT.java  | 46 +++---
 .../phoenix/expression/LiteralExpression.java  | 11 ++
 2 files changed, 52 insertions(+), 5 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/Array2IT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/Array2IT.java
index 0cb60c2..9386cde 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/Array2IT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/Array2IT.java
@@ -18,6 +18,7 @@
 package org.apache.phoenix.end2end;
 
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertArrayEquals;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNull;
@@ -37,10 +38,12 @@ import org.apache.phoenix.schema.types.PhoenixArray;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.StringUtil;
-import org.junit.Ignore;
 import org.junit.Test;
 
 public class Array2IT extends ArrayIT {
+
+private static final String TEST_QUERY = "select ?[2] from \"SYSTEM\".\"CATALOG\" limit 1";
+
 @Test
 public void testFixedWidthCharArray() throws Exception {
 Connection conn;
@@ -670,12 +673,12 @@ public class Array2IT extends ArrayIT {
 
 }
 
-@Test // see PHOENIX-5416
-@Ignore
-public void testArrayRefToLiteral() throws Exception {
+@Test
+public void testArrayRefToLiteralCharArraySameLengths() throws Exception {
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
 try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
-PreparedStatement stmt = conn.prepareStatement("select ?[2] from 
\"SYSTEM\".\"CATALOG\" limit 1");
+PreparedStatement stmt = conn.prepareStatement(TEST_QUERY);
+// Test with each element of the char array having same lengths
 Array array = conn.createArrayOf("CHAR", new String[] 
{"a","b","c"});
 stmt.setArray(1, array);
 ResultSet rs = stmt.executeQuery();
@@ -684,6 +687,39 @@ public class Array2IT extends ArrayIT {
 assertFalse(rs.next());
 }
 }
+
+@Test
+public void testArrayRefToLiteralCharArrayDiffLengths() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+PreparedStatement stmt = conn.prepareStatement(TEST_QUERY);
+// Test with each element of the char array having different lengths
+Array array = conn.createArrayOf("CHAR", new String[] {"a","bb","ccc"});
+stmt.setArray(1, array);
+ResultSet rs = stmt.executeQuery();
+assertTrue(rs.next());
+assertEquals("bb", rs.getString(1));
+assertFalse(rs.next());
+}
+}
+
+@Test
+public void testArrayRefToLiteralBinaryArray() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+PreparedStatement stmt = conn.prepareStatement(TEST_QUERY);
+// Test with each element of the binary array having different lengths
+byte[][] bytes = {{0,0,1}, {0,0,2,0}, {0,0,0,3,4}};
+Array array = conn.createArrayOf("BINARY", bytes);
+stmt.setArray(1, array);
+ResultSet rs = stmt.executeQuery();
+assertTrue(rs.next());
+// Note that all elements are padded to be of the same length
+// as the longest element of the byte array
+assertArrayEquals(new byte[] {0,0,2,0,0}, rs.getBytes(1));
+assertFalse(rs.next());
+}
+}
 
 @Test
 public void testArrayConstructorWithMultipleRows1() throws Exception {
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/LiteralExpression.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/expression/LiteralExpression.java
index 110177a..de15164 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/LiteralEx

[phoenix] branch master updated: PHOENIX-5416: Fix Array2IT testArrayRefToLiteral

2019-08-06 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new ffcffb0  PHOENIX-5416: Fix Array2IT testArrayRefToLiteral
ffcffb0 is described below

commit ffcffb031fa9da661dccae48b94366ddb6238b3f
Author: Chinmay Kulkarni 
AuthorDate: Tue Aug 6 14:39:44 2019 -0700

PHOENIX-5416: Fix Array2IT testArrayRefToLiteral
---
 .../java/org/apache/phoenix/end2end/Array2IT.java  | 46 +++---
 .../phoenix/expression/LiteralExpression.java  | 11 ++
 2 files changed, 52 insertions(+), 5 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/Array2IT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/Array2IT.java
index 0cb60c2..9386cde 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/Array2IT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/Array2IT.java
@@ -18,6 +18,7 @@
 package org.apache.phoenix.end2end;
 
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertArrayEquals;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNull;
@@ -37,10 +38,12 @@ import org.apache.phoenix.schema.types.PhoenixArray;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.StringUtil;
-import org.junit.Ignore;
 import org.junit.Test;
 
 public class Array2IT extends ArrayIT {
+
+private static final String TEST_QUERY = "select ?[2] from \"SYSTEM\".\"CATALOG\" limit 1";
+
 @Test
 public void testFixedWidthCharArray() throws Exception {
 Connection conn;
@@ -670,12 +673,12 @@ public class Array2IT extends ArrayIT {
 
 }
 
-@Test // see PHOENIX-5416
-@Ignore
-public void testArrayRefToLiteral() throws Exception {
+@Test
+public void testArrayRefToLiteralCharArraySameLengths() throws Exception {
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
 try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
-PreparedStatement stmt = conn.prepareStatement("select ?[2] from 
\"SYSTEM\".\"CATALOG\" limit 1");
+PreparedStatement stmt = conn.prepareStatement(TEST_QUERY);
+// Test with each element of the char array having same lengths
 Array array = conn.createArrayOf("CHAR", new String[] 
{"a","b","c"});
 stmt.setArray(1, array);
 ResultSet rs = stmt.executeQuery();
@@ -684,6 +687,39 @@ public class Array2IT extends ArrayIT {
 assertFalse(rs.next());
 }
 }
+
+@Test
+public void testArrayRefToLiteralCharArrayDiffLengths() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+PreparedStatement stmt = conn.prepareStatement(TEST_QUERY);
+// Test with each element of the char array having different lengths
+Array array = conn.createArrayOf("CHAR", new String[] {"a","bb","ccc"});
+stmt.setArray(1, array);
+ResultSet rs = stmt.executeQuery();
+assertTrue(rs.next());
+assertEquals("bb", rs.getString(1));
+assertFalse(rs.next());
+}
+}
+
+@Test
+public void testArrayRefToLiteralBinaryArray() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+PreparedStatement stmt = conn.prepareStatement(TEST_QUERY);
+// Test with each element of the binary array having different lengths
+byte[][] bytes = {{0,0,1}, {0,0,2,0}, {0,0,0,3,4}};
+Array array = conn.createArrayOf("BINARY", bytes);
+stmt.setArray(1, array);
+ResultSet rs = stmt.executeQuery();
+assertTrue(rs.next());
+// Note that all elements are padded to be of the same length
+// as the longest element of the byte array
+assertArrayEquals(new byte[] {0,0,2,0,0}, rs.getBytes(1));
+assertFalse(rs.next());
+}
+}
 
 @Test
 public void testArrayConstructorWithMultipleRows1() throws Exception {
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/LiteralExpression.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/expression/LiteralExpression.java
index 110177a..de15164 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/LiteralExpression.java
+++ 
b/pho

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5416: Fix Array2IT testArrayRefToLiteral

2019-08-06 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 903ed0e  PHOENIX-5416: Fix Array2IT testArrayRefToLiteral
903ed0e is described below

commit 903ed0e3d8021bd2350693f87d11a6e5f35be42d
Author: Chinmay Kulkarni 
AuthorDate: Tue Aug 6 14:39:44 2019 -0700

PHOENIX-5416: Fix Array2IT testArrayRefToLiteral
---
 .../java/org/apache/phoenix/end2end/Array2IT.java  | 46 +++---
 .../phoenix/expression/LiteralExpression.java  | 11 ++
 2 files changed, 52 insertions(+), 5 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/Array2IT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/Array2IT.java
index 0cb60c2..9386cde 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/Array2IT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/Array2IT.java
@@ -18,6 +18,7 @@
 package org.apache.phoenix.end2end;
 
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertArrayEquals;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNull;
@@ -37,10 +38,12 @@ import org.apache.phoenix.schema.types.PhoenixArray;
 import org.apache.phoenix.util.PropertiesUtil;
 import org.apache.phoenix.util.SchemaUtil;
 import org.apache.phoenix.util.StringUtil;
-import org.junit.Ignore;
 import org.junit.Test;
 
 public class Array2IT extends ArrayIT {
+
+private static final String TEST_QUERY = "select ?[2] from \"SYSTEM\".\"CATALOG\" limit 1";
+
 @Test
 public void testFixedWidthCharArray() throws Exception {
 Connection conn;
@@ -670,12 +673,12 @@ public class Array2IT extends ArrayIT {
 
 }
 
-@Test // see PHOENIX-5416
-@Ignore
-public void testArrayRefToLiteral() throws Exception {
+@Test
+public void testArrayRefToLiteralCharArraySameLengths() throws Exception {
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
 try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
-PreparedStatement stmt = conn.prepareStatement("select ?[2] from 
\"SYSTEM\".\"CATALOG\" limit 1");
+PreparedStatement stmt = conn.prepareStatement(TEST_QUERY);
+// Test with each element of the char array having same lengths
 Array array = conn.createArrayOf("CHAR", new String[] 
{"a","b","c"});
 stmt.setArray(1, array);
 ResultSet rs = stmt.executeQuery();
@@ -684,6 +687,39 @@ public class Array2IT extends ArrayIT {
 assertFalse(rs.next());
 }
 }
+
+@Test
+public void testArrayRefToLiteralCharArrayDiffLengths() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+PreparedStatement stmt = conn.prepareStatement(TEST_QUERY);
+// Test with each element of the char array having different lengths
+Array array = conn.createArrayOf("CHAR", new String[] {"a","bb","ccc"});
+stmt.setArray(1, array);
+ResultSet rs = stmt.executeQuery();
+assertTrue(rs.next());
+assertEquals("bb", rs.getString(1));
+assertFalse(rs.next());
+}
+}
+
+@Test
+public void testArrayRefToLiteralBinaryArray() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+PreparedStatement stmt = conn.prepareStatement(TEST_QUERY);
+// Test with each element of the binary array having different lengths
+byte[][] bytes = {{0,0,1}, {0,0,2,0}, {0,0,0,3,4}};
+Array array = conn.createArrayOf("BINARY", bytes);
+stmt.setArray(1, array);
+ResultSet rs = stmt.executeQuery();
+assertTrue(rs.next());
+// Note that all elements are padded to be of the same length
+// as the longest element of the byte array
+assertArrayEquals(new byte[] {0,0,2,0,0}, rs.getBytes(1));
+assertFalse(rs.next());
+}
+}
 
 @Test
 public void testArrayConstructorWithMultipleRows1() throws Exception {
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/LiteralExpression.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/expression/LiteralExpression.java
index 110177a..de15164 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/LiteralEx
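
The expected bytes in testArrayRefToLiteralBinaryArray follow from the padding rule the LiteralExpression change encodes: each element of a BINARY array literal is right-padded with zeros to the length of the longest element. Worked out for the test's data (Arrays.copyOf zero-fills the tail, so it models the padding):

    byte[][] bytes = {{0,0,1}, {0,0,2,0}, {0,0,0,3,4}};
    int max = 0;
    for (byte[] b : bytes) {
        max = Math.max(max, b.length);               // longest element has 5 bytes
    }
    byte[] second = java.util.Arrays.copyOf(bytes[1], max);
    // {0,0,2,0} -> {0,0,2,0,0}, matching the assertArrayEquals in the test.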

[phoenix] branch 4.14-HBase-1.3 updated: PHOENIX-5416: Fix Array2IT testArrayRefToLiteral

2019-08-06 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.14-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.3 by this push:
 new 8830555  PHOENIX-5416: Fix Array2IT testArrayRefToLiteral
8830555 is described below

commit 88305552257b36663bb1d260cdd9384f277c36c3
Author: Chinmay Kulkarni 
AuthorDate: Tue Aug 6 23:18:20 2019 -0700

PHOENIX-5416: Fix Array2IT testArrayRefToLiteral
---
 .../java/org/apache/phoenix/end2end/Array2IT.java  | 51 +-
 .../phoenix/expression/LiteralExpression.java  | 11 +
 2 files changed, 51 insertions(+), 11 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/Array2IT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/Array2IT.java
index 52bfb86..9386cde 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/Array2IT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/Array2IT.java
@@ -18,6 +18,7 @@
 package org.apache.phoenix.end2end;
 
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertArrayEquals;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNull;
@@ -40,6 +41,9 @@ import org.apache.phoenix.util.StringUtil;
 import org.junit.Test;
 
 public class Array2IT extends ArrayIT {
+
+private static final String TEST_QUERY = "select ?[2] from \"SYSTEM\".\"CATALOG\" limit 1";
+
 @Test
 public void testFixedWidthCharArray() throws Exception {
 Connection conn;
@@ -670,26 +674,51 @@ public class Array2IT extends ArrayIT {
 }
 
 @Test
-public void testArrayRefToLiteral() throws Exception {
-Connection conn;
-
+public void testArrayRefToLiteralCharArraySameLengths() throws Exception {
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-conn = DriverManager.getConnection(getUrl(), props);
-try {
-PreparedStatement stmt = conn.prepareStatement("select ?[2] from 
\"SYSTEM\".\"catalog\" limit 1");
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+PreparedStatement stmt = conn.prepareStatement(TEST_QUERY);
+// Test with each element of the char array having same lengths
 Array array = conn.createArrayOf("CHAR", new String[] 
{"a","b","c"});
 stmt.setArray(1, array);
 ResultSet rs = stmt.executeQuery();
 assertTrue(rs.next());
 assertEquals("b", rs.getString(1));
 assertFalse(rs.next());
-} catch (SQLException e) {
-} finally {
-if (conn != null) {
-conn.close();
-}
 }
+}
+
+@Test
+public void testArrayRefToLiteralCharArrayDiffLengths() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+PreparedStatement stmt = conn.prepareStatement(TEST_QUERY);
+// Test with each element of the char array having different lengths
+Array array = conn.createArrayOf("CHAR", new String[] {"a","bb","ccc"});
+stmt.setArray(1, array);
+ResultSet rs = stmt.executeQuery();
+assertTrue(rs.next());
+assertEquals("bb", rs.getString(1));
+assertFalse(rs.next());
+}
+}
 
+@Test
+public void testArrayRefToLiteralBinaryArray() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+PreparedStatement stmt = conn.prepareStatement(TEST_QUERY);
+// Test with each element of the binary array having different lengths
+byte[][] bytes = {{0,0,1}, {0,0,2,0}, {0,0,0,3,4}};
+Array array = conn.createArrayOf("BINARY", bytes);
+stmt.setArray(1, array);
+ResultSet rs = stmt.executeQuery();
+assertTrue(rs.next());
+// Note that all elements are padded to be of the same length
+// as the longest element of the byte array
+assertArrayEquals(new byte[] {0,0,2,0,0}, rs.getBytes(1));
+assertFalse(rs.next());
+}
 }
 
 @Test
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/LiteralExpression.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/expression/LiteralExpression.java
index 110177a..de15164 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/LiteralExpression.java
+++ 
b/phoenix-core

[phoenix] branch 4.14-HBase-1.4 updated: PHOENIX-5416: Fix Array2IT testArrayRefToLiteral

2019-08-06 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.14-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.14-HBase-1.4 by this push:
 new b8cc92a  PHOENIX-5416: Fix Array2IT testArrayRefToLiteral
b8cc92a is described below

commit b8cc92a64073e8d9b255f070ed11d9f497406cbd
Author: Chinmay Kulkarni 
AuthorDate: Tue Aug 6 23:18:20 2019 -0700

PHOENIX-5416: Fix Array2IT testArrayRefToLiteral
---
 .../java/org/apache/phoenix/end2end/Array2IT.java  | 51 +-
 .../phoenix/expression/LiteralExpression.java  | 11 +
 2 files changed, 51 insertions(+), 11 deletions(-)

diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/Array2IT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/Array2IT.java
index 52bfb86..9386cde 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/Array2IT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/Array2IT.java
@@ -18,6 +18,7 @@
 package org.apache.phoenix.end2end;
 
 import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES;
+import static org.junit.Assert.assertArrayEquals;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertFalse;
 import static org.junit.Assert.assertNull;
@@ -40,6 +41,9 @@ import org.apache.phoenix.util.StringUtil;
 import org.junit.Test;
 
 public class Array2IT extends ArrayIT {
+
+private static final String TEST_QUERY = "select ?[2] from \"SYSTEM\".\"CATALOG\" limit 1";
+
 @Test
 public void testFixedWidthCharArray() throws Exception {
 Connection conn;
@@ -670,26 +674,51 @@ public class Array2IT extends ArrayIT {
 }
 
 @Test
-public void testArrayRefToLiteral() throws Exception {
-Connection conn;
-
+public void testArrayRefToLiteralCharArraySameLengths() throws Exception {
 Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
-conn = DriverManager.getConnection(getUrl(), props);
-try {
-PreparedStatement stmt = conn.prepareStatement("select ?[2] from 
\"SYSTEM\".\"catalog\" limit 1");
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+PreparedStatement stmt = conn.prepareStatement(TEST_QUERY);
+// Test with each element of the char array having same lengths
 Array array = conn.createArrayOf("CHAR", new String[] 
{"a","b","c"});
 stmt.setArray(1, array);
 ResultSet rs = stmt.executeQuery();
 assertTrue(rs.next());
 assertEquals("b", rs.getString(1));
 assertFalse(rs.next());
-} catch (SQLException e) {
-} finally {
-if (conn != null) {
-conn.close();
-}
 }
+}
+
+@Test
+public void testArrayRefToLiteralCharArrayDiffLengths() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+PreparedStatement stmt = conn.prepareStatement(TEST_QUERY);
+// Test with each element of the char array having different lengths
+Array array = conn.createArrayOf("CHAR", new String[] {"a","bb","ccc"});
+stmt.setArray(1, array);
+ResultSet rs = stmt.executeQuery();
+assertTrue(rs.next());
+assertEquals("bb", rs.getString(1));
+assertFalse(rs.next());
+}
+}
 
+@Test
+public void testArrayRefToLiteralBinaryArray() throws Exception {
+Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
+try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
+PreparedStatement stmt = conn.prepareStatement(TEST_QUERY);
+// Test with each element of the binary array having different lengths
+byte[][] bytes = {{0,0,1}, {0,0,2,0}, {0,0,0,3,4}};
+Array array = conn.createArrayOf("BINARY", bytes);
+stmt.setArray(1, array);
+ResultSet rs = stmt.executeQuery();
+assertTrue(rs.next());
+// Note that all elements are padded to be of the same length
+// as the longest element of the byte array
+assertArrayEquals(new byte[] {0,0,2,0,0}, rs.getBytes(1));
+assertFalse(rs.next());
+}
 }
 
 @Test
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/LiteralExpression.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/expression/LiteralExpression.java
index 110177a..de15164 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/expression/LiteralExpression.java
+++ 
b/phoenix-core
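
Besides re-enabling the test, the 4.14 backports replace the old empty catch (SQLException e) block, which could silently swallow failures, and the manual close-in-finally with try-with-resources, as in this fragment mirroring the diff:

    try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
        // The connection is closed automatically, and any SQLException now
        // propagates to JUnit instead of being discarded.
    }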

[phoenix] branch master updated: PHOENIX-5348: Fix flaky test: testIndexRebuildTask

2019-08-16 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 4f8b3c2  PHOENIX-5348: Fix flaky test: testIndexRebuildTask
4f8b3c2 is described below

commit 4f8b3c2207a56c4afaf2502c0af26a1f4b63f2bb
Author: Gokcen Iskender 
AuthorDate: Thu Jul 25 11:04:18 2019 -0700

PHOENIX-5348: Fix flaky test: testIndexRebuildTask

Signed-off-by: Chinmay Kulkarni 
---
 .../phoenix/end2end/DropTableWithViewsIT.java  | 33 -
 .../apache/phoenix/end2end/IndexRebuildTaskIT.java | 86 +++---
 .../phoenix/end2end/index/IndexMetadataIT.java |  5 +-
 .../phoenix/coprocessor/TaskRegionObserver.java| 12 ++-
 .../coprocessor/tasks/IndexRebuildTask.java|  4 +-
 .../index/PhoenixIndexImportDirectReducer.java |  7 +-
 .../org/apache/phoenix/schema/MetaDataClient.java  |  2 +-
 .../java/org/apache/phoenix/schema/task/Task.java  | 19 -
 8 files changed, 104 insertions(+), 64 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
index 6741585..2589fa3 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
@@ -17,6 +17,7 @@
  */
 package org.apache.phoenix.end2end;
 
+import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TABLE_NAME;
 import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
@@ -26,6 +27,7 @@ import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.ResultSet;
 import java.sql.SQLException;
+import java.sql.Timestamp;
 import java.util.Arrays;
 import java.util.Collection;
 
@@ -137,7 +139,8 @@ public class DropTableWithViewsIT extends 
SplitSystemCatalogIT {
 task.run();
 task.run();
 
-assertTaskColumns(conn, PTable.TaskStatus.COMPLETED.toString(), PTable.TaskType.DROP_CHILD_VIEWS, null);
+assertTaskColumns(conn, PTable.TaskStatus.COMPLETED.toString(), PTable.TaskType.DROP_CHILD_VIEWS,
+null, null, null, null, null);
 
 // Views should be dropped by now
 TableName linkTable = TableName.valueOf(PhoenixDatabaseMetaData.SYSTEM_CHILD_LINK_NAME_BYTES);
@@ -156,7 +159,9 @@ public class DropTableWithViewsIT extends 
SplitSystemCatalogIT {
 }
 }
 
-public static void assertTaskColumns(Connection conn, String expectedStatus, PTable.TaskType taskType, String expectedData)
+public static void assertTaskColumns(Connection conn, String expectedStatus, PTable.TaskType taskType,
+String expectedTableName, String expectedTenantId, String expectedSchema, Timestamp expectedTs,
+String expectedIndexName)
 throws SQLException {
 ResultSet rs = conn.createStatement().executeQuery("SELECT * " +
 " FROM " + PhoenixDatabaseMetaData.SYSTEM_TASK_NAME +
@@ -166,9 +171,29 @@ public class DropTableWithViewsIT extends 
SplitSystemCatalogIT {
 String taskStatus = rs.getString(PhoenixDatabaseMetaData.TASK_STATUS);
 assertEquals(expectedStatus, taskStatus);
 
-if (expectedData != null) {
+if (expectedTableName != null) {
+String tableName = rs.getString(PhoenixDatabaseMetaData.TABLE_NAME);
+assertEquals(expectedTableName, tableName);
+}
+
+if (expectedTenantId != null) {
+String tenantId = rs.getString(PhoenixDatabaseMetaData.TENANT_ID);
+assertEquals(expectedTenantId, tenantId);
+}
+
+if (expectedSchema != null) {
+String schema = rs.getString(PhoenixDatabaseMetaData.TABLE_SCHEM);
+assertEquals(expectedSchema, schema);
+}
+
+if (expectedTs != null) {
+Timestamp ts = rs.getTimestamp(PhoenixDatabaseMetaData.TASK_TS);
+assertEquals(expectedTs, ts);
+}
+
+if (expectedIndexName != null) {
 String data = rs.getString(PhoenixDatabaseMetaData.TASK_DATA);
-assertEquals(expectedData, data);
+assertEquals(true, data.contains("\"IndexName\":\"" + expectedIndexName));
 }
 }
 }
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexRebuildTaskIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexRebuildTaskIT.java
index c63cf2c..fc514ef 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexRebuildTaskIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexRebuildTaskIT.java
@@ -35,8 +35,6 @@ import 
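
assertTaskColumns now checks individual SYSTEM.TASK columns instead of a single opaque data string. A hedged sketch of reading those columns directly, reusing only the PhoenixDatabaseMetaData constants that appear in the diff above (and getUrl() from the IT base class):

    try (Connection conn = DriverManager.getConnection(getUrl());
         ResultSet rs = conn.createStatement().executeQuery(
                 "SELECT * FROM " + PhoenixDatabaseMetaData.SYSTEM_TASK_NAME)) {
        while (rs.next()) {
            // The constants hold the column names, so they work as ResultSet keys.
            System.out.println(rs.getString(PhoenixDatabaseMetaData.TASK_STATUS) + " "
                    + rs.getString(PhoenixDatabaseMetaData.TABLE_NAME) + " "
                    + rs.getTimestamp(PhoenixDatabaseMetaData.TASK_TS) + " "
                    + rs.getString(PhoenixDatabaseMetaData.TASK_DATA));
        }
    }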

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-5348: Fix flaky test: testIndexRebuildTask

2019-08-16 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new d91565f  PHOENIX-5348: Fix flaky test: testIndexRebuildTask
d91565f is described below

commit d91565ffc83e59c0b3f2d81d94d55e1ae45d02d4
Author: Gokcen Iskender 
AuthorDate: Thu Jul 25 11:04:18 2019 -0700

PHOENIX-5348: Fix flaky test: testIndexRebuildTask

Signed-off-by: Chinmay Kulkarni 
---
 .../phoenix/end2end/DropTableWithViewsIT.java  | 33 -
 .../apache/phoenix/end2end/IndexRebuildTaskIT.java | 85 +++---
 .../phoenix/end2end/index/IndexMetadataIT.java |  5 +-
 .../phoenix/coprocessor/TaskRegionObserver.java|  9 ++-
 .../coprocessor/tasks/IndexRebuildTask.java|  4 +-
 .../index/PhoenixIndexImportDirectReducer.java |  7 +-
 .../org/apache/phoenix/schema/MetaDataClient.java  |  2 +-
 .../java/org/apache/phoenix/schema/task/Task.java  | 19 -
 8 files changed, 103 insertions(+), 61 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
index 5836c56..6663dde 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
@@ -17,6 +17,7 @@
  */
 package org.apache.phoenix.end2end;
 
+import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TABLE_NAME;
 import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
@@ -26,6 +27,7 @@ import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.ResultSet;
 import java.sql.SQLException;
+import java.sql.Timestamp;
 import java.util.Arrays;
 import java.util.Collection;
 
@@ -137,7 +139,8 @@ public class DropTableWithViewsIT extends 
SplitSystemCatalogIT {
 task.run();
 task.run();
 
-assertTaskColumns(conn, PTable.TaskStatus.COMPLETED.toString(), PTable.TaskType.DROP_CHILD_VIEWS, null);
+assertTaskColumns(conn, PTable.TaskStatus.COMPLETED.toString(), PTable.TaskType.DROP_CHILD_VIEWS,
+null, null, null, null, null);
 
 // Views should be dropped by now
 TableName linkTable = TableName.valueOf(PhoenixDatabaseMetaData.SYSTEM_CHILD_LINK_NAME_BYTES);
@@ -156,7 +159,9 @@ public class DropTableWithViewsIT extends 
SplitSystemCatalogIT {
 }
 }
 
-public static void assertTaskColumns(Connection conn, String expectedStatus, PTable.TaskType taskType, String expectedData)
+public static void assertTaskColumns(Connection conn, String expectedStatus, PTable.TaskType taskType,
+String expectedTableName, String expectedTenantId, String expectedSchema, Timestamp expectedTs,
+String expectedIndexName)
 throws SQLException {
 ResultSet rs = conn.createStatement().executeQuery("SELECT * " +
 " FROM " + PhoenixDatabaseMetaData.SYSTEM_TASK_NAME +
@@ -166,9 +171,29 @@ public class DropTableWithViewsIT extends 
SplitSystemCatalogIT {
 String taskStatus = rs.getString(PhoenixDatabaseMetaData.TASK_STATUS);
 assertEquals(expectedStatus, taskStatus);
 
-if (expectedData != null) {
+if (expectedTableName != null) {
+String tableName = rs.getString(PhoenixDatabaseMetaData.TABLE_NAME);
+assertEquals(expectedTableName, tableName);
+}
+
+if (expectedTenantId != null) {
+String tenantId = rs.getString(PhoenixDatabaseMetaData.TENANT_ID);
+assertEquals(expectedTenantId, tenantId);
+}
+
+if (expectedSchema != null) {
+String schema = rs.getString(PhoenixDatabaseMetaData.TABLE_SCHEM);
+assertEquals(expectedSchema, schema);
+}
+
+if (expectedTs != null) {
+Timestamp ts = rs.getTimestamp(PhoenixDatabaseMetaData.TASK_TS);
+assertEquals(expectedTs, ts);
+}
+
+if (expectedIndexName != null) {
 String data = rs.getString(PhoenixDatabaseMetaData.TASK_DATA);
-assertEquals(expectedData, data);
+assertEquals(true, data.contains("\"IndexName\":\"" + 
expectedIndexName));
 }
 }
 }
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexRebuildTaskIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexRebuildTaskIT.java
index c4bcb30..8d6bb06 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexRebuildTaskIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexRebuildTaskIT.java
@@ -27,6 +27,7 @@ imp

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-5348: Fix flaky test: testIndexRebuildTask

2019-08-16 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 77cf9ee  PHOENIX-5348: Fix flaky test: testIndexRebuildTask
77cf9ee is described below

commit 77cf9eeb5a9760d362087f2879514f1516f900ac
Author: Gokcen Iskender 
AuthorDate: Thu Jul 25 11:04:18 2019 -0700

PHOENIX-5348: Fix flaky test: testIndexRebuildTask

Signed-off-by: Chinmay Kulkarni 
---
 .../phoenix/end2end/DropTableWithViewsIT.java  | 33 -
 .../apache/phoenix/end2end/IndexRebuildTaskIT.java | 85 +++---
 .../phoenix/end2end/index/IndexMetadataIT.java |  5 +-
 .../phoenix/coprocessor/TaskRegionObserver.java|  9 ++-
 .../coprocessor/tasks/IndexRebuildTask.java|  4 +-
 .../index/PhoenixIndexImportDirectReducer.java |  7 +-
 .../org/apache/phoenix/schema/MetaDataClient.java  |  2 +-
 .../java/org/apache/phoenix/schema/task/Task.java  | 19 -
 8 files changed, 103 insertions(+), 61 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
index 5836c56..6663dde 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
@@ -17,6 +17,7 @@
  */
 package org.apache.phoenix.end2end;
 
+import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TABLE_NAME;
 import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
@@ -26,6 +27,7 @@ import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.ResultSet;
 import java.sql.SQLException;
+import java.sql.Timestamp;
 import java.util.Arrays;
 import java.util.Collection;
 
@@ -137,7 +139,8 @@ public class DropTableWithViewsIT extends 
SplitSystemCatalogIT {
 task.run();
 task.run();
 
-assertTaskColumns(conn, PTable.TaskStatus.COMPLETED.toString(), PTable.TaskType.DROP_CHILD_VIEWS, null);
+assertTaskColumns(conn, PTable.TaskStatus.COMPLETED.toString(), PTable.TaskType.DROP_CHILD_VIEWS,
+null, null, null, null, null);
 
 // Views should be dropped by now
 TableName linkTable = TableName.valueOf(PhoenixDatabaseMetaData.SYSTEM_CHILD_LINK_NAME_BYTES);
@@ -156,7 +159,9 @@ public class DropTableWithViewsIT extends 
SplitSystemCatalogIT {
 }
 }
 
-public static void assertTaskColumns(Connection conn, String expectedStatus, PTable.TaskType taskType, String expectedData)
+public static void assertTaskColumns(Connection conn, String expectedStatus, PTable.TaskType taskType,
+String expectedTableName, String expectedTenantId, String expectedSchema, Timestamp expectedTs,
+String expectedIndexName)
 throws SQLException {
 ResultSet rs = conn.createStatement().executeQuery("SELECT * " +
 " FROM " + PhoenixDatabaseMetaData.SYSTEM_TASK_NAME +
@@ -166,9 +171,29 @@ public class DropTableWithViewsIT extends 
SplitSystemCatalogIT {
 String taskStatus = rs.getString(PhoenixDatabaseMetaData.TASK_STATUS);
 assertEquals(expectedStatus, taskStatus);
 
-if (expectedData != null) {
+if (expectedTableName != null) {
+String tableName = rs.getString(PhoenixDatabaseMetaData.TABLE_NAME);
+assertEquals(expectedTableName, tableName);
+}
+
+if (expectedTenantId != null) {
+String tenantId = rs.getString(PhoenixDatabaseMetaData.TENANT_ID);
+assertEquals(expectedTenantId, tenantId);
+}
+
+if (expectedSchema != null) {
+String schema = rs.getString(PhoenixDatabaseMetaData.TABLE_SCHEM);
+assertEquals(expectedSchema, schema);
+}
+
+if (expectedTs != null) {
+Timestamp ts = rs.getTimestamp(PhoenixDatabaseMetaData.TASK_TS);
+assertEquals(expectedTs, ts);
+}
+
+if (expectedIndexName != null) {
 String data = rs.getString(PhoenixDatabaseMetaData.TASK_DATA);
-assertEquals(expectedData, data);
+assertEquals(true, data.contains("\"IndexName\":\"" + 
expectedIndexName));
 }
 }
 }
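The widened assertTaskColumns above checks each SYSTEM.TASK column only when its expected value is non-null, so existing callers (like the DROP_CHILD_VIEWS check earlier in this diff) can opt out of the new assertions by passing nulls. A minimal sketch of a call site follows; the JDBC URL, the table/schema/index names, and the use of PTable.TaskType.INDEX_REBUILD are illustrative assumptions, not taken from this commit:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Timestamp;
    import org.apache.phoenix.end2end.DropTableWithViewsIT;
    import org.apache.phoenix.schema.PTable;

    public class AssertTaskColumnsSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical standalone driver; the real callers are JUnit ITs.
            try (Connection conn =
                    DriverManager.getConnection("jdbc:phoenix:localhost")) {
                Timestamp ts = new Timestamp(System.currentTimeMillis());
                DropTableWithViewsIT.assertTaskColumns(conn,
                        PTable.TaskStatus.COMPLETED.toString(),
                        PTable.TaskType.INDEX_REBUILD,
                        "T1",       // expectedTableName
                        null,       // expectedTenantId: null skips this check
                        "S1",       // expectedSchema
                        ts,         // expectedTs
                        "T1_IDX");  // matched against the TASK_DATA JSON
            }
        }
    }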
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexRebuildTaskIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexRebuildTaskIT.java
index c4bcb30..8d6bb06 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexRebuildTaskIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexRebuildTaskIT.java
@@ -27,6 +27,7 @@ imp

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-5348: Fix flaky test: testIndexRebuildTask

2019-08-16 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 6a2165c  PHOENIX-5348: Fix flaky test: testIndexRebuildTask
6a2165c is described below

commit 6a2165c05ea32b9625fe0d9bdbb2c77d661b1953
Author: Gokcen Iskender 
AuthorDate: Thu Jul 25 11:04:18 2019 -0700

PHOENIX-5348: Fix flaky test: testIndexRebuildTask

Signed-off-by: Chinmay Kulkarni 
---
 .../phoenix/end2end/DropTableWithViewsIT.java  | 33 -
 .../apache/phoenix/end2end/IndexRebuildTaskIT.java | 85 +++---
 .../phoenix/end2end/index/IndexMetadataIT.java |  5 +-
 .../phoenix/coprocessor/TaskRegionObserver.java|  9 ++-
 .../coprocessor/tasks/IndexRebuildTask.java|  4 +-
 .../index/PhoenixIndexImportDirectReducer.java |  7 +-
 .../org/apache/phoenix/schema/MetaDataClient.java  |  2 +-
 .../java/org/apache/phoenix/schema/task/Task.java  | 19 -
 8 files changed, 103 insertions(+), 61 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
index 5836c56..6663dde 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/DropTableWithViewsIT.java
@@ -17,6 +17,7 @@
  */
 package org.apache.phoenix.end2end;
 
+import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.TABLE_NAME;
 import static org.apache.phoenix.util.PhoenixRuntime.TENANT_ID_ATTRIB;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertTrue;
@@ -26,6 +27,7 @@ import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.ResultSet;
 import java.sql.SQLException;
+import java.sql.Timestamp;
 import java.util.Arrays;
 import java.util.Collection;
 
@@ -137,7 +139,8 @@ public class DropTableWithViewsIT extends 
SplitSystemCatalogIT {
 task.run();
 task.run();
 
-assertTaskColumns(conn, PTable.TaskStatus.COMPLETED.toString(), 
PTable.TaskType.DROP_CHILD_VIEWS, null);
+assertTaskColumns(conn, PTable.TaskStatus.COMPLETED.toString(), 
PTable.TaskType.DROP_CHILD_VIEWS,
+null, null, null, null, null);
 
 // Views should be dropped by now
 TableName linkTable = 
TableName.valueOf(PhoenixDatabaseMetaData.SYSTEM_CHILD_LINK_NAME_BYTES);
@@ -156,7 +159,9 @@ public class DropTableWithViewsIT extends 
SplitSystemCatalogIT {
 }
 }
 
-public static void assertTaskColumns(Connection conn, String 
expectedStatus, PTable.TaskType taskType, String expectedData)
+public static void assertTaskColumns(Connection conn, String 
expectedStatus, PTable.TaskType taskType,
+String expectedTableName, String expectedTenantId, String 
expectedSchema, Timestamp expectedTs,
+String expectedIndexName)
 throws SQLException {
 ResultSet rs = conn.createStatement().executeQuery("SELECT * " +
 " FROM " + PhoenixDatabaseMetaData.SYSTEM_TASK_NAME +
@@ -166,9 +171,29 @@ public class DropTableWithViewsIT extends 
SplitSystemCatalogIT {
 String taskStatus = rs.getString(PhoenixDatabaseMetaData.TASK_STATUS);
 assertEquals(expectedStatus, taskStatus);
 
-if (expectedData != null) {
+if (expectedTableName != null) {
+String tableName = 
rs.getString(PhoenixDatabaseMetaData.TABLE_NAME);
+assertEquals(expectedTableName, tableName);
+}
+
+if (expectedTenantId != null) {
+String tenantId = rs.getString(PhoenixDatabaseMetaData.TENANT_ID);
+assertEquals(expectedTenantId, tenantId);
+}
+
+if (expectedSchema != null) {
+String schema = rs.getString(PhoenixDatabaseMetaData.TABLE_SCHEM);
+assertEquals(expectedSchema, schema);
+}
+
+if (expectedTs != null) {
+Timestamp ts = rs.getTimestamp(PhoenixDatabaseMetaData.TASK_TS);
+assertEquals(expectedTs, ts);
+}
+
+if (expectedIndexName != null) {
 String data = rs.getString(PhoenixDatabaseMetaData.TASK_DATA);
-assertEquals(expectedData, data);
+assertEquals(true, data.contains("\"IndexName\":\"" + 
expectedIndexName));
 }
 }
 }
diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexRebuildTaskIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexRebuildTaskIT.java
index c4bcb30..8d6bb06 100644
--- 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexRebuildTaskIT.java
+++ 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexRebuildTaskIT.java
@@ -27,6 +27,7 @@ imp

svn commit: r1865955 - in /phoenix/site: publish/language/datatypes.html publish/language/functions.html publish/language/index.html publish/team.html source/src/site/markdown/team.md

2019-08-26 Thread chinmayskulkarni
Author: chinmayskulkarni
Date: Mon Aug 26 21:23:49 2019
New Revision: 1865955

URL: http://svn.apache.org/viewvc?rev=1865955&view=rev
Log:
Update Chinmay Kulkarni's role to PMC

Modified:
phoenix/site/publish/language/datatypes.html
phoenix/site/publish/language/functions.html
phoenix/site/publish/language/index.html
phoenix/site/publish/team.html
phoenix/site/source/src/site/markdown/team.md

Modified: phoenix/site/publish/language/datatypes.html
URL: 
http://svn.apache.org/viewvc/phoenix/site/publish/language/datatypes.html?rev=1865955&r1=1865954&r2=1865955&view=diff
==
--- phoenix/site/publish/language/datatypes.html (original)
+++ phoenix/site/publish/language/datatypes.html Mon Aug 26 21:23:49 2019
@@ -1,7 +1,7 @@

Modified: phoenix/site/publish/language/functions.html
URL: 
http://svn.apache.org/viewvc/phoenix/site/publish/language/functions.html?rev=1865955&r1=1865954&r2=1865955&view=diff
==
--- phoenix/site/publish/language/functions.html (original)
+++ phoenix/site/publish/language/functions.html Mon Aug 26 21:23:49 2019
@@ -1,7 +1,7 @@

Modified: phoenix/site/publish/language/index.html
URL: 
http://svn.apache.org/viewvc/phoenix/site/publish/language/index.html?rev=1865955&r1=1865954&r2=1865955&view=diff
==
--- phoenix/site/publish/language/index.html (original)
+++ phoenix/site/publish/language/index.html Mon Aug 26 21:23:49 2019
@@ -1,7 +1,7 @@

Modified: phoenix/site/publish/team.html
URL: 
http://svn.apache.org/viewvc/phoenix/site/publish/team.html?rev=1865955&r1=1865954&r2=1865955&view=diff
==
--- phoenix/site/publish/team.html (original)
+++ phoenix/site/publish/team.html Mon Aug 26 21:23:49 2019
@@ -1,7 +1,7 @@
@@ -203,179 +203,179 @@
(team table; HTML row markup stripped by the archive. The only content change
in this hunk is the new PMC row for Chinmay Kulkarni; the other rows are
unchanged apart from shifted row markup.)

+   Chinmay Kulkarni | Salesforce | chinmayskulka...@apache.org | PMC
    Devaraj Das | Hortonworks | d...@apache.org | PMC
    Eli Levine | Salesforce | elilev...@apache.org | PMC
    Enis Soztutar | Hortonworks | e...@apache.org | PMC
    Gabriel Reid | NGDATA | gr...@apache.org | PMC
    Geoffrey Jacoby | Salesforce | gjac...@apache.org | PMC
    James Taylor | Lyft | jamestay...@apache.org | PMC
    Jeffrey Zhong | Elementum | jeffr...@apache.org | PMC
    Jesse Yates | Tesla | jya...@apache.org | PMC
    Josh Elser | Hortonworks | els...@apache.org | PMC
    Josh Mahonin | Interset | jmaho...@apache.org | PMC
    Karan Mehta | Salesforce | karanmeht...@apache.org | PMC
    Lars Hofhansl | Salesforce | la...@apache.org | PMC
    Maryann Xue | Databricks | maryann...@apache.org | PMC
    Michael Stack | Cloudera | st...@apache.org | PMC
    Mujtaba Chohan | Salesforce | mujt...@apache.org | PMC
    Nick Dimiduk | Icebrg | ndimi...@apache.org | PMC
    Pedro Boado | Datadog | pbo...@apache.org | PMC
    Rajeshbabu Chintaguntla | Hortonworks | rajeshb...@apache.org | PMC
    Ramkrishna Vasudevan | Intel | ramkris...@apache.org | PMC
    Ravi Magham | Elementum | ravimag...@apache.org | PMC
    Samarth Jain | Netflix | sama...@apache.org | PMC
    Sergey Soldatov | Hortonworks | s...@apache.org | PMC
    Simon Toens | Salesforce | sto...@apache.org | PMC (message truncated here)

[phoenix] branch master updated: PHOENIX-4743: ALTER TABLE ADD COLUMN for global index should not modify HBase metadata if failed

2019-08-26 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/master by this push:
 new 197b6e3  PHOENIX-4743: ALTER TABLE ADD COLUMN for global index should 
not modify HBase metadata if failed
197b6e3 is described below

commit 197b6e30c894b657758c5d0cb3c6182d6c8d4723
Author: Sandeep Pal 
AuthorDate: Sat Aug 24 13:45:26 2019 -0700

PHOENIX-4743: ALTER TABLE ADD COLUMN for global index should not modify 
HBase metadata if failed

Signed-off-by: Chinmay Kulkarni 
---
 .../org/apache/phoenix/end2end/AlterTableIT.java   | 32 ++
 .../phoenix/query/ConnectionQueryServicesImpl.java | 10 +--
 2 files changed, 39 insertions(+), 3 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
index 7912c58..c2c02de 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
@@ -17,6 +17,7 @@
  */
 package org.apache.phoenix.end2end;
 
+import static 
org.apache.phoenix.exception.SQLExceptionCode.CANNOT_MUTATE_TABLE;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.COLUMN_FAMILY;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.COLUMN_NAME;
 import static org.apache.phoenix.jdbc.PhoenixDatabaseMetaData.COLUMN_QUALIFIER;
@@ -51,6 +52,7 @@ import org.apache.hadoop.hbase.client.Admin;
 import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
 import org.apache.hadoop.hbase.client.TableDescriptor;
 import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.exception.PhoenixParserException;
 import org.apache.phoenix.exception.SQLExceptionCode;
 import org.apache.phoenix.jdbc.PhoenixConnection;
 import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData;
@@ -852,6 +854,36 @@ public class AlterTableIT extends ParallelStatsDisabledIT {
 conn1.close();
 }
 
+@Test
+public void testAlterTableOnGlobalIndex() throws Exception {
+try (Connection conn = DriverManager.getConnection(getUrl());
+ Statement stmt = conn.createStatement()) {
+conn.setAutoCommit(false);
+Admin admin = 
conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();
+String tableName = generateUniqueName();
+String globalIndexTableName = generateUniqueName();
+
+stmt.execute("CREATE TABLE " + tableName +
+" (ID INTEGER PRIMARY KEY, COL1 VARCHAR(10), COL2 BOOLEAN)");
+
+stmt.execute("CREATE INDEX " + globalIndexTableName + " on " + 
tableName + " (COL2)");
+TableDescriptor originalDesc = 
admin.getDescriptor(TableName.valueOf(globalIndexTableName));
+int expectedErrorCode = 0;
+try {
+stmt.execute("ALTER TABLE " + globalIndexTableName + " ADD 
CF1.AGE INTEGER ");
+conn.commit();
+fail("The alter table did not fail as expected");
+} catch (SQLException e) {
+assertEquals(e.getErrorCode(), 
CANNOT_MUTATE_TABLE.getErrorCode());
+}
+
+TableDescriptor finalDesc = 
admin.getDescriptor(TableName.valueOf(globalIndexTableName));
+assertTrue(finalDesc.equals(originalDesc));
+
+// drop the table
+stmt.execute("DROP TABLE " + tableName);
+}
+}
 
 @Test
 public void testAlterStoreNulls() throws SQLException {
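The test above captures the global index table's descriptor before an ALTER that is expected to fail and asserts afterwards that the HBase-level metadata is untouched. A reusable sketch of that guard follows, assuming the HBase 2.x client API used on master; the helper class, its name, and the functional interface are hypothetical, not part of this commit:

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import static org.junit.Assert.assertEquals;

    public final class DescriptorGuard {
        @FunctionalInterface
        public interface ThrowingRunnable { void run() throws Exception; }

        private DescriptorGuard() { }

        // Snapshot the table descriptor, run a DDL that should be rejected,
        // then verify the descriptor is unchanged.
        public static void assertDescriptorUnchanged(Admin admin, String table,
                ThrowingRunnable failingDdl) throws Exception {
            TableDescriptor before = admin.getDescriptor(TableName.valueOf(table));
            try {
                failingDdl.run();
            } catch (Exception expected) {
                // the DDL is supposed to fail; the point is the check below
            }
            TableDescriptor after = admin.getDescriptor(TableName.valueOf(table));
            assertEquals(before, after);
        }
    }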
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index 4112984..e5c935d 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -2060,9 +2060,6 @@ public class ConnectionQueryServicesImpl extends 
DelegateQueryServices implement
 // When adding a column to a view, base physical table should 
only be modified when new column families are being added.
 modifyHTable = canViewsAddNewCF && 
!existingColumnFamiliesForBaseTable(table.getPhysicalName()).containsAll(colFamiliesForPColumnsToBeAdded);
 }
-if (modifyHTable) {
-sendHBaseMetaData(tableDescriptors, pollingNeeded);
-}
 
 // Special case for call during drop table to ensure that the 
empty column family exists.
 // In this case, we only include the table header row, as until we 
add schemaBytes and tableBytes
@@ -2070,6 +2067,9 @@ public class Connec
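This first hunk deletes the unconditional sendHBaseMetaData call that ran before the SYSTEM.CATALOG endpoint could reject the ALTER; the remaining hunks (truncated here, shown in full in the per-branch copies below) re-add the call on the no-op path and after successful validation. A self-contained toy of that validate-first ordering follows; every name in it is illustrative, not Phoenix's actual control flow:

    import java.util.List;

    // Toy model of the PHOENIX-4743 reordering: external (HBase) schema
    // changes are pushed only after the catalog has accepted the mutation,
    // so a rejected ALTER leaves HBase metadata untouched.
    class DeferredMetadataSketch {
        enum MutationCode { NO_OP, SUCCESS, REJECTED }

        interface Catalog { MutationCode validateAndApply(List<String> mutations); }
        interface HBaseSchema { void sendHBaseMetaData(); }

        static MutationCode addColumn(Catalog catalog, HBaseSchema hbase,
                boolean modifyHTable, List<String> mutations) {
            if (mutations.isEmpty()) {                  // header-only no-op path
                if (modifyHTable) {
                    hbase.sendHBaseMetaData();          // safe: nothing to validate
                }
                return MutationCode.NO_OP;
            }
            MutationCode code = catalog.validateAndApply(mutations); // may reject
            if (code == MutationCode.SUCCESS && modifyHTable) {
                hbase.sendHBaseMetaData();              // only after validation
            }
            return code;                                // REJECTED: HBase untouched
        }
    }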

[phoenix] branch 4.x-HBase-1.3 updated: PHOENIX-4743: ALTER TABLE ADD COLUMN for global index should not modify HBase metadata if failed

2019-08-26 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.3
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.3 by this push:
 new 9298bbb  PHOENIX-4743: ALTER TABLE ADD COLUMN for global index should 
not modify HBase metadata if failed
9298bbb is described below

commit 92988fe20d8aedf031a0ed6219265d737afc
Author: Sandeep Pal 
AuthorDate: Mon Aug 26 14:54:33 2019 -0700

PHOENIX-4743: ALTER TABLE ADD COLUMN for global index should not modify 
HBase metadata if failed

Signed-off-by: Chinmay Kulkarni 
---
 .../org/apache/phoenix/end2end/AlterTableIT.java   | 30 ++
 .../phoenix/query/ConnectionQueryServicesImpl.java |  9 ---
 2 files changed, 36 insertions(+), 3 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
index 163be71..ef08fea 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
@@ -851,6 +851,36 @@ public class AlterTableIT extends ParallelStatsDisabledIT {
 conn1.close();
 }
 
+@Test
+public void testAlterTableOnGlobalIndex() throws Exception {
+try (Connection conn = DriverManager.getConnection(getUrl());
+ HBaseAdmin admin = 
conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();
+ Statement stmt = conn.createStatement()) {
+conn.setAutoCommit(false);
+String tableName = generateUniqueName();
+String globalIndexTableName = generateUniqueName();
+
+stmt.execute("CREATE TABLE " + tableName +
+" (ID INTEGER PRIMARY KEY, COL1 VARCHAR(10), COL2 BOOLEAN)");
+
+stmt.execute("CREATE INDEX " + globalIndexTableName + " on " + 
tableName + " (COL2)");
+HTableDescriptor originalDesc = 
admin.getTableDescriptor(Bytes.toBytes(globalIndexTableName));
+
+try {
+stmt.execute("ALTER TABLE " + globalIndexTableName + " ADD 
CF1.AGE INTEGER ");
+conn.commit();
+fail("The alter table did not fail as expected");
+} catch (SQLException e) {
+assertEquals(e.getErrorCode(), 
SQLExceptionCode.CANNOT_MUTATE_TABLE.getErrorCode());
+}
+
+HTableDescriptor finalDesc = 
admin.getTableDescriptor(Bytes.toBytes(globalIndexTableName));
+assertTrue(finalDesc.equals(originalDesc));
+
+// drop the table
+stmt.execute("DROP TABLE " + tableName);
+}
+}
 
 @Test
 public void testAlterStoreNulls() throws SQLException {
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index d5a08bc..fd9092f 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -2042,9 +2042,6 @@ public class ConnectionQueryServicesImpl extends 
DelegateQueryServices implement
 // When adding a column to a view, base physical table should 
only be modified when new column families are being added.
 modifyHTable = canViewsAddNewCF && 
!existingColumnFamiliesForBaseTable(table.getPhysicalName()).containsAll(colFamiliesForPColumnsToBeAdded);
 }
-if (modifyHTable) {
-sendHBaseMetaData(tableDescriptors, pollingNeeded);
-}
 
 // Special case for call during drop table to ensure that the 
empty column family exists.
 // In this case, we only include the table header row, as until we 
add schemaBytes and tableBytes
@@ -2052,6 +2049,9 @@ public class ConnectionQueryServicesImpl extends 
DelegateQueryServices implement
 // TODO: change to  if (tableMetaData.isEmpty()) once we pass 
through schemaBytes and tableBytes
 // Also, could be used to update property values on ALTER TABLE t 
SET prop=xxx
 if ((tableMetaData.isEmpty()) || (tableMetaData.size() == 1 && 
tableMetaData.get(0).isEmpty())) {
+if (modifyHTable) {
+sendHBaseMetaData(tableDescriptors, pollingNeeded);
+}
 return new MetaDataMutationResult(MutationCode.NO_OP, 
EnvironmentEdgeManager.currentTimeMillis(), table);
 }
 byte[][] rowKeyMetaData = new byte[3][];
@@ -2109,6 +2109,9 @@ public class ConnectionQueryServicesImpl extends 
DelegateQueryServices implement
  

[phoenix] branch 4.x-HBase-1.4 updated: PHOENIX-4743: ALTER TABLE ADD COLUMN for global index should not modify HBase metadata if failed

2019-08-26 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.4
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.4 by this push:
 new 034434a  PHOENIX-4743: ALTER TABLE ADD COLUMN for global index should 
not modify HBase metadata if failed
034434a is described below

commit 034434a99e24bc8eea4a42fd3f8c25cbea800ca0
Author: Sandeep Pal 
AuthorDate: Mon Aug 26 14:54:33 2019 -0700

PHOENIX-4743: ALTER TABLE ADD COLUMN for global index should not modify 
HBase metadata if failed

Signed-off-by: Chinmay Kulkarni 
---
 .../org/apache/phoenix/end2end/AlterTableIT.java   | 30 ++
 .../phoenix/query/ConnectionQueryServicesImpl.java |  9 ---
 2 files changed, 36 insertions(+), 3 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
index 163be71..ef08fea 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
@@ -851,6 +851,36 @@ public class AlterTableIT extends ParallelStatsDisabledIT {
 conn1.close();
 }
 
+@Test
+public void testAlterTableOnGlobalIndex() throws Exception {
+try (Connection conn = DriverManager.getConnection(getUrl());
+ HBaseAdmin admin = 
conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();
+ Statement stmt = conn.createStatement()) {
+conn.setAutoCommit(false);
+String tableName = generateUniqueName();
+String globalIndexTableName = generateUniqueName();
+
+stmt.execute("CREATE TABLE " + tableName +
+" (ID INTEGER PRIMARY KEY, COL1 VARCHAR(10), COL2 BOOLEAN)");
+
+stmt.execute("CREATE INDEX " + globalIndexTableName + " on " + 
tableName + " (COL2)");
+HTableDescriptor originalDesc = 
admin.getTableDescriptor(Bytes.toBytes(globalIndexTableName));
+
+try {
+stmt.execute("ALTER TABLE " + globalIndexTableName + " ADD 
CF1.AGE INTEGER ");
+conn.commit();
+fail("The alter table did not fail as expected");
+} catch (SQLException e) {
+assertEquals(e.getErrorCode(), 
SQLExceptionCode.CANNOT_MUTATE_TABLE.getErrorCode());
+}
+
+HTableDescriptor finalDesc = 
admin.getTableDescriptor(Bytes.toBytes(globalIndexTableName));
+assertTrue(finalDesc.equals(originalDesc));
+
+// drop the table
+stmt.execute("DROP TABLE " + tableName);
+}
+}
 
 @Test
 public void testAlterStoreNulls() throws SQLException {
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index d5a08bc..fd9092f 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -2042,9 +2042,6 @@ public class ConnectionQueryServicesImpl extends 
DelegateQueryServices implement
 // When adding a column to a view, base physical table should 
only be modified when new column families are being added.
 modifyHTable = canViewsAddNewCF && 
!existingColumnFamiliesForBaseTable(table.getPhysicalName()).containsAll(colFamiliesForPColumnsToBeAdded);
 }
-if (modifyHTable) {
-sendHBaseMetaData(tableDescriptors, pollingNeeded);
-}
 
 // Special case for call during drop table to ensure that the 
empty column family exists.
 // In this case, we only include the table header row, as until we 
add schemaBytes and tableBytes
@@ -2052,6 +2049,9 @@ public class ConnectionQueryServicesImpl extends 
DelegateQueryServices implement
 // TODO: change to  if (tableMetaData.isEmpty()) once we pass 
through schemaBytes and tableBytes
 // Also, could be used to update property values on ALTER TABLE t 
SET prop=xxx
 if ((tableMetaData.isEmpty()) || (tableMetaData.size() == 1 && 
tableMetaData.get(0).isEmpty())) {
+if (modifyHTable) {
+sendHBaseMetaData(tableDescriptors, pollingNeeded);
+}
 return new MetaDataMutationResult(MutationCode.NO_OP, 
EnvironmentEdgeManager.currentTimeMillis(), table);
 }
 byte[][] rowKeyMetaData = new byte[3][];
@@ -2109,6 +2109,9 @@ public class ConnectionQueryServicesImpl extends 
DelegateQueryServices implement
  

[phoenix] branch 4.x-HBase-1.5 updated: PHOENIX-4743: ALTER TABLE ADD COLUMN for global index should not modify HBase metadata if failed

2019-08-26 Thread chinmayskulkarni
This is an automated email from the ASF dual-hosted git repository.

chinmayskulkarni pushed a commit to branch 4.x-HBase-1.5
in repository https://gitbox.apache.org/repos/asf/phoenix.git


The following commit(s) were added to refs/heads/4.x-HBase-1.5 by this push:
 new 250bf57  PHOENIX-4743: ALTER TABLE ADD COLUMN for global index should 
not modify HBase metadata if failed
250bf57 is described below

commit 250bf57a1a59676ca674893e144eac3a8800a548
Author: Sandeep Pal 
AuthorDate: Mon Aug 26 14:54:33 2019 -0700

PHOENIX-4743: ALTER TABLE ADD COLUMN for global index should not modify 
HBase metadata if failed

Signed-off-by: Chinmay Kulkarni 
---
 .../org/apache/phoenix/end2end/AlterTableIT.java   | 30 ++
 .../phoenix/query/ConnectionQueryServicesImpl.java |  9 ---
 2 files changed, 36 insertions(+), 3 deletions(-)

diff --git 
a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java 
b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
index 163be71..ef08fea 100644
--- a/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
+++ b/phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java
@@ -851,6 +851,36 @@ public class AlterTableIT extends ParallelStatsDisabledIT {
 conn1.close();
 }
 
+@Test
+public void testAlterTableOnGlobalIndex() throws Exception {
+try (Connection conn = DriverManager.getConnection(getUrl());
+ HBaseAdmin admin = 
conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();
+ Statement stmt = conn.createStatement()) {
+conn.setAutoCommit(false);
+String tableName = generateUniqueName();
+String globalIndexTableName = generateUniqueName();
+
+stmt.execute("CREATE TABLE " + tableName +
+" (ID INTEGER PRIMARY KEY, COL1 VARCHAR(10), COL2 BOOLEAN)");
+
+stmt.execute("CREATE INDEX " + globalIndexTableName + " on " + 
tableName + " (COL2)");
+HTableDescriptor originalDesc = 
admin.getTableDescriptor(Bytes.toBytes(globalIndexTableName));
+
+try {
+stmt.execute("ALTER TABLE " + globalIndexTableName + " ADD 
CF1.AGE INTEGER ");
+conn.commit();
+fail("The alter table did not fail as expected");
+} catch (SQLException e) {
+assertEquals(e.getErrorCode(), 
SQLExceptionCode.CANNOT_MUTATE_TABLE.getErrorCode());
+}
+
+HTableDescriptor finalDesc = 
admin.getTableDescriptor(Bytes.toBytes(globalIndexTableName));
+assertTrue(finalDesc.equals(originalDesc));
+
+// drop the table
+stmt.execute("DROP TABLE " + tableName);
+}
+}
 
 @Test
 public void testAlterStoreNulls() throws SQLException {
diff --git 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
index d5a08bc..fd9092f 100644
--- 
a/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
+++ 
b/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
@@ -2042,9 +2042,6 @@ public class ConnectionQueryServicesImpl extends 
DelegateQueryServices implement
 // When adding a column to a view, base physical table should 
only be modified when new column families are being added.
 modifyHTable = canViewsAddNewCF && 
!existingColumnFamiliesForBaseTable(table.getPhysicalName()).containsAll(colFamiliesForPColumnsToBeAdded);
 }
-if (modifyHTable) {
-sendHBaseMetaData(tableDescriptors, pollingNeeded);
-}
 
 // Special case for call during drop table to ensure that the 
empty column family exists.
 // In this case, we only include the table header row, as until we 
add schemaBytes and tableBytes
@@ -2052,6 +2049,9 @@ public class ConnectionQueryServicesImpl extends 
DelegateQueryServices implement
 // TODO: change to  if (tableMetaData.isEmpty()) once we pass 
through schemaBytes and tableBytes
 // Also, could be used to update property values on ALTER TABLE t 
SET prop=xxx
 if ((tableMetaData.isEmpty()) || (tableMetaData.size() == 1 && 
tableMetaData.get(0).isEmpty())) {
+if (modifyHTable) {
+sendHBaseMetaData(tableDescriptors, pollingNeeded);
+}
 return new MetaDataMutationResult(MutationCode.NO_OP, 
EnvironmentEdgeManager.currentTimeMillis(), table);
 }
 byte[][] rowKeyMetaData = new byte[3][];
@@ -2109,6 +2109,9 @@ public class ConnectionQueryServicesImpl extends 
DelegateQueryServices implement
  
