[jira] [Resolved] (PHOENIX-2820) Investigate why SortMergeJoinIT has a sort in the explain plan

2016-04-07 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-2820.
---
Resolution: Fixed

> Investigate why SortMergeJoinIT has a sort in the explain plan 
> ---
>
> Key: PHOENIX-2820
> URL: https://issues.apache.org/jira/browse/PHOENIX-2820
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2820.patch
>
>
> After removing an unnecessary merge sort for aggregate queries in this[1] 
> commit, a sort started appearing in one of the explain plans for 
> SortMergeJoinIT. This sort is not required, so we need to figure out why it 
> is part of the explain plan now.
> [1] https://github.com/apache/phoenix/commit/15766625ab9a132f8d8ac625b026a5c56e62e879



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (PHOENIX-2820) Investigate why SortMergeJoinIT has a sort in the explain plan

2016-04-07 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-2820:
-

Assignee: James Taylor

> Investigate why SortMergeJoinIT has a sort in the explain plan 
> ---
>
> Key: PHOENIX-2820
> URL: https://issues.apache.org/jira/browse/PHOENIX-2820
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2820.patch
>
>
> After removing an unnecessary merge sort for aggregate queries in this[1] 
> commit, a sort started appearing in one of the explain plans for 
> SortMergeJoinIT. This sort is not required, so we need to figure out why it 
> is part of the explain plan now.
> [1] https://github.com/apache/phoenix/commit/15766625ab9a132f8d8ac625b026a5c56e62e879



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1478) Can't upsert value of 127 into column of type unsigned tinyint

2016-04-07 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15231603#comment-15231603
 ] 

James Taylor commented on PHOENIX-1478:
---

I've added Biju as a contributor and assigned this JIRA to him. FYI, you're a 
JIRA admin for Phoenix, so you can do everything I can, [~rajeshbabu].

> Can't upsert value of 127 into column of type unsigned tinyint
> --
>
> Key: PHOENIX-1478
> URL: https://issues.apache.org/jira/browse/PHOENIX-1478
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Carter Shanklin
>Assignee: Biju Nair
>Priority: Minor
>  Labels: verify
> Fix For: 4.8.0
>
> Attachments: PHOENIX-1478-1.patch, PHOENIX-1478.patch
>
>
> The docs say values from 0 to 127 are valid. From sqlline I can upsert a 
> value of 126 but not 127. See below.
> {code}
> $ cat UnsignedTinyintFail.sql
> drop table if exists unsigned_tinyint_test;
> create table unsigned_tinyint_test (uti unsigned_tinyint primary key);
> upsert into unsigned_tinyint_test values (126);
> upsert into unsigned_tinyint_test values (127);
> {code}
> Results in:
> {code}
> Setting property: [isolation, TRANSACTION_READ_COMMITTED]
> Setting property: [run, UnsignedTinyintFail.sql]
> issuing: !connect jdbc:phoenix:localhost:2181:/hbase-unsecure none none 
> org.apache.phoenix.jdbc.PhoenixDriver
> Connecting to jdbc:phoenix:localhost:2181:/hbase-unsecure
> 14/11/15 08:19:57 WARN impl.MetricsConfig: Cannot locate configuration: tried 
> hadoop-metrics2-phoenix.properties,hadoop-metrics2.properties
> Connected to: Phoenix (version 4.2)
> Driver: PhoenixEmbeddedDriver (version 4.2)
> Autocommit status: true
> Transaction isolation: TRANSACTION_READ_COMMITTED
> Building list of tables and columns for tab-completion (set fastconnect to 
> true to skip)...
> 76/76 (100%) Done
> Done
> 1/4  drop table if exists unsigned_tinyint_test;
> No rows affected (0.015 seconds)
> 2/4  create table unsigned_tinyint_test (uti unsigned_tinyint primary 
> key);
> No rows affected (0.317 seconds)
> 3/4  upsert into unsigned_tinyint_test values (126);
> 1 row affected (0.032 seconds)
> 4/4  upsert into unsigned_tinyint_test values (127);
> Error: ERROR 203 (22005): Type mismatch. UNSIGNED_TINYINT and INTEGER for 127 
> (state=22005,code=203)
> org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type 
> mismatch. UNSIGNED_TINYINT and INTEGER for 127
>   at 
> org.apache.phoenix.schema.TypeMismatchException.newException(TypeMismatchException.java:52)
>   at 
> org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:160)
>   at 
> org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:136)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$UpsertValuesCompiler.visit(UpsertCompiler.java:854)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$UpsertValuesCompiler.visit(UpsertCompiler.java:830)
>   at 
> org.apache.phoenix.parse.LiteralParseNode.accept(LiteralParseNode.java:73)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:721)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:467)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:458)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:259)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:252)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:250)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1037)
>   at sqlline.SqlLine$Commands.execute(SqlLine.java:3673)
>   at sqlline.SqlLine$Commands.sql(SqlLine.java:3584)
>   at sqlline.SqlLine.dispatch(SqlLine.java:821)
>   at sqlline.SqlLine.runCommands(SqlLine.java:1793)
>   at sqlline.SqlLine$Commands.run(SqlLine.java:4161)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sqlline.SqlLine$ReflectiveCommandHandler.execute(SqlLine.java:2810)
>   at sqlline.SqlLine.dispatch(SqlLine.java:817)
>   at sqlline.SqlLine.initArgs(SqlLine.java:657)
>   at sqlline.SqlLine.begin(SqlLine.java:680)
>   at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
>   at 

[jira] [Updated] (PHOENIX-1478) Can't upsert value of 127 into column of type unsigned tinyint

2016-04-07 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1478:
--
Assignee: Biju Nair

> Can't upsert value of 127 into column of type unsigned tinyint
> --
>
> Key: PHOENIX-1478
> URL: https://issues.apache.org/jira/browse/PHOENIX-1478
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Carter Shanklin
>Assignee: Biju Nair
>Priority: Minor
>  Labels: verify
> Fix For: 4.8.0
>
> Attachments: PHOENIX-1478-1.patch, PHOENIX-1478.patch
>
>
> The docs say values from 0 to 127 are valid. From sqlline I can upsert a 
> value of 126 but not 127. See below.
> {code}
> $ cat UnsignedTinyintFail.sql
> drop table if exists unsigned_tinyint_test;
> create table unsigned_tinyint_test (uti unsigned_tinyint primary key);
> upsert into unsigned_tinyint_test values (126);
> upsert into unsigned_tinyint_test values (127);
> {code}
> Results in:
> {code}
> Setting property: [isolation, TRANSACTION_READ_COMMITTED]
> Setting property: [run, UnsignedTinyintFail.sql]
> issuing: !connect jdbc:phoenix:localhost:2181:/hbase-unsecure none none 
> org.apache.phoenix.jdbc.PhoenixDriver
> Connecting to jdbc:phoenix:localhost:2181:/hbase-unsecure
> 14/11/15 08:19:57 WARN impl.MetricsConfig: Cannot locate configuration: tried 
> hadoop-metrics2-phoenix.properties,hadoop-metrics2.properties
> Connected to: Phoenix (version 4.2)
> Driver: PhoenixEmbeddedDriver (version 4.2)
> Autocommit status: true
> Transaction isolation: TRANSACTION_READ_COMMITTED
> Building list of tables and columns for tab-completion (set fastconnect to 
> true to skip)...
> 76/76 (100%) Done
> Done
> 1/4  drop table if exists unsigned_tinyint_test;
> No rows affected (0.015 seconds)
> 2/4  create table unsigned_tinyint_test (uti unsigned_tinyint primary 
> key);
> No rows affected (0.317 seconds)
> 3/4  upsert into unsigned_tinyint_test values (126);
> 1 row affected (0.032 seconds)
> 4/4  upsert into unsigned_tinyint_test values (127);
> Error: ERROR 203 (22005): Type mismatch. UNSIGNED_TINYINT and INTEGER for 127 
> (state=22005,code=203)
> org.apache.phoenix.schema.TypeMismatchException: ERROR 203 (22005): Type 
> mismatch. UNSIGNED_TINYINT and INTEGER for 127
>   at 
> org.apache.phoenix.schema.TypeMismatchException.newException(TypeMismatchException.java:52)
>   at 
> org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:160)
>   at 
> org.apache.phoenix.expression.LiteralExpression.newConstant(LiteralExpression.java:136)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$UpsertValuesCompiler.visit(UpsertCompiler.java:854)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$UpsertValuesCompiler.visit(UpsertCompiler.java:830)
>   at 
> org.apache.phoenix.parse.LiteralParseNode.accept(LiteralParseNode.java:73)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:721)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:467)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableUpsertStatement.compilePlan(PhoenixStatement.java:458)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:259)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:252)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:250)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1037)
>   at sqlline.SqlLine$Commands.execute(SqlLine.java:3673)
>   at sqlline.SqlLine$Commands.sql(SqlLine.java:3584)
>   at sqlline.SqlLine.dispatch(SqlLine.java:821)
>   at sqlline.SqlLine.runCommands(SqlLine.java:1793)
>   at sqlline.SqlLine$Commands.run(SqlLine.java:4161)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at sqlline.SqlLine$ReflectiveCommandHandler.execute(SqlLine.java:2810)
>   at sqlline.SqlLine.dispatch(SqlLine.java:817)
>   at sqlline.SqlLine.initArgs(SqlLine.java:657)
>   at sqlline.SqlLine.begin(SqlLine.java:680)
>   at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
>   at sqlline.SqlLine.main(SqlLine.java:424)
> Aborting command set because "force" is false and command failed: "upsert 
> into unsigned_tinyint_test values (127);"
> 
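
The failure above (126 accepted, but the documented maximum 127 rejected) is 
the classic signature of an off-by-one range check: an exclusive upper bound 
where an inclusive one was intended. The following is a hypothetical 
illustration of that pattern, not the actual Phoenix coercion code:

```java
public class BoundCheckDemo {
    static final int MAX_UNSIGNED_TINYINT = Byte.MAX_VALUE; // 127

    // Exclusive upper bound: wrongly rejects the documented maximum.
    static boolean fitsExclusive(int value) {
        return value >= 0 && value < MAX_UNSIGNED_TINYINT;
    }

    // Inclusive upper bound: accepts the full documented range 0..127.
    static boolean fitsInclusive(int value) {
        return value >= 0 && value <= MAX_UNSIGNED_TINYINT;
    }

    public static void main(String[] args) {
        System.out.println(fitsExclusive(126)); // true
        System.out.println(fitsExclusive(127)); // false -> "Type mismatch ... for 127"
        System.out.println(fitsInclusive(127)); // true
    }
}
```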

[jira] [Comment Edited] (PHOENIX-2743) HivePhoenixHandler for big-big join with predicate push down

2016-04-07 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15231586#comment-15231586
 ] 

James Taylor edited comment on PHOENIX-2743 at 4/8/16 4:05 AM:
---

Is this ready to go, [~sergey.soldatov]? How does the test coverage look? I'm not 
seeing any - is that an oversight, [~mini666]? Should we dup out PHOENIX-331 or 
is it complementary? Mind taking a look at the PR, [~maghamravikiran]?


was (Author: jamestaylor):
Is this ready to go, [~sergey.soldatov]? How does the test coverage look? Should 
we dup out PHOENIX-331 or is it complementary? Mind taking a look at the PR, 
[~maghamravikiran]?

> HivePhoenixHandler for big-big join with predicate push down
> 
>
> Key: PHOENIX-2743
> URL: https://issues.apache.org/jira/browse/PHOENIX-2743
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.5.0, 4.6.0
> Environment: hive-1.2.1
>Reporter: JeongMin Ju
>  Labels: features, performance
> Attachments: PHOENIX-2743-1.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Phoenix supports hash join and sort-merge join, but big-to-big joins are not 
> processed well, so another method, like Hive's, is needed.
> I implemented a hive-phoenix-handler that can access Apache Phoenix tables on 
> HBase using HiveQL.
> hive-phoenix-handler is much faster than hive-hbase-handler because it 
> applies predicate push down.
> I am publishing the source code to GitHub for contribution; it will likely be 
> completed by next week.
> https://github.com/mini666/hive-phoenix-handler
> Please review my proposal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2743) HivePhoenixHandler for big-big join with predicate push down

2016-04-07 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15231586#comment-15231586
 ] 

James Taylor commented on PHOENIX-2743:
---

Is this ready to go, [~sergey.soldatov]? How does the test coverage look? Should 
we dup out PHOENIX-331 or is it complementary? Mind taking a look at the PR, 
[~maghamravikiran]?

> HivePhoenixHandler for big-big join with predicate push down
> 
>
> Key: PHOENIX-2743
> URL: https://issues.apache.org/jira/browse/PHOENIX-2743
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.5.0, 4.6.0
> Environment: hive-1.2.1
>Reporter: JeongMin Ju
>  Labels: features, performance
> Attachments: PHOENIX-2743-1.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Phoenix supports hash join and sort-merge join, but big-to-big joins are not 
> processed well, so another method, like Hive's, is needed.
> I implemented a hive-phoenix-handler that can access Apache Phoenix tables on 
> HBase using HiveQL.
> hive-phoenix-handler is much faster than hive-hbase-handler because it 
> applies predicate push down.
> I am publishing the source code to GitHub for contribution; it will likely be 
> completed by next week.
> https://github.com/mini666/hive-phoenix-handler
> Please review my proposal.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2820) Investigate why SortMergeJoinIT has a sort in the explain plan

2016-04-07 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2820:
--
Attachment: PHOENIX-2820.patch

Thanks for the pointer, [~maryannxue] - that saved me some time. Please review.

> Investigate why SortMergeJoinIT has a sort in the explain plan 
> ---
>
> Key: PHOENIX-2820
> URL: https://issues.apache.org/jira/browse/PHOENIX-2820
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2820.patch
>
>
> After removing an unnecessary merge sort for aggregate queries in this[1] 
> commit, a sort started appearing in one of the explain plans for 
> SortMergeJoinIT. This sort is not required, so we need to figure out why it 
> is part of the explain plan now.
> [1] https://github.com/apache/phoenix/commit/15766625ab9a132f8d8ac625b026a5c56e62e879



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2826) Remove test only indexing code

2016-04-07 Thread Samarth Jain (JIRA)
Samarth Jain created PHOENIX-2826:
-

 Summary: Remove test only indexing code
 Key: PHOENIX-2826
 URL: https://issues.apache.org/jira/browse/PHOENIX-2826
 Project: Phoenix
  Issue Type: Bug
Reporter: Samarth Jain


We have a lot of classes added in the 
org.apache.phoenix.hbase.index.covered.example package that are used only in 
the test code. We should get rid of them as the inherent assumption they are 
built on doesn't hold once we have encoded column qualifiers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2825) Add an add-ons page to website

2016-04-07 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-2825:
--
Assignee: Mujtaba Chohan

> Add an add-ons page to website
> --
>
> Key: PHOENIX-2825
> URL: https://issues.apache.org/jira/browse/PHOENIX-2825
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Mujtaba Chohan
>
> We should add an add-ons page to our website that points to external 
> contributions such as:
> * Phoenix ORM layer by eHarmony: https://github.com/eHarmony/pho
> * Phoenix Python integration: https://code.oxygene.sk/lukas/python-phoenixdb



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2825) Add an add-ons page to website

2016-04-07 Thread James Taylor (JIRA)
James Taylor created PHOENIX-2825:
-

 Summary: Add an add-ons page to website
 Key: PHOENIX-2825
 URL: https://issues.apache.org/jira/browse/PHOENIX-2825
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor


We should add an add-ons page to our website that points to external 
contributions such as:
* Phoenix ORM layer by eHarmony: https://github.com/eHarmony/pho
* Phoenix Python integration: https://code.oxygene.sk/lukas/python-phoenixdb



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2535) Create shaded clients (thin + thick)

2016-04-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15231328#comment-15231328
 ] 

Hadoop QA commented on PHOENIX-2535:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12797621/PHOENIX-2535-5.patch
  against master branch at commit 1e47821876af8100d5b4bc4dad03168eca7f5652.
  ATTACHMENT ID: 12797621

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:red}-1 javac{color}.  The applied patch generated 247 javac compiler 
warnings (more than the master's current 81 warnings).

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
25 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  
+
implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
+
implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
+
implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
+
implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
+  
implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
+  
implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">
+  
implementation="org.apache.maven.plugins.shade.resource.IncludeResourceTransformer">

{color:green}+1 core tests{color}.  The patch passed unit tests in .

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s): 

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/296//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/296//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/296//console

This message is automatically generated.

> Create shaded clients (thin + thick) 
> -
>
> Key: PHOENIX-2535
> URL: https://issues.apache.org/jira/browse/PHOENIX-2535
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Sergey Soldatov
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2535-1.patch, PHOENIX-2535-2.patch, 
> PHOENIX-2535-3.patch, PHOENIX-2535-4.patch, PHOENIX-2535-5.patch
>
>
> Having shaded client artifacts helps greatly in minimizing the dependency 
> conflicts at the run time. We are seeing more of Phoenix JDBC client being 
> used in Storm topologies and other settings where guava versions become a 
> problem. 
> I think we can do a parallel artifact for the thick client with shaded 
> dependencies and also using shaded hbase. For thin client, maybe shading 
> should be the default since it is new? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2535) Create shaded clients (thin + thick)

2016-04-07 Thread Sergey Soldatov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated PHOENIX-2535:
-
Attachment: PHOENIX-2535-5.patch

Changes:
* Added the phoenix-client module.
* Due to a bug in the shade plugin (all resource transformers ignore 
include/exclude rules for relocations, so even with an exclude of 
{{org.apache.hadoop.**}} on the {{org}} relocation, the service file would 
contain shaded names), all relocations are done separately.
* {{org.eclipse}} is no longer shaded since it's required by the query server.
* Tephra is not shaded.
* phoenix-server.jar is now located in {{phoenix-server/target}}. I'm not sure 
whether we need to create a separate module for that; if so, it needs a name.
* phoenix-assembly now produces only the tarball.

Tested: sqlline, SQuirreL, Tephra server, query server
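
The per-relocation workaround described above would look roughly like the 
following shade-plugin fragment. This is an illustrative sketch only; the 
pattern values are assumptions, not the committed pom:

{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <!-- Relocate packages one by one instead of a single
           org -> shaded.org rule with an org.apache.hadoop.** exclude,
           because the resource transformers ignore the excludes and
           would write shaded names into META-INF/services files. -->
      <relocation>
        <pattern>com.google.common</pattern>
        <shadedPattern>org.apache.phoenix.shaded.com.google.common</shadedPattern>
      </relocation>
      <!-- org.eclipse is deliberately not relocated: the query server needs it. -->
    </relocations>
  </configuration>
</plugin>
{code}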

> Create shaded clients (thin + thick) 
> -
>
> Key: PHOENIX-2535
> URL: https://issues.apache.org/jira/browse/PHOENIX-2535
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Sergey Soldatov
> Fix For: 4.8.0
>
> Attachments: PHOENIX-2535-1.patch, PHOENIX-2535-2.patch, 
> PHOENIX-2535-3.patch, PHOENIX-2535-4.patch, PHOENIX-2535-5.patch
>
>
> Having shaded client artifacts helps greatly in minimizing the dependency 
> conflicts at the run time. We are seeing more of Phoenix JDBC client being 
> used in Storm topologies and other settings where guava versions become a 
> problem. 
> I think we can do a parallel artifact for the thick client with shaded 
> dependencies and also using shaded hbase. For thin client, maybe shading 
> should be the default since it is new? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2824) PhoenixTransactionalIndexer rollback doesn't work correctly

2016-04-07 Thread Samarth Jain (JIRA)
Samarth Jain created PHOENIX-2824:
-

 Summary: PhoenixTransactionalIndexer rollback doesn't work 
correctly
 Key: PHOENIX-2824
 URL: https://issues.apache.org/jira/browse/PHOENIX-2824
 Project: Phoenix
  Issue Type: Bug
Reporter: Samarth Jain
Assignee: James Taylor


Looking at this piece of code in PhoenixTransactionalIndexer in 
processRollback(), something isn't right:

{code}
do {
    Cell cell = cells.get(i);
    hasPuts |= cell.getTypeByte() == KeyValue.Type.Put.getCode();
    writePtr = cell.getTimestamp();
    do {
        // Add at the beginning of the list to match the expected HBase
        // newest to oldest sort order (which TxTableState relies on
        // with the Result.getLatestColumnValue() calls).
        singleTimeCells.addFirst(cell);
    } while (++i < nCells && cells.get(i).getTimestamp() == writePtr);
} while (i < nCells && cells.get(i).getTimestamp() <= readPtr);

{code}

The cell variable isn't being reset in the inner loop even though index i has 
been incremented. As a result, we always end up adding cells.get(0) to the 
singleTimeCells list. However, simply doing this:

{code} 
while (++i < nCells && (cell = cells.get(i)).getTimestamp() == writePtr)
{code}

unfortunately doesn't work. I see test failures in MutableRollbackIT after the 
above change.
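
The looping bug can be reproduced in isolation. The sketch below uses plain 
arrays as stand-ins for the cell list (the names and data are hypothetical, not 
the Phoenix classes, and only the inner loop is modeled): because cell is 
fetched once before the loop and never reassigned, the first element is added 
for every matching timestamp.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class RollbackLoopDemo {
    static final String[] IDS = {"A", "B", "C"};
    static final long[] TS = {10L, 10L, 20L}; // A and B share a write pointer

    // Shape of the reported inner loop: 'cell' is fetched once before the
    // loop and never reassigned, so the same cell is added repeatedly.
    static Deque<String> buggyInnerLoop() {
        Deque<String> singleTimeCells = new ArrayDeque<>();
        int i = 0;
        String cell = IDS[i];
        long writePtr = TS[i];
        do {
            singleTimeCells.addFirst(cell); // always IDS[0]
        } while (++i < IDS.length && TS[i] == writePtr);
        return singleTimeCells;
    }

    // Re-reading the cell each iteration yields the intended contents
    // (though, per the report above, this alone still fails MutableRollbackIT).
    static Deque<String> reassigningInnerLoop() {
        Deque<String> singleTimeCells = new ArrayDeque<>();
        int i = 0;
        long writePtr = TS[i];
        do {
            String cell = IDS[i];
            singleTimeCells.addFirst(cell);
        } while (++i < IDS.length && TS[i] == writePtr);
        return singleTimeCells;
    }

    public static void main(String[] args) {
        System.out.println(buggyInnerLoop());       // [A, A]
        System.out.println(reassigningInnerLoop()); // [B, A]
    }
}
```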






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2722) support mysql "limit,offset" clauses

2016-04-07 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15230971#comment-15230971
 ] 

James Taylor commented on PHOENIX-2722:
---

+1. Thanks for the excellent work, [~ankit.singhal] and for diligently working 
through all the review comments.

> support mysql "limit,offset" clauses 
> -
>
> Key: PHOENIX-2722
> URL: https://issues.apache.org/jira/browse/PHOENIX-2722
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Minor
> Attachments: PHOENIX-2722.patch, PHOENIX-2722_formatted.patch, 
> PHOENIX-2722_v1_rebased.patch
>
>
> For a serial query (a query with a "serial" hint or a "limit" without "order 
> by"), we can limit each scan (using a page filter) to "limit + offset" rows 
> instead of just "limit" as before.
> Then, for all queries, we can forward the relevant client iterators past the 
> offset and return the results.
> Syntax:
> {code}
> [ LIMIT { count } ]
> [ OFFSET start [ ROW | ROWS ] ]
> [ FETCH { FIRST | NEXT } [ count ] { ROW | ROWS } ONLY ]
> {code}
> Some new keywords (OFFSET, FETCH, ROW, ROWS, ONLY) are being introduced, so 
> users should check that they are not already using them as column names.
> WDYT, [~jamestaylor]
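
The client-side half of the scheme described above (forward the iterators past 
the offset, then return at most "limit" rows) can be sketched as follows. The 
page helper is hypothetical, not the Phoenix iterator API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class LimitOffsetDemo {
    // Skip 'offset' rows, then return up to 'limit' rows from the iterator.
    static <T> List<T> page(Iterator<T> rows, int offset, int limit) {
        for (int skipped = 0; skipped < offset && rows.hasNext(); skipped++) {
            rows.next(); // forward the iterator past the offset
        }
        List<T> out = new ArrayList<>();
        while (out.size() < limit && rows.hasNext()) {
            out.add(rows.next());
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> rows = Arrays.asList(1, 2, 3, 4, 5, 6);
        // LIMIT 2 OFFSET 3 -> rows 4 and 5; a serial scan would be capped
        // server-side at limit + offset = 5 rows rather than just limit.
        System.out.println(page(rows.iterator(), 3, 2)); // [4, 5]
    }
}
```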



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-1311 HBase namespaces surfaced in ph...

2016-04-07 Thread ankitsinghal
Github user ankitsinghal commented on the pull request:

https://github.com/apache/phoenix/pull/153#issuecomment-207058780
  
@samarthjain , any more review comments?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-1311) HBase namespaces surfaced in phoenix

2016-04-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15230912#comment-15230912
 ] 

ASF GitHub Bot commented on PHOENIX-1311:
-

Github user ankitsinghal commented on the pull request:

https://github.com/apache/phoenix/pull/153#issuecomment-207058780
  
@samarthjain , any more review comments?


> HBase namespaces surfaced in phoenix
> 
>
> Key: PHOENIX-1311
> URL: https://issues.apache.org/jira/browse/PHOENIX-1311
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: nicolas maillard
>Assignee: Ankit Singhal
>Priority: Minor
> Fix For: 4.8.0
>
> Attachments: PHOENIX-1311.docx, PHOENIX-1311_v1.patch, 
> PHOENIX-1311_v2.patch, PHOENIX-1311_wip.patch, PHOENIX-1311_wip_2.patch
>
>
> Hbase (HBASE-8015) has the concept of namespaces in the form of 
> myNamespace:MyTable it would be great if Phoenix leveraged this feature to 
> give a database like feature on top of the table.
> Maybe to stay close to Hbase it could also be a create DB:Table...
> or DB.Table which is a more standard annotation?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2722) support mysql "limit,offset" clauses

2016-04-07 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-2722:
---
Attachment: PHOENIX-2722_v1_rebased.patch

[~jamestaylor], is it ready for commit now?

> support mysql "limit,offset" clauses 
> -
>
> Key: PHOENIX-2722
> URL: https://issues.apache.org/jira/browse/PHOENIX-2722
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Minor
> Attachments: PHOENIX-2722.patch, PHOENIX-2722_formatted.patch, 
> PHOENIX-2722_v1_rebased.patch
>
>
> For a serial query (a query with a "serial" hint or a "limit" without "order 
> by"), we can limit each scan (using a page filter) to "limit + offset" rows 
> instead of just "limit" as before.
> Then, for all queries, we can forward the relevant client iterators past the 
> offset and return the results.
> Syntax:
> {code}
> [ LIMIT { count } ]
> [ OFFSET start [ ROW | ROWS ] ]
> [ FETCH { FIRST | NEXT } [ count ] { ROW | ROWS } ONLY ]
> {code}
> Some new keywords (OFFSET, FETCH, ROW, ROWS, ONLY) are being introduced, so 
> users should check that they are not already using them as column names.
> WDYT, [~jamestaylor]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2823) Unable to connect to HBase version 1.2

2016-04-07 Thread pragathi (JIRA)
pragathi created PHOENIX-2823:
-

 Summary: Unable to connect to HBase version 1.2
 Key: PHOENIX-2823
 URL: https://issues.apache.org/jira/browse/PHOENIX-2823
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.5.2
 Environment: CDH5.5
Reporter: pragathi


I got the following exception: "Call failed on IOException
org.apache.hadoop.hbase.DoNotRetryIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: SYSTEM.CATALOG: 
org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment.getRegion()Lorg/apache/hadoop/hbase/regionserver/Region;
at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)". 
It was accompanied by:
Caused by: java.lang.NoSuchMethodError: 
org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment.getRegion()Lorg/apache/hadoop/hbase/regionserver/Region;



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2822) Tests that extend BaseHBaseManagedTimeIT are very slow

2016-04-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15230548#comment-15230548
 ] 

ASF GitHub Bot commented on PHOENIX-2822:
-

Github user samarthjain commented on the pull request:

https://github.com/apache/phoenix/pull/158#issuecomment-206989941
  
The pull request looks great @churrodog. Just a couple of minor nits. I 
will get this committed once you have the changes in. 


> Tests that extend BaseHBaseManagedTimeIT are very slow
> --
>
> Key: PHOENIX-2822
> URL: https://issues.apache.org/jira/browse/PHOENIX-2822
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.8.0
>Reporter: churro morales
>Assignee: churro morales
>  Labels: HBASEDEPENDENCIES
>
> Since I am trying to refactor out all the HBase private dependencies, I have 
> to constantly run tests to make sure I didn't break anything. The tests that 
> extend BaseHBaseManagedTimeIT are very slow as they have to delete all 
> non-system tables after every test case, which takes around 5-10 seconds and 
> adds significant time to the test suite.
> I created a new class named BaseHBaseManagedTimeTableReuseIT that creates a 
> random table name so that tests don't collide. It also doesn't do any cleanup 
> after each test case or class because the table names should be unique. I 
> moved about 30-35 tests from BaseHBaseManagedTimeIT to 
> BaseHBaseManagedTimeTableReuseIT, which significantly improved the overall 
> time it takes to run the tests.
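
The collision-free naming that the table-reuse approach above depends on can be 
as simple as a monotonically increasing suffix. A minimal sketch follows; the 
generator is an assumed helper, not the actual BaseHBaseManagedTimeTableReuseIT 
code:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class UniqueNameDemo {
    private static final AtomicInteger COUNTER = new AtomicInteger(0);

    // Each call returns a new name, so tests never share a table and
    // per-test truncate/delete cleanup can be skipped entirely.
    static String generateUniqueTableName() {
        return "T" + String.format("%06d", COUNTER.incrementAndGet());
    }

    public static void main(String[] args) {
        System.out.println(generateUniqueTableName());
        System.out.println(generateUniqueTableName());
    }
}
```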



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-2822 - Tests that extend BaseHBaseMa...

2016-04-07 Thread samarthjain
Github user samarthjain commented on the pull request:

https://github.com/apache/phoenix/pull/158#issuecomment-206989941
  
The pull request looks great @churrodog. Just a couple of minor nits. I 
will get this committed once you have the changes in. 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-2822) Tests that extend BaseHBaseManagedTimeIT are very slow

2016-04-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15230516#comment-15230516
 ] 

ASF GitHub Bot commented on PHOENIX-2822:
-

Github user samarthjain commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/158#discussion_r58903054
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/memory/GlobalMemoryManager.java 
---
@@ -94,6 +85,20 @@ private long allocateBytes(long minBytes, long reqBytes) 
{
 return nBytes;
 }
 
+@VisibleForTesting void waitForBytesToFree(long minBytes, long 
startTimeMs) {
--- End diff --

Minor nit: missing line break after @VisibleForTesting.


> Tests that extend BaseHBaseManagedTimeIT are very slow
> --
>
> Key: PHOENIX-2822
> URL: https://issues.apache.org/jira/browse/PHOENIX-2822
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.8.0
>Reporter: churro morales
>Assignee: churro morales
>  Labels: HBASEDEPENDENCIES
>
> Since I am trying to refactor out all the hbase private dependencies, I have 
> to constantly run tests to make sure I didn't break anything.  The tests that 
> extend BaseHBaseManagedTimeIT are very slow as they have to delete all 
> non-system tables after every test case.  This takes around 5-10 seconds to 
> accomplish.  This adds significant time to the test suite. 
> I created a new class named: BaseHBaseManagedTimeTableReuseIT and it creates 
> a random table name such that we don't have collisions for tests.  It also 
> doesn't do any cleanup after each test case or class because these table 
> names should be unique.  I moved about 30-35 tests out from 
> BaseHBaseManagedTimeIT to BaseHBaseManagedTimeTableReuseIT and it 
> significantly improved the overall time it takes to run tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-2822 - Tests that extend BaseHBaseMa...

2016-04-07 Thread samarthjain
Github user samarthjain commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/158#discussion_r58903054
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/memory/GlobalMemoryManager.java 
---
@@ -94,6 +85,20 @@ private long allocateBytes(long minBytes, long reqBytes) 
{
 return nBytes;
 }
 
+@VisibleForTesting void waitForBytesToFree(long minBytes, long 
startTimeMs) {
--- End diff --

Minor nit: missing line break after @VisibleForTesting.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] phoenix pull request: PHOENIX-2822 - Tests that extend BaseHBaseMa...

2016-04-07 Thread samarthjain
Github user samarthjain commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/158#discussion_r58902469
  
--- Diff: 
phoenix-core/src/test/java/org/apache/phoenix/memory/MemoryManagerTest.java ---
@@ -69,35 +76,38 @@ private static void sleepFor(long time) {
 }
 
 @Test
-public void testWaitForMemoryAvailable() {
-final GlobalMemoryManager gmm = new GlobalMemoryManager(100,8000);
+public void testWaitForMemoryAvailable() throws Exception {
+final GlobalMemoryManager gmm = spy(new GlobalMemoryManager(100, 
80));
 final ChildMemoryManager rmm1 = new ChildMemoryManager(gmm,100);
 final ChildMemoryManager rmm2 = new ChildMemoryManager(gmm,100);
+final CountDownLatch latch = new CountDownLatch(2);
 Thread t1 = new Thread() {
 @Override
 public void run() {
 MemoryChunk c1 = rmm1.allocate(50);
 MemoryChunk c2 = rmm1.allocate(50);
-sleepFor(4000);
+sleepFor(40);
 c1.close();
-sleepFor(2000);
+sleepFor(20);
 c2.close();
+latch.countDown();
 }
 };
 Thread t2 = new Thread() {
 @Override
 public void run() {
-sleepFor(2000);
+sleepFor(20);
 // Will require waiting for a bit of time before t1 frees 
the requested memory
-long startTime = System.currentTimeMillis();
+Stopwatch watch = new Stopwatch().start();
--- End diff --

Actually, I don't see this watch being used anywhere. Make sure you have 
the phoenix eclipse preferences imported. This should have been flagged as an 
unused variable warning.
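The diff above also replaces fixed sleeps with a CountDownLatch so the test can wait for both worker threads deterministically. A minimal sketch of that coordination pattern (names hypothetical, not the actual MemoryManagerTest code):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchCoordination {

    // Each worker signals completion on a shared latch; the caller blocks with
    // a timeout rather than sleeping for a fixed, machine-dependent interval.
    static boolean runWorkers(int workers) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(workers);
        for (int i = 0; i < workers; i++) {
            new Thread(done::countDown).start();
        }
        return done.await(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runWorkers(2));
    }
}
```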


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-2822) Tests that extend BaseHBaseManagedTimeIT are very slow

2016-04-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15230507#comment-15230507
 ] 

ASF GitHub Bot commented on PHOENIX-2822:
-

Github user samarthjain commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/158#discussion_r58902469
  
--- Diff: 
phoenix-core/src/test/java/org/apache/phoenix/memory/MemoryManagerTest.java ---
@@ -69,35 +76,38 @@ private static void sleepFor(long time) {
 }
 
 @Test
-public void testWaitForMemoryAvailable() {
-final GlobalMemoryManager gmm = new GlobalMemoryManager(100,8000);
+public void testWaitForMemoryAvailable() throws Exception {
+final GlobalMemoryManager gmm = spy(new GlobalMemoryManager(100, 
80));
 final ChildMemoryManager rmm1 = new ChildMemoryManager(gmm,100);
 final ChildMemoryManager rmm2 = new ChildMemoryManager(gmm,100);
+final CountDownLatch latch = new CountDownLatch(2);
 Thread t1 = new Thread() {
 @Override
 public void run() {
 MemoryChunk c1 = rmm1.allocate(50);
 MemoryChunk c2 = rmm1.allocate(50);
-sleepFor(4000);
+sleepFor(40);
 c1.close();
-sleepFor(2000);
+sleepFor(20);
 c2.close();
+latch.countDown();
 }
 };
 Thread t2 = new Thread() {
 @Override
 public void run() {
-sleepFor(2000);
+sleepFor(20);
 // Will require waiting for a bit of time before t1 frees 
the requested memory
-long startTime = System.currentTimeMillis();
+Stopwatch watch = new Stopwatch().start();
--- End diff --

Actually, I don't see this watch being used anywhere. Make sure you have 
the phoenix eclipse preferences imported. This should have been flagged as an 
unused variable warning.


> Tests that extend BaseHBaseManagedTimeIT are very slow
> --
>
> Key: PHOENIX-2822
> URL: https://issues.apache.org/jira/browse/PHOENIX-2822
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.8.0
>Reporter: churro morales
>Assignee: churro morales
>  Labels: HBASEDEPENDENCIES
>
> Since I am trying to refactor out all the hbase private dependencies, I have 
> to constantly run tests to make sure I didn't break anything.  The tests that 
> extend BaseHBaseManagedTimeIT are very slow as they have to delete all 
> non-system tables after every test case.  This takes around 5-10 seconds to 
> accomplish.  This adds significant time to the test suite. 
> I created a new class named: BaseHBaseManagedTimeTableReuseIT and it creates 
> a random table name such that we don't have collisions for tests.  It also 
> doesn't do any cleanup after each test case or class because these table 
> names should be unique.  I moved about 30-35 tests out from 
> BaseHBaseManagedTimeIT to BaseHBaseManagedTimeTableReuseIT and it 
> significantly improved the overall time it takes to run tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2822) Tests that extend BaseHBaseManagedTimeIT are very slow

2016-04-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15230504#comment-15230504
 ] 

ASF GitHub Bot commented on PHOENIX-2822:
-

Github user samarthjain commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/158#discussion_r58902347
  
--- Diff: 
phoenix-core/src/test/java/org/apache/phoenix/memory/MemoryManagerTest.java ---
@@ -19,19 +19,26 @@
 
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
+import static org.mockito.Matchers.anyLong;
+import static org.mockito.Mockito.atLeastOnce;
+import static org.mockito.Mockito.spy;
 
+import com.google.common.base.Stopwatch;
--- End diff --

Don't use the guava StopWatch here. We have run into conflicts before 
because of mismatches in guava versions. You can use the PhoenixStopWatch class 
here instead.
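PhoenixStopWatch's actual API is not shown in this thread; as a hedged sketch, a minimal elapsed-time helper with no Guava dependency (and hence no exposure to Guava version conflicts) might look like this:

```java
public class PlainStopwatch {

    private long startNanos;
    private boolean running;

    // Minimal elapsed-time helper built on System.nanoTime(); no external
    // dependency, so classpath version mismatches cannot bite.
    public PlainStopwatch start() {
        startNanos = System.nanoTime();
        running = true;
        return this;
    }

    public long elapsedMillis() {
        if (!running) {
            throw new IllegalStateException("stopwatch not started");
        }
        return (System.nanoTime() - startNanos) / 1_000_000L;
    }
}
```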


> Tests that extend BaseHBaseManagedTimeIT are very slow
> --
>
> Key: PHOENIX-2822
> URL: https://issues.apache.org/jira/browse/PHOENIX-2822
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.8.0
>Reporter: churro morales
>Assignee: churro morales
>  Labels: HBASEDEPENDENCIES
>
> Since I am trying to refactor out all the hbase private dependencies, I have 
> to constantly run tests to make sure I didn't break anything.  The tests that 
> extend BaseHBaseManagedTimeIT are very slow as they have to delete all 
> non-system tables after every test case.  This takes around 5-10 seconds to 
> accomplish.  This adds significant time to the test suite. 
> I created a new class named: BaseHBaseManagedTimeTableReuseIT and it creates 
> a random table name such that we don't have collisions for tests.  It also 
> doesn't do any cleanup after each test case or class because these table 
> names should be unique.  I moved about 30-35 tests out from 
> BaseHBaseManagedTimeIT to BaseHBaseManagedTimeTableReuseIT and it 
> significantly improved the overall time it takes to run tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-2822 - Tests that extend BaseHBaseMa...

2016-04-07 Thread samarthjain
Github user samarthjain commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/158#discussion_r58902347
  
--- Diff: 
phoenix-core/src/test/java/org/apache/phoenix/memory/MemoryManagerTest.java ---
@@ -19,19 +19,26 @@
 
 import static org.junit.Assert.assertTrue;
 import static org.junit.Assert.fail;
+import static org.mockito.Matchers.anyLong;
+import static org.mockito.Mockito.atLeastOnce;
+import static org.mockito.Mockito.spy;
 
+import com.google.common.base.Stopwatch;
--- End diff --

Don't use the guava StopWatch here. We have run into conflicts before 
because of mismatches in guava versions. You can use the PhoenixStopWatch class 
here instead.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-2810) Fixing IndexTool Dependencies

2016-04-07 Thread churro morales (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15230388#comment-15230388
 ] 

churro morales commented on PHOENIX-2810:
-

It's not the deprecation I am worried about.  It's the fact that 
HFileOutputFormat is gone in the 2.x version of HBase.  
LoadIncrementalHFiles.doBulkLoad is available in branch-1 (all versions).  
https://github.com/apache/hbase/blob/rel/1.0.0/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java#L245
 



> Fixing IndexTool Dependencies
> -
>
> Key: PHOENIX-2810
> URL: https://issues.apache.org/jira/browse/PHOENIX-2810
> Project: Phoenix
>  Issue Type: Bug
>Reporter: churro morales
>Priority: Minor
>  Labels: HBASEDEPENDENCIES
> Attachments: PHOENIX-2810.patch
>
>
> IndexTool uses HFileOutputFormat which is deprecated.  Use HFileOutputFormat2 
> instead and fix other private dependencies for this class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2820) Investigate why SortMergeJoinIT has a sort in the explain plan

2016-04-07 Thread Maryann Xue (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15230378#comment-15230378
 ] 

Maryann Xue commented on PHOENIX-2820:
--

[~jamestaylor], just checked, it was PHOENIX-2758 that caused this regression, 
instead of PHOENIX-2802.

> Investigate why SortMergeJoinIT has a sort in the explain plan 
> ---
>
> Key: PHOENIX-2820
> URL: https://issues.apache.org/jira/browse/PHOENIX-2820
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
> Fix For: 4.8.0
>
>
> After removing an unnecessary merge sort for aggregate queries in this[1] 
> commit, a sort started appearing in one of the explain plans for 
> SortMergeJoinIT. This sort is not required, so we need to figure out why it 
> is part of the explain plan now.
> [1]https://github.com/apache/phoenix/commit/15766625ab9a132f8d8ac625b026a5c56e62e879,
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2810) Fixing IndexTool Dependencies

2016-04-07 Thread maghamravikiran (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15230219#comment-15230219
 ] 

maghamravikiran commented on PHOENIX-2810:
--

Valid point, [~gabriel.reid].  The patch applies cleanly only on master and breaks 
on 4.x-HBase-1.0 and 4.x-HBase-0.98.
Holding off on merging this patch.

> Fixing IndexTool Dependencies
> -
>
> Key: PHOENIX-2810
> URL: https://issues.apache.org/jira/browse/PHOENIX-2810
> Project: Phoenix
>  Issue Type: Bug
>Reporter: churro morales
>Priority: Minor
>  Labels: HBASEDEPENDENCIES
> Attachments: PHOENIX-2810.patch
>
>
> IndexTool uses HFileOutputFormat which is deprecated.  Use HFileOutputFormat2 
> instead and fix other private dependencies for this class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2809) Alter table doesn't take into account current table definition

2016-04-07 Thread Pierre Lacave (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15230111#comment-15230111
 ] 

Pierre Lacave commented on PHOENIX-2809:


Actually, the rogue column can be dropped by specifying the default CF:

{noformat}
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP DROP COLUMN IF EXISTS "0".DUMMY;
No rows affected (0.238 seconds)
0: jdbc:phoenix:localhost> !outputformat csv
0: jdbc:phoenix:localhost> !describe TEST_DUP
'TABLE_CAT','TABLE_SCHEM','TABLE_NAME','COLUMN_NAME','DATA_TYPE','TYPE_NAME','COLUMN_SIZE','BUFFER_LENGTH','DECIMAL_DIGITS','NUM_PREC_RADIX','NULLABLE','REMARKS','COLUMN_DEF','SQL_DATA_TYPE','SQL_DATETIME_SUB','CHAR_OCTET_LENGTH','ORDINAL_POSITION','IS_NULLABLE','SCOPE_CATALOG','SCOPE_SCHEMA','SCOPE_TABLE','SOURCE_DATA_TYPE','IS_AUTOINCREMENT','ARRAY_SIZE','COLUMN_FAMILY','TYPE_ID','VIEW_CONSTANT','MULTI_TENANT','KEY_SEQ'
'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null','0','','','null','null','null','1','false','','','','null','','null','','12','','','1'
{noformat}

> Alter table doesn't take into account current table definition
> --
>
> Key: PHOENIX-2809
> URL: https://issues.apache.org/jira/browse/PHOENIX-2809
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>
> {{Alter table}} to add a new column with the column definition as an existing 
> column in the table succeeds while the expectation will be that the alter 
> will fail. Following is an example.
> {noformat}
> 0: jdbc:phoenix:localhost:2181:/hbase> create table test_alter (TI tinyint 
> not null primary key);
> No rows affected (1.299 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> alter table test_alter add if not 
> exists TI tinyint, col1 varchar;
> No rows affected (15.962 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> upsert into test_alter values 
> (1,2,'add');
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> select * from test_alter;
> +-+-+---+
> | TI  | TI  | COL1  |
> +-+-+---+
> | 1   | 1   | add   |
> +-+-+---+
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-2809) Alter table doesn't take into account current table definition

2016-04-07 Thread Pierre Lacave (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15230084#comment-15230084
 ] 

Pierre Lacave edited comment on PHOENIX-2809 at 4/7/16 11:20 AM:
-

This issue can be reproduced with a unique column name.

{noformat}
0: jdbc:phoenix:localhost> CREATE TABLE IF NOT EXISTS TEST_DUP (DUMMY VARCHAR 
CONSTRAINT pk PRIMARY KEY (DUMMY));
No rows affected (1.318 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
++
| DUMMY  |
++
++
No rows selected (0.174 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.215 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.169 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.457 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.044 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP DROP COLUMN IF EXISTS DUMMY;
Error: ERROR 506 (42817): Primary key column may not be dropped. 
columnName=DUMMY (state=42817,code=506)
java.sql.SQLException: ERROR 506 (42817): Primary key column may not be 
dropped. columnName=DUMMY
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:422)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
at 
org.apache.phoenix.schema.MetaDataClient.dropColumn(MetaDataClient.java:3016)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropColumnStatement$1.execute(PhoenixStatement.java:1047)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:338)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:326)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:324)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1345)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.047 seconds)
{noformat}

Not being able to drop the column is a bigger consequence, as I don't see an 
easy way to get back to a regular state; the duplicate column names cause 
issues in spark-sql because of ambiguity.



There are a few differences between the two columns, as shown in the describe output:

{noformat}
0: jdbc:phoenix:localhost> !describe TEST_DUP
'TABLE_CAT','TABLE_SCHEM','TABLE_NAME','COLUMN_NAME','DATA_TYPE','TYPE_NAME','COLUMN_SIZE','BUFFER_LENGTH','DECIMAL_DIGITS','NUM_PREC_RADIX','NULLABLE','REMARKS','COLUMN_DEF','SQL_DATA_TYPE','SQL_DATETIME_SUB','CHAR_OCTET_LENGTH','ORDINAL_POSITION','IS_NULLABLE','SCOPE_CATALOG','SCOPE_SCHEMA','SCOPE_TABLE','SOURCE_DATA_TYPE','IS_AUTOINCREMENT','ARRAY_SIZE','COLUMN_FAMILY','TYPE_ID','VIEW_CONSTANT','MULTI_TENANT','KEY_SEQ'
'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null','0','','','null','null','null','1','false','','','','null','','null','','12','','','1'
'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null','1','','','null','null','null','2','true','','','','null','','null','0','12','','','null'
{noformat}




was (Author: pierre.lacave):
This issue can be reproduced with a unique column name.

{noformat}
0: jdbc:phoenix:localhost> CREATE TABLE IF NOT EXISTS TEST_DUP (DUMMY VARCHAR 
CONSTRAINT pk PRIMARY KEY (DUMMY));
No rows affected (1.318 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
++
| DUMMY  |
++
++
No rows selected (0.174 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.215 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.169 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.457 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.044 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP DROP COLUMN IF EXISTS DUMMY;
Error: ERROR 506 (42817): Primary key column may not be dropped. 
columnName=DUMMY 

[jira] [Comment Edited] (PHOENIX-2809) Alter table doesn't take into account current table definition

2016-04-07 Thread Pierre Lacave (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15230084#comment-15230084
 ] 

Pierre Lacave edited comment on PHOENIX-2809 at 4/7/16 11:07 AM:
-

This issue can be reproduced with a unique column name.

{noformat}
0: jdbc:phoenix:localhost> CREATE TABLE IF NOT EXISTS TEST_DUP (DUMMY VARCHAR 
CONSTRAINT pk PRIMARY KEY (DUMMY));
No rows affected (1.318 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
++
| DUMMY  |
++
++
No rows selected (0.174 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.215 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.169 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.457 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.044 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP DROP COLUMN IF EXISTS DUMMY;
Error: ERROR 506 (42817): Primary key column may not be dropped. 
columnName=DUMMY (state=42817,code=506)
java.sql.SQLException: ERROR 506 (42817): Primary key column may not be 
dropped. columnName=DUMMY
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:422)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
at 
org.apache.phoenix.schema.MetaDataClient.dropColumn(MetaDataClient.java:3016)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropColumnStatement$1.execute(PhoenixStatement.java:1047)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:338)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:326)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:324)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1345)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.047 seconds)
{noformat}

Not being able to drop the column is a bigger issue, as I don't see an easy way 
to get back to a regular state; the duplicate column names cause issues in 
spark-sql because of ambiguity.


There are a few differences between the two columns, as shown in the describe output:

{noformat}
0: jdbc:phoenix:localhost> !describe TEST_DUP
'TABLE_CAT','TABLE_SCHEM','TABLE_NAME','COLUMN_NAME','DATA_TYPE','TYPE_NAME','COLUMN_SIZE','BUFFER_LENGTH','DECIMAL_DIGITS','NUM_PREC_RADIX','NULLABLE','REMARKS','COLUMN_DEF','SQL_DATA_TYPE','SQL_DATETIME_SUB','CHAR_OCTET_LENGTH','ORDINAL_POSITION','IS_NULLABLE','SCOPE_CATALOG','SCOPE_SCHEMA','SCOPE_TABLE','SOURCE_DATA_TYPE','IS_AUTOINCREMENT','ARRAY_SIZE','COLUMN_FAMILY','TYPE_ID','VIEW_CONSTANT','MULTI_TENANT','KEY_SEQ'
'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null','0','','','null','null','null','1','false','','','','null','','null','','12','','','1'
'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null','1','','','null','null','null','2','true','','','','null','','null','0','12','','','null'
{noformat}




was (Author: pierre.lacave):
This issue can be reproduced with a unique column name.

{noformat}
0: jdbc:phoenix:localhost> CREATE TABLE IF NOT EXISTS TEST_DUP (DUMMY VARCHAR 
CONSTRAINT pk PRIMARY KEY (DUMMY));
No rows affected (1.318 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
++
| DUMMY  |
++
++
No rows selected (0.174 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.215 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.169 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.457 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.044 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP DROP COLUMN IF EXISTS DUMMY;
Error: ERROR 506 (42817): Primary key column may not be dropped. 
columnName=DUMMY 

[jira] [Comment Edited] (PHOENIX-2809) Alter table doesn't take into account current table definition

2016-04-07 Thread Pierre Lacave (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15230084#comment-15230084
 ] 

Pierre Lacave edited comment on PHOENIX-2809 at 4/7/16 11:07 AM:
-

This issue can be reproduced with a unique column name.

{noformat}
0: jdbc:phoenix:localhost> CREATE TABLE IF NOT EXISTS TEST_DUP (DUMMY VARCHAR 
CONSTRAINT pk PRIMARY KEY (DUMMY));
No rows affected (1.318 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
++
| DUMMY  |
++
++
No rows selected (0.174 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.215 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.169 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.457 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.044 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP DROP COLUMN IF EXISTS DUMMY;
Error: ERROR 506 (42817): Primary key column may not be dropped. 
columnName=DUMMY (state=42817,code=506)
java.sql.SQLException: ERROR 506 (42817): Primary key column may not be 
dropped. columnName=DUMMY
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:422)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
at 
org.apache.phoenix.schema.MetaDataClient.dropColumn(MetaDataClient.java:3016)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropColumnStatement$1.execute(PhoenixStatement.java:1047)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:338)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:326)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:324)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1345)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.047 seconds)
{noformat}

Not being able to drop the column is a bigger issue, as I don't see an easy way 
to get back to a regular state; the duplicate column names cause issues in 
spark-sql because of ambiguity.


Looking at the describe output, there are a few differences between the two columns:

{noformat}
0: jdbc:phoenix:localhost> !describe TEST_DUP
'TABLE_CAT','TABLE_SCHEM','TABLE_NAME','COLUMN_NAME','DATA_TYPE','TYPE_NAME','COLUMN_SIZE','BUFFER_LENGTH','DECIMAL_DIGITS','NUM_PREC_RADIX','NULLABLE','REMARKS','COLUMN_DEF','SQL_DATA_TYPE','SQL_DATETIME_SUB','CHAR_OCTET_LENGTH','ORDINAL_POSITION','IS_NULLABLE','SCOPE_CATALOG','SCOPE_SCHEMA','SCOPE_TABLE','SOURCE_DATA_TYPE','IS_AUTOINCREMENT','ARRAY_SIZE','COLUMN_FAMILY','TYPE_ID','VIEW_CONSTANT','MULTI_TENANT','KEY_SEQ'
'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null','0','','','null','null','null','1','false','','','','null','','null','','12','','','1'
'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null','1','','','null','null','null','2','true','','','','null','','null','0','12','','','null'
{noformat}




was (Author: pierre.lacave):
This issue can be reproduced with a unique column name.

{noformat}
0: jdbc:phoenix:localhost> CREATE TABLE IF NOT EXISTS TEST_DUP (DUMMY VARCHAR 
CONSTRAINT pk PRIMARY KEY (DUMMY));
No rows affected (1.318 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
++
| DUMMY  |
++
++
No rows selected (0.174 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.215 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.169 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.457 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.044 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP DROP COLUMN IF EXISTS DUMMY;
Error: ERROR 506 (42817): Primary key column may not be dropped. 

[jira] [Comment Edited] (PHOENIX-2809) Alter table doesn't take into account current table definition

2016-04-07 Thread Pierre Lacave (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15230084#comment-15230084
 ] 

Pierre Lacave edited comment on PHOENIX-2809 at 4/7/16 10:57 AM:
-

This issue can be reproduced with a unique column name.

{noformat}
0: jdbc:phoenix:localhost> CREATE TABLE IF NOT EXISTS TEST_DUP (DUMMY VARCHAR 
CONSTRAINT pk PRIMARY KEY (DUMMY));
No rows affected (1.318 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
++
| DUMMY  |
++
++
No rows selected (0.174 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.215 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.169 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.457 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.044 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP DROP COLUMN IF EXISTS DUMMY;
Error: ERROR 506 (42817): Primary key column may not be dropped. 
columnName=DUMMY (state=42817,code=506)
java.sql.SQLException: ERROR 506 (42817): Primary key column may not be 
dropped. columnName=DUMMY
at 
org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:422)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
at 
org.apache.phoenix.schema.MetaDataClient.dropColumn(MetaDataClient.java:3016)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDropColumnStatement$1.execute(PhoenixStatement.java:1047)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:338)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:326)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:324)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1345)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.047 seconds)
{noformat}

Not being able to drop the column is a bigger issue, as I don't see an easy way 
to get back to a regular state; the duplicate column names cause ambiguity 
errors in spark-sql.


Looking at the describe output, it appears KEY_SEQ is the main difference 
between the two columns:

{noformat}
0: jdbc:phoenix:localhost> !describe TEST_DUP
'TABLE_CAT','TABLE_SCHEM','TABLE_NAME','COLUMN_NAME','DATA_TYPE','TYPE_NAME','COLUMN_SIZE','BUFFER_LENGTH','DECIMAL_DIGITS','NUM_PREC_RADIX','NULLABLE','REMARKS','COLUMN_DEF','SQL_DATA_TYPE','SQL_DATETIME_SUB','CHAR_OCTET_LENGTH','ORDINAL_POSITION','IS_NULLABLE','SCOPE_CATALOG','SCOPE_SCHEMA','SCOPE_TABLE','SOURCE_DATA_TYPE','IS_AUTOINCREMENT','ARRAY_SIZE','COLUMN_FAMILY','TYPE_ID','VIEW_CONSTANT','MULTI_TENANT','KEY_SEQ'
'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null','0','','','null','null','null','1','false','','','','null','','null','','12','','','1'
'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null','1','','','null','null','null','2','true','','','','null','','null','0','12','','','null'
{noformat}





[jira] [Comment Edited] (PHOENIX-2809) Alter table doesn't take into account current table definition

2016-04-07 Thread Pierre Lacave (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15230084#comment-15230084
 ] 

Pierre Lacave edited comment on PHOENIX-2809 at 4/7/16 10:56 AM:
-

This issue can be reproduced with a unique column name.

{noformat}
0: jdbc:phoenix:localhost> CREATE TABLE IF NOT EXISTS TEST_DUP (DUMMY VARCHAR 
CONSTRAINT pk PRIMARY KEY (DUMMY));
No rows affected (1.318 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
++
| DUMMY  |
++
++
No rows selected (0.174 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.215 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.169 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.457 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.044 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.047 seconds)
{noformat}

Not being able to drop the column is a bigger issue, as I don't see an easy way 
to get back to a regular state; the duplicate column names cause ambiguity 
errors in spark-sql.


Looking at the describe output, it appears KEY_SEQ is the main difference 
between the two columns:

{noformat}
0: jdbc:phoenix:localhost> !describe TEST_DUP
'TABLE_CAT','TABLE_SCHEM','TABLE_NAME','COLUMN_NAME','DATA_TYPE','TYPE_NAME','COLUMN_SIZE','BUFFER_LENGTH','DECIMAL_DIGITS','NUM_PREC_RADIX','NULLABLE','REMARKS','COLUMN_DEF','SQL_DATA_TYPE','SQL_DATETIME_SUB','CHAR_OCTET_LENGTH','ORDINAL_POSITION','IS_NULLABLE','SCOPE_CATALOG','SCOPE_SCHEMA','SCOPE_TABLE','SOURCE_DATA_TYPE','IS_AUTOINCREMENT','ARRAY_SIZE','COLUMN_FAMILY','TYPE_ID','VIEW_CONSTANT','MULTI_TENANT','KEY_SEQ'
'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null','0','','','null','null','null','1','false','','','','null','','null','','12','','','1'
'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null','1','','','null','null','null','2','true','','','','null','','null','0','12','','','null'
{noformat}





[jira] [Commented] (PHOENIX-2809) Alter table doesn't take into account current table definition

2016-04-07 Thread Pierre Lacave (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15230084#comment-15230084
 ] 

Pierre Lacave commented on PHOENIX-2809:


This issue can be reproduced with a unique column name.

{noformat}
0: jdbc:phoenix:localhost> CREATE TABLE IF NOT EXISTS TEST_DUP (DUMMY VARCHAR 
CONSTRAINT pk PRIMARY KEY (DUMMY));
No rows affected (1.318 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
++
| DUMMY  |
++
++
No rows selected (0.174 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.215 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.169 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP ADD IF NOT EXISTS DUMMY VARCHAR;
No rows affected (6.457 seconds)
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.044 seconds)
0: jdbc:phoenix:localhost> ALTER TABLE TEST_DUP DROP IF EXISTS DUMMY;
Error: ERROR 602 (42P00): Syntax error. Missing "COLUMN" at line 1, column 27. 
(state=42P00,code=602)
org.apache.phoenix.exception.PhoenixParserException: ERROR 602 (42P00): Syntax 
error. Missing "COLUMN" at line 1, column 27.
at 
org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
at 
org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1185)
at 
org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1268)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1339)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: MissingTokenException(inserted [@-1,0:0='',<26>,1:26] at IF)
at 
org.apache.phoenix.parse.PhoenixSQLParser.recoverFromMismatchedToken(PhoenixSQLParser.java:350)
at org.antlr.runtime.BaseRecognizer.match(BaseRecognizer.java:115)
at 
org.apache.phoenix.parse.PhoenixSQLParser.alter_table_node(PhoenixSQLParser.java:)
at 
org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(PhoenixSQLParser.java:847)
at 
org.apache.phoenix.parse.PhoenixSQLParser.statement(PhoenixSQLParser.java:500)
at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:108)
... 9 more
0: jdbc:phoenix:localhost> SELECT * FROM TEST_DUP;
+++
| DUMMY  | DUMMY  |
+++
+++
No rows selected (0.047 seconds)
{noformat}

Not being able to drop the column is a bigger issue, as I don't see an easy way 
to get back to a regular state; the duplicate column names cause ambiguity 
errors in spark-sql.


Looking at the describe output, it appears KEY_SEQ is the main difference 
between the two columns:

{noformat}
0: jdbc:phoenix:localhost> !describe TEST_DUP
'TABLE_CAT','TABLE_SCHEM','TABLE_NAME','COLUMN_NAME','DATA_TYPE','TYPE_NAME','COLUMN_SIZE','BUFFER_LENGTH','DECIMAL_DIGITS','NUM_PREC_RADIX','NULLABLE','REMARKS','COLUMN_DEF','SQL_DATA_TYPE','SQL_DATETIME_SUB','CHAR_OCTET_LENGTH','ORDINAL_POSITION','IS_NULLABLE','SCOPE_CATALOG','SCOPE_SCHEMA','SCOPE_TABLE','SOURCE_DATA_TYPE','IS_AUTOINCREMENT','ARRAY_SIZE','COLUMN_FAMILY','TYPE_ID','VIEW_CONSTANT','MULTI_TENANT','KEY_SEQ'
'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null','0','','','null','null','null','1','false','','','','null','','null','','12','','','1'
'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null','1','','','null','null','null','2','true','','','','null','','null','0','12','','','null'
{noformat}
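To double-check which metadata fields actually differ, the two !describe rows 
can be diffed mechanically. A quick Python sketch (the header and rows are 
pasted verbatim from the output above; the parsing is ad hoc for this output, 
not a general sqlline parser):

```python
# Pair each field name from the !describe header with the corresponding
# value in each of the two DUMMY rows, and keep only the fields that differ.
header = ("TABLE_CAT TABLE_SCHEM TABLE_NAME COLUMN_NAME DATA_TYPE TYPE_NAME "
          "COLUMN_SIZE BUFFER_LENGTH DECIMAL_DIGITS NUM_PREC_RADIX NULLABLE "
          "REMARKS COLUMN_DEF SQL_DATA_TYPE SQL_DATETIME_SUB CHAR_OCTET_LENGTH "
          "ORDINAL_POSITION IS_NULLABLE SCOPE_CATALOG SCOPE_SCHEMA SCOPE_TABLE "
          "SOURCE_DATA_TYPE IS_AUTOINCREMENT ARRAY_SIZE COLUMN_FAMILY TYPE_ID "
          "VIEW_CONSTANT MULTI_TENANT KEY_SEQ").split()

row1 = ("'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null',"
        "'0','','','null','null','null','1','false','','','','null','','null',"
        "'','12','','','1'")
row2 = ("'','','TEST_DUP','DUMMY','12','VARCHAR','null','null','null','null',"
        "'1','','','null','null','null','2','true','','','','null','','null',"
        "'0','12','','','null'")

def fields(row):
    # Fields are single-quoted and comma-separated; none contain a comma.
    return [f.strip("'") for f in row.split(",")]

diff = {name: (a, b)
        for name, a, b in zip(header, fields(row1), fields(row2))
        if a != b}
print(diff)
```

Run against these rows, the differing fields come out as NULLABLE, 
ORDINAL_POSITION, IS_NULLABLE, COLUMN_FAMILY, and KEY_SEQ (1 vs null), so 
KEY_SEQ is the giveaway that only the first column is part of the primary key.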



> Alter table doesn't take into account current table definition
> --
>
> Key: PHOENIX-2809
> URL: https://issues.apache.org/jira/browse/PHOENIX-2809
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Biju Nair
>
> {{Alter table}} to add a new column with the column definition as an existing 
> column in the table succeeds while the expectation will be that the alter 
> will fail. Following is an example.
> {noformat}
> 0: jdbc:phoenix:localhost:2181:/hbase> create table test_alter (TI tinyint 
> not null primary key);
> No rows affected (1.299 seconds)
> 0: jdbc:phoenix:localhost:2181:/hbase> alter table test_alter add if not 
> exists TI tinyint, col1 varchar;
> No rows affected (15.962 seconds)
> 

[jira] [Commented] (PHOENIX-2810) Fixing IndexTool Dependencies

2016-04-07 Thread Gabriel Reid (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15229896#comment-15229896
 ] 

Gabriel Reid commented on PHOENIX-2810:
---

Sorry I took so long to take a look at this.

The patch doesn't work on the HBase 0.98 branch, because classes like 
Connection and RegionLocator don't exist in HBase 0.98, and it isn't directly 
compatible with the HBase 1.0 branch because it uses a new overload of 
LoadIncrementalHFiles.doBulkLoad that isn't available in 1.0.

I'm no longer sure what the plan is for supporting 0.98 and 1.0 in future 
releases, but I think it might be better to hold off on this change until 
that's settled. It's basically a cosmetic change for now anyway (the deprecated 
HFileOutputFormat simply delegates everything to HFileOutputFormat2), so while 
we'd get rid of the deprecation warning, we'd be adding unnecessary differences 
between branches without a real advantage.
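The "cosmetic change" point is that a deprecated class which delegates 
wholesale to its replacement behaves identically; only the warning differs. A 
minimal Python sketch of that pattern (the class names here merely mirror the 
HBase ones for illustration; this is not the actual HBase code):

```python
import warnings

class HFileOutputFormat2:
    """Stand-in for the current, non-deprecated implementation."""
    def configure(self, table):
        return f"configured for {table}"

class HFileOutputFormat(HFileOutputFormat2):
    """Stand-in for the deprecated class: it warns, then delegates."""
    def configure(self, table):
        warnings.warn(
            "HFileOutputFormat is deprecated; use HFileOutputFormat2",
            DeprecationWarning, stacklevel=2)
        return super().configure(table)

# Calling the deprecated class emits a warning but produces the same result.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = HFileOutputFormat().configure("MY_TABLE")

assert result == HFileOutputFormat2().configure("MY_TABLE")
```

Swapping callers over to the new name removes the warning but changes no 
behavior, which is why the benefit is questionable while 0.98 support is still 
in the picture.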

> Fixing IndexTool Dependencies
> -
>
> Key: PHOENIX-2810
> URL: https://issues.apache.org/jira/browse/PHOENIX-2810
> Project: Phoenix
>  Issue Type: Bug
>Reporter: churro morales
>Priority: Minor
>  Labels: HBASEDEPENDENCIES
> Attachments: PHOENIX-2810.patch
>
>
> IndexTool uses HFileOutputFormat which is deprecated.  Use HFileOutputFormat2 
> instead and fix other private dependencies for this class.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)