[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392232#comment-14392232
 ] 

Alicia Ying Shu commented on PHOENIX-1580:
--

[~jamestaylor] Thanks a lot for the comments. Let me confirm a couple of points:

1. >Implement getTableRef() as returning either null or by returning a static 
final constant TableRef for union. Seems like you wouldn't need to pass this 
through the constructor as it doesn't seem like it would ever be different.

The TableRef here is the temp schema TableRef. It will be passed in from the 
constructor.

2. As for UnionPlan.explain(): my understanding is that it gathers all the 
steps involved and returns an ExplainPlan object, which is then used in 
PhoenixStatement. So I need to loop through all the subPlans to collect their 
steps and include the OrderBy and Limit. explain(List<String> planSteps) in 
UnionResultIterators, on the other hand, produces the explain steps for that 
particular iterator, i.e. the PeekingResultIterator. I tested Explain in my 
unit tests and it worked the way I implemented it.
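
For illustration, here is roughly what I mean (a sketch only, not the actual 
patch; subPlans, orderBy and limit are placeholder field names):
{code}
// Rough sketch only - not the actual patch. Collect explain steps from every
// sub-plan, then append the ORDER BY / LIMIT information of the enclosing union.
public ExplainPlan getExplainPlan() throws SQLException {
    List<String> steps = Lists.newArrayList();
    steps.add("UNION ALL OVER " + subPlans.size() + " QUERIES");
    for (QueryPlan subPlan : subPlans) {
        steps.addAll(subPlan.getExplainPlan().getPlanSteps());
    }
    if (!orderBy.getOrderByExpressions().isEmpty()) {
        steps.add("CLIENT MERGE SORT");
    }
    if (limit != null) {
        steps.add("CLIENT " + limit + " ROW LIMIT");
    }
    return new ExplainPlan(steps);
}
{code}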



> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, Phoenix-1580-v3.patch, Phoenix-1580-v4.patch, 
> Phoenix-1580-v5.patch, phoenix-1580-v1-wipe.patch, phoenix-1580.patch, 
> unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1683) Support HBase HA Query(timeline-consistent region replica read)

2015-04-01 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392223#comment-14392223
 ] 

Devaraj Das commented on PHOENIX-1683:
--

Looks fine to me from the Scan API usage point of view, but someone from 
Phoenix should take a look.
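
For reference, the HBase-side hook this relies on is the Consistency setting on 
Scan (from HBASE-10070, HBase 1.0+); a minimal sketch, not code from the 
attached patch:
{code}
// Minimal illustration of the HBase 1.0+ Scan API involved (not from the patch).
import org.apache.hadoop.hbase.client.Consistency;
import org.apache.hadoop.hbase.client.Scan;

Scan scan = new Scan();
// Opt this scan into timeline-consistent reads, which may be served by region replicas.
scan.setConsistency(Consistency.TIMELINE);
{code}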

> Support HBase HA Query(timeline-consistent region replica read)
> ---
>
> Key: PHOENIX-1683
> URL: https://issues.apache.org/jira/browse/PHOENIX-1683
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Jeffrey Zhong
>Assignee: Rajeshbabu Chintaguntla
> Attachments: PHOENIX-1683.patch, PHOENIX-1683_v2.patch
>
>
> As HBASE-10070 is in HBase 1.0, we could leverage this feature by providing a 
> new consistency level, TIMELINE.
> Assumption: a user has already enabled an HBase table for timeline consistency.
> Via a connection property or the "ALTER SESSION SET CONSISTENCY = 'TIMELINE'" 
> statement, we could set the current connection/session consistency level to 
> "TIMELINE" to take advantage of TIMELINE reads. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1798) UnsupportedOperationException throws from BaseResultIterators.getIterators

2015-04-01 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392171#comment-14392171
 ] 

James Taylor commented on PHOENIX-1798:
---

Thanks for filing this, [~qqqc851001]. [~samarthjain] - please confirm and 
commit this patch. We'll likely want to roll a new set of RCs with this fix.

> UnsupportedOperationException throws from BaseResultIterators.getIterators
> --
>
> Key: PHOENIX-1798
> URL: https://issues.apache.org/jira/browse/PHOENIX-1798
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.3.0
>Reporter: Cen Qi
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> If a StaleRegionBoundaryCacheException is thrown, concatIterators will be 
> reassigned to Collections.emptyList(). Then, when the add method is called 
> again, it will throw an UnsupportedOperationException.
> Exception in thread "main" org.apache.phoenix.exception.PhoenixIOException
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:589)
> at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
> at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
> at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
> at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
> at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
> at 
> org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:764)
> at PhoenixDemo.main(PhoenixDemo.java:12)
> Caused by: java.lang.UnsupportedOperationException
> at java.util.AbstractList.add(AbstractList.java:148)
> at java.util.AbstractList.add(AbstractList.java:108)
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:535)
> ... 7 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1015) Support joining back to data table row from local index when query condition involves leading columns in local index

2015-04-01 Thread Tao Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392163#comment-14392163
 ] 

Tao Yang edited comment on PHOENIX-1015 at 4/2/15 5:11 AM:
---

Hi [~rajesh23],
I have some questions about Phoenix local indexes in the scenario below:
A table has multiple columns and I need to query this table by arbitrary 
combinations of those columns.
For example: a table has three columns (c1, c2, c3) and queries may use 
different combinations of them (one, two, or three columns in the where clause).

You are a committer on both hindex and Phoenix. I have learned about hindex from 
your introduction document (from here: 
http://www.slideshare.net/rajeshbabuchintaguntla/apache-con-hindex), and I 
think Phoenix could also make local indexes work for the query above (generate a 
range scan for every column in the where clause, then merge in memory to get the 
required results), so I would just need to create one local index per column. 
But it seems to behave differently from what I expected.

Here is the schema I used:
CREATE TABLE TEST_DATA_TABLE(PK VARCHAR PRIMARY KEY,COLUMN1 VARCHAR,COLUMN2 
VARCHAR,COLUMN3 BIGINT) SALT_BUCKETS=8; 
CREATE LOCAL INDEX LI_1 ON TEST_DATA_TABLE(COLUMN1);
CREATE LOCAL INDEX LI_2 ON TEST_DATA_TABLE(COLUMN2);

I use the SQL below to count the matching rows with two conditions in the 
where clause:
SELECT COUNT(*) FROM TEST_DATA_TABLE WHERE COLUMN1='1' AND COLUMN2='1'

Here is the explain plan for the SQL above:
| CLIENT 8-CHUNK PARALLEL 8-WAY FULL SCAN OVER TEST_DATA_TABLE |
| SERVER FILTER BY (COLUMN1 = '1' AND COLUMN2 = '1') |
| SERVER AGGREGATE INTO SINGLE ROW |

According to the plan above, the local index is not used for this query.

Here are my questions:
(1) Why does the Phoenix local index not support this query?
(2) Is there a plan to improve this?

Thank you very much!


was (Author: tao yang):
Hi  [~rajesh23]
I have some questions about local index of phoenix in the scene below:
A table has multiple columns and I need to query from this table by random 
combinations of these columns.
For example: a table has three columns(c1,c2,c3) and queries should be made by 
some differnt combinations(one or two or three columns in where clause)

You are the committer of both hindex and phoenix. I have learned hindex by your 
introduction document (From here: 
http://www.slideshare.net/rajeshbabuchintaguntla/apache-con-hindex), and I 
think phoenix also can make local index work for the query above( generate 
range scan for every column in where clause, then merge in memory to get 
required results ), so I just need to create one local index for every column. 
But it seems to be different with what I think.

Here is the schema I used:
CREATE TABLE TEST_DATA_TABLE(PK VARCHAR PRIMARY KEY,COLUMN1 VARCHAR,COLUMN2 
VARCHAR,COLUMN3 BIGINT) SALT_BUCKETS=8; 
CREATE LOCAL INDEX LI_1 ON TEST_DATA_TABLE(COLUMN1);
CREATE LOCAL INDEX LI_2 ON TEST_DATA_TABLE(COLUMN2);

I use the sql below to get the number of required data by two conditions in 
where clause:
SELECT COUNT(*) FROM TEST_DATA_TABLE WHERE COLUMN1='1' AND COLUMN2='1'

Here is the explain of the sql above:
+--+
|   PLAN   |
+--+
| CLIENT 8-CHUNK PARALLEL 8-WAY FULL SCAN OVER TEST_DATA_TABLE |
| SERVER FILTER BY (COLUMN1 = '1' AND COLUMN2 = '1') |
| SERVER AGGREGATE INTO SINGLE ROW |
+--+   

According to the plan above, we know the local index does not work for this 
query. 

Here are my questions:
(1) Why local index of phoenix does not support this query? 
(2) Is there a plan to improve?

Thank you very much!

> Support joining back to data table row from local index when query condition 
> involves leading columns in local index
> 
>
> Key: PHOENIX-1015
> URL: https://issues.apache.org/jira/browse/PHOENIX-1015
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 5.0.0, 4.1.0
>
> Attachments: PHOENIX-1015.patch, PHOENIX-1015_v6.patch, 
> PHOENIX-1015_v7.patch, PHOENIX-1015_v8.patch, PHOENIX-1015_v8.rar, 
> PHOENIX-1015_v9.patch
>
>
> When a query projects more columns than the index contains and the query 
> condition involves the leading columns of the local index, we can first get the 
> matching row keys from the local index table and then get the required columns 
> from the data table. Since with a local index the data region and index region 
> co-reside in the same RS, we can call get on the data region to fetch the 
> columns missing from the index without any n/w overhead. So it's efficient. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Commented] (PHOENIX-1015) Support joining back to data table row from local index when query condition involves leading columns in local index

2015-04-01 Thread Tao Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392163#comment-14392163
 ] 

Tao Yang commented on PHOENIX-1015:
---

Hi [~rajesh23],
I have some questions about Phoenix local indexes in the scenario below:
A table has multiple columns and I need to query this table by arbitrary 
combinations of those columns.
For example: a table has three columns (c1, c2, c3) and queries may use 
different combinations of them (one, two, or three columns in the where clause).

You are a committer on both hindex and Phoenix. I have learned about hindex from 
your introduction document (from here: 
http://www.slideshare.net/rajeshbabuchintaguntla/apache-con-hindex), and I 
think Phoenix could also make local indexes work for the query above (generate a 
range scan for every column in the where clause, then merge in memory to get the 
required results), so I would just need to create one local index per column. 
But it seems to behave differently from what I expected.

Here is the schema I used:
CREATE TABLE TEST_DATA_TABLE(PK VARCHAR PRIMARY KEY,COLUMN1 VARCHAR,COLUMN2 
VARCHAR,COLUMN3 BIGINT) SALT_BUCKETS=8; 
CREATE LOCAL INDEX LI_1 ON TEST_DATA_TABLE(COLUMN1);
CREATE LOCAL INDEX LI_2 ON TEST_DATA_TABLE(COLUMN2);

I use the SQL below to count the matching rows with two conditions in the 
where clause:
SELECT COUNT(*) FROM TEST_DATA_TABLE WHERE COLUMN1='1' AND COLUMN2='1'

Here is the explain plan for the SQL above:
+--+
|   PLAN   |
+--+
| CLIENT 8-CHUNK PARALLEL 8-WAY FULL SCAN OVER TEST_DATA_TABLE |
| SERVER FILTER BY (COLUMN1 = '1' AND COLUMN2 = '1') |
| SERVER AGGREGATE INTO SINGLE ROW |
+--+   

According to the plan above, the local index is not used for this query.

Here are my questions:
(1) Why does the Phoenix local index not support this query?
(2) Is there a plan to improve this?

Thank you very much!

> Support joining back to data table row from local index when query condition 
> involves leading columns in local index
> 
>
> Key: PHOENIX-1015
> URL: https://issues.apache.org/jira/browse/PHOENIX-1015
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: rajeshbabu
>Assignee: rajeshbabu
> Fix For: 5.0.0, 4.1.0
>
> Attachments: PHOENIX-1015.patch, PHOENIX-1015_v6.patch, 
> PHOENIX-1015_v7.patch, PHOENIX-1015_v8.patch, PHOENIX-1015_v8.rar, 
> PHOENIX-1015_v9.patch
>
>
> When a query projects more columns than the index contains and the query 
> condition involves the leading columns of the local index, we can first get the 
> matching row keys from the local index table and then get the required columns 
> from the data table. Since with a local index the data region and index region 
> co-reside in the same RS, we can call get on the data region to fetch the 
> columns missing from the index without any n/w overhead. So it's efficient. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Maryann Xue (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392143#comment-14392143
 ] 

Maryann Xue commented on PHOENIX-1580:
--

Sure, [~jamestaylor]. Will get it done by this week.

[~ayingshu] It would also be useful to check the explain plan in your unit 
tests.

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, Phoenix-1580-v3.patch, Phoenix-1580-v4.patch, 
> Phoenix-1580-v5.patch, phoenix-1580-v1-wipe.patch, phoenix-1580.patch, 
> unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392124#comment-14392124
 ] 

James Taylor commented on PHOENIX-1580:
---

Looks good, [~ayingshu]. Some minor stuff:
- Remove this function from UnionCompiler as it's not called and not needed:
{code}
+public static void checkForOrderByLimitInUnionAllSelect(SelectStatement 
select) throws SQLException {
+if (select.getOrderBy() != null && !select.getOrderBy().isEmpty()) {
+throw new 
SQLExceptionInfo.Builder(SQLExceptionCode.ORDER_BY_IN_UNIONALL_SELECT_NOT_SUPPORTED).setMessage(".").build().buildException();
+}
+if (select.getLimit() != null) {
+throw new 
SQLExceptionInfo.Builder(SQLExceptionCode.LIMIT_IN_UNIONALL_SELECT_NOT_SUPPORTED).setMessage(".").build().buildException();
+}
+}
+
{code}
- Add static constant for "unionAllTable".getBytes() in UnionCompiler
- Remove these from SQLExceptionCode:
{code}
+ ORDER_BY_IN_UNIONALL_SELECT_NOT_SUPPORTED(523, "42900", "ORDER BY in a 
Union All query is not allowed"),
+ LIMIT_IN_UNIONALL_SELECT_NOT_SUPPORTED(524, "42901", "LIMIT in a Union 
All query is not allowed"),
{code}
- Implement the methods in UnionPlan more appropriately:
  - Don't store a private List<QueryPlan> plans, but instead have a final 
ResultIterators iterators and initialize it to new 
UnionResultIterators(this.getPlans()) in the constructor.
  - Implement getSplits() as iterators.getSplits() and getScans() as 
iterators.getScans().
  - Implement getTableRef() as returning either null or a static 
final constant TableRef for the union. Seems like you wouldn't need to pass this 
through the constructor as it doesn't seem like it would ever be different.
  - Declare a final boolean isDegenerate and calculate it in the constructor by 
looping through all plans; if any plan's context.getScanRanges() is not 
ScanRanges.NOTHING, stop the loop and set this.isDegenerate to false. Then 
implement isDegenerate() as returning this.isDegenerate.
- Add a step at the beginning of the UnionPlan.getExplainPlan() like "UNION " + 
iterators.size() + " queries\n"
- Remove this code from UnionPlan.getExplainPlan():
{code}
+if (context.getSequenceManager().getSequenceCount() > 0) {
+int nSequences = context.getSequenceManager().getSequenceCount();
+steps.add("CLIENT RESERVE VALUES FROM " + nSequences + " SEQUENCE" 
+ (nSequences == 1 ? "" : "S"));
+}
{code}
- Instead of looping through plans in UnionPlan.explain(), call 
iterators.explain(steps).
- In UnionResultIterators, do all the work in the constructor - you don't want 
to have to construct a list with each call to getScans() or getRanges().
- No need to copy the List<QueryPlan> in UnionResultIterators. Just set the 
final List<QueryPlan> plans.
- Remove this check from UnionResultIterators.explain() as it's not necessary: 
if (iterators != null && !iterators.isEmpty()). Implement it like this instead:
{code}
+for (QueryPlan plan : plans) {
+List<String> planSteps = plan.getExplainPlan().getPlanSteps();
+steps.addAll(planSteps);
+}
{code}
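
Putting the UnionResultIterators suggestions together, a rough sketch of the 
shape I have in mind (imports omitted; names are illustrative, not prescriptive):
{code}
// Rough sketch only - field and method shapes here are assumptions, not committed code.
public class UnionResultIterators {
    private final List<QueryPlan> plans;
    private final List<KeyRange> splits = Lists.newArrayList();
    private final List<List<Scan>> scans = Lists.newArrayList();

    public UnionResultIterators(List<QueryPlan> plans) throws SQLException {
        this.plans = plans;                  // keep the reference; no defensive copy needed
        for (QueryPlan plan : plans) {       // do all the work once, up front
            splits.addAll(plan.getSplits());
            scans.addAll(plan.getScans());
        }
    }

    public List<KeyRange> getSplits() { return splits; }
    public List<List<Scan>> getScans() { return scans; }
    public int size() { return scans.size(); }

    public void explain(List<String> planSteps) throws SQLException {
        for (QueryPlan plan : plans) {
            planSteps.addAll(plan.getExplainPlan().getPlanSteps());
        }
    }
}
{code}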



> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, Phoenix-1580-v3.patch, Phoenix-1580-v4.patch, 
> Phoenix-1580-v5.patch, phoenix-1580-v1-wipe.patch, phoenix-1580.patch, 
> unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392124#comment-14392124
 ] 

James Taylor edited comment on PHOENIX-1580 at 4/2/15 4:31 AM:
---

Looks good, [~ayingshu]. Some minor stuff:
- Remove this function from UnionCompiler as it's not called and not needed:
{code}
+public static void checkForOrderByLimitInUnionAllSelect(SelectStatement 
select) throws SQLException {
+if (select.getOrderBy() != null && !select.getOrderBy().isEmpty()) {
+throw new 
SQLExceptionInfo.Builder(SQLExceptionCode.ORDER_BY_IN_UNIONALL_SELECT_NOT_SUPPORTED).setMessage(".").build().buildException();
+}
+if (select.getLimit() != null) {
+throw new 
SQLExceptionInfo.Builder(SQLExceptionCode.LIMIT_IN_UNIONALL_SELECT_NOT_SUPPORTED).setMessage(".").build().buildException();
+}
+}
+
{code}
- Add static constant for "unionAllTable".getBytes() in UnionCompiler
- Remove these from SQLExceptionCode:
{code}
+ ORDER_BY_IN_UNIONALL_SELECT_NOT_SUPPORTED(523, "42900", "ORDER BY in a 
Union All query is not allowed"),
+ LIMIT_IN_UNIONALL_SELECT_NOT_SUPPORTED(524, "42901", "LIMIT in a Union 
All query is not allowed"),
{code}
- Implement the methods in UnionPlan more appropriately:
   - Don't store a private List<QueryPlan> plans, but instead have a final 
ResultIterators iterators and initialize it to new 
UnionResultIterators(this.getPlans()) in the constructor.
   - Implement getSplits() as iterators.getSplits() and getScans() as 
iterators.getScans().
   - Implement getTableRef() as returning either null or a static 
final constant TableRef for the union. Seems like you wouldn't need to pass this 
through the constructor as it doesn't seem like it would ever be different.
   - Declare a final boolean isDegenerate and calculate it in the constructor by 
looping through all plans; if any plan's context.getScanRanges() is not 
ScanRanges.NOTHING, stop the loop and set this.isDegenerate to false. Then 
implement isDegenerate() as returning this.isDegenerate.
   - Add a step at the beginning of the UnionPlan.getExplainPlan() like "UNION 
" + iterators.size() + " queries\n"
   - Remove this code from UnionPlan.getExplainPlan():
{code}
+if (context.getSequenceManager().getSequenceCount() > 0) {
+int nSequences = context.getSequenceManager().getSequenceCount();
+steps.add("CLIENT RESERVE VALUES FROM " + nSequences + " SEQUENCE" 
+ (nSequences == 1 ? "" : "S"));
+}
{code}
   - Instead of looping through plans in UnionPlan.explain(), call 
iterators.explain(steps).
- In UnionResultIterators, do all the work in the constructor - you don't want 
to have to construct a list with each call to getScans() or getRanges().
- No need to copy the List<QueryPlan> in UnionResultIterators. Just set the 
final List<QueryPlan> plans.
- Remove this check from UnionResultIterators.explain() as it's not necessary: 
if (iterators != null && !iterators.isEmpty()). Implement it like this instead:
{code}
+for (QueryPlan plan : plans) {
+List planSteps = plan.getExplainPlan().getPlanSteps();
+steps.addAll(planSteps);
+}
{code}




was (Author: jamestaylor):
Looks good, [~ayingshu]. Some minor stuff:
- Remove this function from UnionCompiler as it's not called and not needed:
{code}
+public static void checkForOrderByLimitInUnionAllSelect(SelectStatement 
select) throws SQLException {
+if (select.getOrderBy() != null && !select.getOrderBy().isEmpty()) {
+throw new 
SQLExceptionInfo.Builder(SQLExceptionCode.ORDER_BY_IN_UNIONALL_SELECT_NOT_SUPPORTED).setMessage(".").build().buildException();
+}
+if (select.getLimit() != null) {
+throw new 
SQLExceptionInfo.Builder(SQLExceptionCode.LIMIT_IN_UNIONALL_SELECT_NOT_SUPPORTED).setMessage(".").build().buildException();
+}
+}
+
{code}
- Add static constant for "unionAllTable".getBytes() in UnionCompiler
- Remove these from SQLExceptionCode:
{code}
+ ORDER_BY_IN_UNIONALL_SELECT_NOT_SUPPORTED(523, "42900", "ORDER BY in a 
Union All query is not allowed"),
+ LIMIT_IN_UNIONALL_SELECT_NOT_SUPPORTED(524, "42901", "LIMIT in a Union 
All query is not allowed"),
{code}
- Implement the methods in UnionPlan more appropriately:
  - Don't store private List plans, but instead have a final 
ResultIterators iterators and initialize it to new 
UnionResultIterators(this.getPlans()) in the constructor
  - Implement getSplits() as iterators.getSplits() and getScans() as 
iterators.getScans().
  - Implement getTableRef() as returning either null or by returning a static 
final constant TableRef for union. Seems like you wouldn't need to pass this 
through the constructor as it doesn't seem like it would ever be different.
  - Declare final boolean isDegenerate and calculate it in the constructor by 
lo

[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14392080#comment-14392080
 ] 

Alicia Ying Shu commented on PHOENIX-1580:
--

[~jamestaylor], [~maryannxue]: Thanks a lot for reviewing the patch. I have 
attached the updated patch.

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, Phoenix-1580-v3.patch, Phoenix-1580-v4.patch, 
> Phoenix-1580-v5.patch, phoenix-1580-v1-wipe.patch, phoenix-1580.patch, 
> unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Alicia Ying Shu (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alicia Ying Shu updated PHOENIX-1580:
-
Attachment: Phoenix-1580-v5.patch

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, Phoenix-1580-v3.patch, Phoenix-1580-v4.patch, 
> Phoenix-1580-v5.patch, phoenix-1580-v1-wipe.patch, phoenix-1580.patch, 
> unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-971) Query server

2015-04-01 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-971?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated PHOENIX-971:
-
Attachment: PHOENIX-971.00.patch

Here's a simple patch for Phoenix that adds a query server and thin client via 
Apache Calcite. Most of the good work has been happening over on the Calcite 
side (thanks [~julianhyde]!). This patch is building from a Calcite snapshot 
based on a couple outstanding patches on [my 
branch|https://github.com/ndimiduk/incubator-calcite/tree/avatica-to-prod].

The server is launched using bin/queryserver.py, which depends on having 
HBASE_CONF_PATH in the environment to find hbase-site.xml. You can then use 
bin/sqlline-thin.py to connect using the thin driver. The same arguments should 
work as with the regular one, the difference being that instead of a zookeeper 
quorum, you point it at the query server host and port (i.e., 
http://localhost:8765). It will add any missing connection components based on 
defaults.

Also note that a select * from the WEB_STATS example table does not work right 
now due to a known Date datatype handling bug, CALCITE-660. Other common column 
types are working.

[~jamestaylor] [~gabriel.reid] please have a look, in particular at my 
additions to the maven structure and new .py scripts.

> Query server
> 
>
> Key: PHOENIX-971
> URL: https://issues.apache.org/jira/browse/PHOENIX-971
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Andrew Purtell
>Assignee: Nick Dimiduk
> Fix For: 5.0.0
>
> Attachments: PHOENIX-971.00.patch, image-2.png
>
>
> Host the JDBC driver in a query server process that can be deployed as a 
> middle tier between lighter weight clients and Phoenix+HBase. This would 
> serve a similar optional role in Phoenix deployments as the 
> [HiveServer2|https://cwiki.apache.org/confluence/display/Hive/Setting+Up+HiveServer2]
>  does in Hive deploys.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391977#comment-14391977
 ] 

Alicia Ying Shu commented on PHOENIX-1580:
--

Yes. I added more data for Order by to show different groups. 

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, Phoenix-1580-v3.patch, Phoenix-1580-v4.patch, 
> phoenix-1580-v1-wipe.patch, phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Alicia Ying Shu (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alicia Ying Shu updated PHOENIX-1580:
-
Comment: was deleted

(was: [~maryannxue] Thanks for the patch. Sorry I missed that line. With the 
wrapping select the code looks good. Thanks for the help!)

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, Phoenix-1580-v3.patch, Phoenix-1580-v4.patch, 
> phoenix-1580-v1-wipe.patch, phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391974#comment-14391974
 ] 

Alicia Ying Shu edited comment on PHOENIX-1580 at 4/2/15 2:07 AM:
--

[~maryannxue] Thanks for the patch. Sorry I missed that line. With the wrapping 
select the code looks good. Thanks for the help!


was (Author: aliciashu):
[~maryannxue] Thanks for the patch. Sorry I missed that line. With the wrapping 
select the code looks good. Thanks for the help!

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, Phoenix-1580-v3.patch, Phoenix-1580-v4.patch, 
> phoenix-1580-v1-wipe.patch, phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391973#comment-14391973
 ] 

Alicia Ying Shu commented on PHOENIX-1580:
--

[~maryannxue] Thanks for the patch. Sorry I missed that line. With the wrapping 
select the code looks good. Thanks for the help!

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, Phoenix-1580-v3.patch, Phoenix-1580-v4.patch, 
> phoenix-1580-v1-wipe.patch, phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391974#comment-14391974
 ] 

Alicia Ying Shu commented on PHOENIX-1580:
--

[~maryannxue] Thanks for the patch. Sorry I missed that line. With the wrapping 
select the code looks good. Thanks for the help!

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, Phoenix-1580-v3.patch, Phoenix-1580-v4.patch, 
> phoenix-1580-v1-wipe.patch, phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391963#comment-14391963
 ] 

James Taylor commented on PHOENIX-1580:
---

Thanks, [~maryannxue]. No problem on the subquery work - if you don't plan on 
doing it in the next couple of days, then please file a JIRA so we don't lose 
track of it.

[~ayingshu] - please incorporate your changes on top of this patch and update 
your test cases as Maryann suggested and we should be good to go.

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, Phoenix-1580-v3.patch, Phoenix-1580-v4.patch, 
> phoenix-1580-v1-wipe.patch, phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Maryann Xue (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391937#comment-14391937
 ] 

Maryann Xue commented on PHOENIX-1580:
--

[~ayingshu] Could you please update your test cases as well?

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, Phoenix-1580-v3.patch, Phoenix-1580-v4.patch, 
> phoenix-1580-v1-wipe.patch, phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Maryann Xue (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maryann Xue updated PHOENIX-1580:
-
Attachment: Phoenix-1580-v4.patch

Updated. I was going to remove unnecessary logic in ParseNodeFactory.select(), 
but ended up removing too much. Sorry about that, [~ayingshu].

Meanwhile, I did a little test with having a union in a subquery, [~jamestaylor]. 
With the outer query in place, we could get it for free. But we still need to 
fix some aliases in the union query. I got it working with a workaround but 
need to find a better solution for all cases. Maybe let's address that in 
another issue.

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, Phoenix-1580-v3.patch, Phoenix-1580-v4.patch, 
> phoenix-1580-v1-wipe.patch, phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391631#comment-14391631
 ] 

Alicia Ying Shu edited comment on PHOENIX-1580 at 4/2/15 1:15 AM:
--

[~maryannxue] After applying your patch, almost all of my tests failed with an 
NPE. The NPE was from ParseNodeRewriter.java. So I added the following in 
ParseNodeRewriter.java. Does it look ok?
if (from == null)
return statement;

Also many existing tests failed that all passed before your patch. It looks like 
ParameterCount gave us trouble again. I also got a lot of NULL results back.

I have made all the changes requested by [~jamestaylor].


was (Author: aliciashu):
[~maryannxue] After applying your patch, almost all of my tests failed with 
NPE. The NPE was from ParseNodeRewriter.java. So I added the following in 
ParseNodeRewriter.java. Does it look ok?
if (from == null)
return statement;

Also many existing tests failed. Those all passed before your patch. Looks like 
ParameterCount gave us trouble again. Also got a lot of NULL result back. Can 
you work on it if you think it is better later. I have clearly stated many 
database systems do not wrap UNION ALL with dummy select *.

I have made all the changes asked by [~jamestaylor]

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, Phoenix-1580-v3.patch, phoenix-1580-v1-wipe.patch, 
> phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391880#comment-14391880
 ] 

Alicia Ying Shu commented on PHOENIX-1580:
--

[~jamestaylor], [~maryannxue] Thanks a lot for the help! I implemented the 
wrapping select myself based on Maryann's representation. Tests are now 
running. If they all pass, I will submit the patch.

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, Phoenix-1580-v3.patch, phoenix-1580-v1-wipe.patch, 
> phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1799) Provide parameter metadata for prepared create table statements

2015-04-01 Thread Nick Dimiduk (JIRA)
Nick Dimiduk created PHOENIX-1799:
-

 Summary: Provide parameter metadata for prepared create table 
statements
 Key: PHOENIX-1799
 URL: https://issues.apache.org/jira/browse/PHOENIX-1799
 Project: Phoenix
  Issue Type: Improvement
Reporter: Nick Dimiduk


While working on PHOENIX-971, I ran into an issue in our IT code where we 
support prepared CREATE TABLE statements with bound parameters. It seems we 
don't provide any ParameterMetaData, other than knowledge about the number of 
bind slots. In turn, Calcite is unable to describe the statement's signature 
over RPC. This ticket proposes we infer ParameterMetaData for this statement 
case so that we can allow all tests to run against the query server.
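
For context, the statement shape in question is roughly the following 
(illustration only - the table, split points, and JDBC URL are made up, and the 
java.sql imports are omitted):
{code}
// Illustration only - not from the patch. Today the driver knows there are two
// bind slots here, but exposes no type information for them.
try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
     PreparedStatement stmt = conn.prepareStatement(
         "CREATE TABLE IF NOT EXISTS T (K VARBINARY PRIMARY KEY, V VARCHAR) SPLIT ON (?, ?)")) {
    stmt.setBytes(1, new byte[] { 0x10 }); // arbitrary byte[] split points are easiest to bind
    stmt.setBytes(2, new byte[] { 0x20 });
    ParameterMetaData md = stmt.getParameterMetaData();
    System.out.println(md.getParameterCount()); // 2, but no parameter types are described
    stmt.execute();
}
{code}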



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Maryann Xue (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391754#comment-14391754
 ] 

Maryann Xue commented on PHOENIX-1580:
--

I checked the other test cases. And indeed, the flags were all wrong, but I think 
that's because of the incorrect parsing or call chain. I should have checked. 
Sorry about that.

Given that we do not have a union query representation, it might be the best 
way to represent it so far. Or you can create a union statement class if you 
want. But I really don't think it is a good idea to represent "select col from 
a union all select col from b order by col" with a structure like "select col 
from a order by col (select col from b)".

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, Phoenix-1580-v3.patch, phoenix-1580-v1-wipe.patch, 
> phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Maryann Xue (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391725#comment-14391725
 ] 

Maryann Xue commented on PHOENIX-1580:
--

I am quite sure I can at least pass all of your test cases. My patch already 
includes the lines below:
{code}
@@ -58,7 +58,7 @@ public class ParseNodeRewriter extends 
TraverseAllParseNodeVisitor {
 public static SelectStatement rewrite(SelectStatement statement, 
ParseNodeRewriter rewriter) throws SQLException {
 Map aliasMap = rewriter.getAliasMap();
 TableNode from = statement.getFrom();
-TableNode normFrom = from.accept(new TableNodeRewriter(rewriter));
+TableNode normFrom = from == null ? null : from.accept(new 
TableNodeRewriter(rewriter));
 ParseNode where = statement.getWhere();
 ParseNode normWhere = where;
 if (where != null) {
{code}

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, Phoenix-1580-v3.patch, phoenix-1580-v1-wipe.patch, 
> phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391708#comment-14391708
 ] 

James Taylor commented on PHOENIX-1580:
---

You can add this check in ParseNodeRewriter:
{code}
public static SelectStatement rewrite(SelectStatement statement, 
ParseNodeRewriter rewriter) throws SQLException {
Map aliasMap = rewriter.getAliasMap();
TableNode from = statement.getFrom();
TableNode normFrom = from == null ? null : from.accept(new 
TableNodeRewriter(rewriter));
{code}
The ParseNodeRewriter is used to apply changes to a statement. Since the 
statement itself is immutable, a traversal is made to copy the parts of the 
statement that have changed.
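
As a generic illustration of that copy-on-rewrite pattern (not Phoenix code - 
the types and names below are made up):
{code}
// Generic copy-on-rewrite sketch (not Phoenix code): the statement is immutable,
// so a rewrite returns either the original object or a fresh copy with the
// changed parts replaced.
import java.util.Objects;
import java.util.function.UnaryOperator;

final class Stmt {
    final String from;   // may be null, e.g. for the outer select of a union
    final String where;
    Stmt(String from, String where) { this.from = from; this.where = where; }

    static Stmt rewrite(Stmt stmt, UnaryOperator<String> rewriter) {
        String normFrom = stmt.from == null ? null : rewriter.apply(stmt.from);
        String normWhere = stmt.where == null ? null : rewriter.apply(stmt.where);
        // Only allocate a new statement when something actually changed.
        if (Objects.equals(normFrom, stmt.from) && Objects.equals(normWhere, stmt.where)) {
            return stmt;
        }
        return new Stmt(normFrom, normWhere);
    }
}
{code}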


> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, Phoenix-1580-v3.patch, phoenix-1580-v1-wipe.patch, 
> phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391684#comment-14391684
 ] 

Devaraj Das commented on PHOENIX-1580:
--

[~aliciashu] please investigate what could be causing the test failures. Both 
[~maryannxue] and [~jamestaylor] are very experienced in the Phoenix codebase, 
so their feedback really needs to be taken into account.

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, Phoenix-1580-v3.patch, phoenix-1580-v1-wipe.patch, 
> phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391631#comment-14391631
 ] 

Alicia Ying Shu edited comment on PHOENIX-1580 at 4/1/15 11:00 PM:
---

[~maryannxue] After applying your patch, almost all of my tests failed with an 
NPE. The NPE was from ParseNodeRewriter.java. So I added the following in 
ParseNodeRewriter.java. Does it look ok?
if (from == null)
return statement;

Also many existing tests failed that all passed before your patch. It looks like 
ParameterCount gave us trouble again. I also got a lot of NULL results back. You 
can work on it later if you think it is better. I have clearly stated that many 
database systems do not wrap UNION ALL with a dummy select *.

I have made all the changes requested by [~jamestaylor].


was (Author: aliciashu):
[~maryannxue] After applying your patch, almost all of my tests failed with 
NPE. The NPE was from ParseNodeRewriter.java. So I added the following in 
ParseNodeRewriter.java. Does it look ok?
if (from == null)
return statement;

Also many existing tests failed. Those all passed before your patch. Looks like 
ParameterCount gave us trouble again. Also got a lot of NULL result back. Still 
not clear what the additional select wrapping gains us. Can we make my patch 
without the additional select wrapping? You can always work on it if you think 
it is better later. I have clearly stated many database systems do not wrap 
UNION ALL with dummy select *.

I have made all the changes asked by [~jamestaylor]

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, Phoenix-1580-v3.patch, phoenix-1580-v1-wipe.patch, 
> phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391631#comment-14391631
 ] 

Alicia Ying Shu edited comment on PHOENIX-1580 at 4/1/15 10:52 PM:
---

[~maryannxue] After applying your patch, almost all of my tests failed with an 
NPE. The NPE was from ParseNodeRewriter.java. So I added the following in 
ParseNodeRewriter.java. Does it look ok?
if (from == null)
return statement;

Also many existing tests failed that all passed before your patch. It looks like 
ParameterCount gave us trouble again. I also got a lot of NULL results back. It 
is still not clear what the additional select wrapping gains us. Can we keep my 
patch without the additional select wrapping? You can always work on it later if 
you think it is better. I have clearly stated that many database systems do not 
wrap UNION ALL with a dummy select *.

I have made all the changes requested by [~jamestaylor].


was (Author: aliciashu):
[~maryannxue] After applying your patch, almost all of my tests failed with 
NPE. The NPE was from ParseNodeRewriter.java. So I added the following in 
ParseNodeRewriter.java. Does it look ok?
if (from == null)
return statement;

Also many existing tests failed. Those all passed before your patch. Looks like 
ParameterCount gave us trouble again. Still not clear what the additional 
select wrapping gains us. Can we make my patch without the additional select 
wrapping? You can always work on it if you think it is better later.

I have made all the changes asked by [~jamestaylor]

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, Phoenix-1580-v3.patch, phoenix-1580-v1-wipe.patch, 
> phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391631#comment-14391631
 ] 

Alicia Ying Shu edited comment on PHOENIX-1580 at 4/1/15 10:42 PM:
---

[~maryannxue] After applying your patch, almost all of my tests failed with an 
NPE. The NPE was from ParseNodeRewriter.java. So I added the following in 
ParseNodeRewriter.java. Does it look ok?
if (from == null)
return statement;

Also many existing tests failed that all passed before your patch. It looks like 
ParameterCount gave us trouble again. It is still not clear what the additional 
select wrapping gains us. Can we keep my patch without the additional select 
wrapping? You can always work on it later if you think it is better.

I have made all the changes requested by [~jamestaylor].


was (Author: aliciashu):
[~maryannxue] After applying your patch, almost all of my tests failed with 
NPE. The NPE was from ParseNodeRewriter.java. So I added the following in 
ParseNodeRewriter.java. Does it look ok?
if (from == null)
return statement;

Also many existing tests failed. Those all passed before your patch. Looks like 
ParameterCount gave us trouble again. Still not clear what the additional 
select wrapping gains us. Can we make my patch without the additional select 
wrapping?

I have made all the changes asked by [~jamestaylor]

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, Phoenix-1580-v3.patch, phoenix-1580-v1-wipe.patch, 
> phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391631#comment-14391631
 ] 

Alicia Ying Shu edited comment on PHOENIX-1580 at 4/1/15 10:39 PM:
---

[~maryannxue] After applying your patch, almost all of my tests failed with an 
NPE. The NPE was from ParseNodeRewriter.java. So I added the following in 
ParseNodeRewriter.java. Does it look ok?
if (from == null)
return statement;

Also many existing tests failed that all passed before your patch. It looks like 
ParameterCount gave us trouble again. It is still not clear what the additional 
select wrapping gains us. Can we keep my patch without the additional select 
wrapping?

I have made all the changes requested by [~jamestaylor].


was (Author: aliciashu):
[~maryannxue] After applying your patch, almost all of my tests failed with 
NPE. The NPE was from ParseNodeRewriter.java. So I added the following in 
ParseNodeRewriter.java. Does it look ok?
if (from == null)
return statement;

I have made all the changes asked by [~jamestaylor]

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, Phoenix-1580-v3.patch, phoenix-1580-v1-wipe.patch, 
> phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391229#comment-14391229
 ] 

James Taylor edited comment on PHOENIX-1580 at 4/1/15 10:37 PM:


Thanks for the patch, [~ayingshu]. I think it's pretty close at this point. 
It's probably easiest if [~maryannxue] adjusts your patch to deal with creating 
the proper outer select node structure. Here's a little bit of feedback.
- Looks like your indenting is off (3 spaces instead of 4 - see 
http://phoenix.apache.org/contributing.html#Code_conventions). This makes it 
more difficult to see where the actual changes are. 
{code}
 public QueryCompiler(PhoenixStatement statement, SelectStatement select, 
ColumnResolver resolver, List targetColumns, 
ParallelIteratorFactory parallelIteratorFactory, SequenceManager 
sequenceManager, boolean projectTuples) throws SQLException {
-this.statement = statement;
-this.select = select;
-this.resolver = resolver;
-this.scan = new Scan();
-this.targetColumns = targetColumns;
-this.parallelIteratorFactory = parallelIteratorFactory;
-this.sequenceManager = sequenceManager;
-this.projectTuples = projectTuples;
-this.useSortMergeJoin = 
select.getHint().hasHint(Hint.USE_SORT_MERGE_JOIN);
-this.noChildParentJoinOptimization = 
select.getHint().hasHint(Hint.NO_CHILD_PARENT_JOIN_OPTIMIZATION);
-if 
(statement.getConnection().getQueryServices().getLowestClusterHBaseVersion() >= 
PhoenixDatabaseMetaData.ESSENTIAL_FAMILY_VERSION_THRESHOLD) {
-this.scan.setAttribute(LOAD_COLUMN_FAMILIES_ON_DEMAND_ATTR, 
QueryConstants.TRUE);
-}
-if (select.getHint().hasHint(Hint.NO_CACHE)) {
-scan.setCacheBlocks(false);
-}
+   this.statement = statement;
+   this.select = select;
+   this.resolver = resolver;
+   this.scan = new Scan();
+   this.targetColumns = targetColumns;
+   this.parallelIteratorFactory = parallelIteratorFactory;
+   this.sequenceManager = sequenceManager;
+   this.projectTuples = projectTuples;
+   this.useSortMergeJoin = 
select.getHint().hasHint(Hint.USE_SORT_MERGE_JOIN);
+   this.noChildParentJoinOptimization = 
select.getHint().hasHint(Hint.NO_CHILD_PARENT_JOIN_OPTIMIZATION);
+   if 
(statement.getConnection().getQueryServices().getLowestClusterHBaseVersion() >= 
PhoenixDatabaseMetaData.ESSENTIAL_FAMILY_VERSION_THRESHOLD) {
+   this.scan.setAttribute(LOAD_COLUMN_FAMILIES_ON_DEMAND_ATTR, 
QueryConstants.TRUE);
+   }
+   if (select.getHint().hasHint(Hint.NO_CACHE)) {
+   scan.setCacheBlocks(false);
+   }
+
+   scan.setCaching(statement.getFetchSize());
+   this.originalScan = ScanUtil.newScan(scan);
+   if (!select.getSelects().isEmpty()) {
+   this.isUnionAll = true;
+   } else {
+   this.isUnionAll = false;
+   }
+}
{code}
- For UnionResultIterators, you've implemented getIterators() correctly so the 
merge sort should work now, but I think it'd be best to pass through the 
List<QueryPlan> and let it create the List<PeekingResultIterator> (i.e. move 
the code you've already written in UnionPlan to UnionResultIterators). Just 
always create the UnionResultIterators in UnionPlan and in the else branch, 
just do an iterators.getIterators() to create the ConcatResultIterator. The 
reason is that your UnionResultIterators should also properly implement close() 
by calling close() on all the iterators, getScans() by combining the getScans() 
from all QueryPlans, getRanges() by combining all the getRanges() from all 
QueryPlans, size() by returning scans.size(), and explain() by calling 
explain() on each iterator. Note that these scans and ranges are across 
multiple, different tables, so we'll need to see if/how these are used. I think 
at this point it's mainly just the size of each list that's used for display of 
the explain plan, so I think combining them will be ok (and give the user some 
feedback on how many scans are running for the union query).
- Also, some thought needs to go into the explain plan produced by UnionPlan. I 
think holding onto UnionResultIterators and calling explain() on it will get 
you part way there. The other part is making sure the ORDER BY and LIMIT info 
is displayed as expected.
- -In your ParseNodeFactory.select(), I think you'll want to do something 
different for the outer select. It might not matter, as perhaps this will be 
taken care of at compile time based on the unioned statements, but take care not 
to put extra work on the compiler. I think [~maryannxue] can take your patch 
and fix this part. I've made a few changes below.-
{code}
+public SelectStatement select(List statements, 
List orderBy, LimitNode limit, int bindCount, boolean isAggregate) 
{
+boolean isUnion = statements.size() > 1

[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391631#comment-14391631
 ] 

Alicia Ying Shu commented on PHOENIX-1580:
--

[~maryannxue] After applying your patch, almost all of my tests failed with an 
NPE. The NPE was from ParseNodeRewriter.java. So I added the following in 
ParseNodeRewriter.java. Does it look ok?
if (from == null)
return statement;

I have made all the changes requested by [~jamestaylor].

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, Phoenix-1580-v3.patch, phoenix-1580-v1-wipe.patch, 
> phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Maryann Xue (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maryann Xue updated PHOENIX-1580:
-
Attachment: Phoenix-1580-v3.patch

[~ayingshu] Here's a patch with the select statement structure updated. You can 
compare with your earlier patch and see that the difference is just a few lines 
of code but the structure looks much better now. And also the method 
ParseNodeFactory.select() has been reduced to only 4 lines. There is actually 
no need to set any special flag for the outer query and none of these flags 
would be used anyway, since as [~jamestaylor] said, things will all be taken 
care of by sub-select compilation.

You can work from here and make the other changes [~jamestaylor] has suggested. 
Starting with the indentation, hopefully. 

We should be able to support UNION ALL in subqueries, but it requires some 
extra adjustment. I'll do that part.

You were arguing that ORDER-BY expressions should be compiled against the inner 
query tableRef and in your case it worked that way. But it's interesting that 
your test case for union with order-by contains only two rows of data, and the 
result verification has nothing to do with the ordering, so I assume it would 
work whatever way you implement it. You can't do wrong with such test cases. So 
please do make sure that your test cases are sophisticated enough to reveal the 
problems they should.

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, Phoenix-1580-v3.patch, phoenix-1580-v1-wipe.patch, 
> phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Bound parameters in create table statements

2015-04-01 Thread Nick Dimiduk
Not crucial for functionality, but it is a component of our test suite. My goal
with this patch was to build confidence in the implementation by having all
IT tests be parameterized to run both directly against Phoenix and against
Phoenix through the query server. Since this is used by the tests, it
stymies that goal.

On Wed, Apr 1, 2015 at 12:11 PM, James Taylor 
wrote:

> For the SELECT example you use, Phoenix would infer the type based on
> the column type of ID. We don't support parameterized precision/scale
> create statements as you've shown. It's really only for the pre-split
> information, and that's mainly because there's not a good way to pass
> arbitrary byte[] values through constants - it's much easier to bind
> them with stmt.setBytes(1, arbitraryByteArray);
>
> I think it's a detail, though, that can be left for later IMHO. It's
> not crucial functionality.
>
> FWIW, Phoenix infers types whenever possible, but sometimes it's
> ambiguous. For example:
> SELECT ? + ? FROM T;
> In theory, param 1 could be bound to a TIMESTAMP and param 2 to an
> INTEGER. Or they could both be DECIMAL, etc. If Phoenix can't figure
> it out, we use null for the type in the metadata APIs.
>
> On Wed, Apr 1, 2015 at 11:49 AM, Nick Dimiduk  wrote:
> > Adding back dev@calcite
> >
> > On Wed, Apr 1, 2015 at 11:48 AM, Nick Dimiduk 
> wrote:
> >
> >> Poking around with HSQLDB, it seems parameter metadata is made available
> >> after statement preparation for select statements. (Presumably inferred
> >> from column type, as in "SELECT * FROM TEST_TABLE WHERE id = ?". It does
> >> not support parameterized create statements:
> >>
> >> user=> (.prepareStatement conn "CREATE TABLE TEST_TABLE_P(id INTEGER NOT
> >> NULL, pk varchar(?) NOT NULL)")
> >>
> >> HsqlException unexpected token: ?  org.hsqldb.error.Error.parseError
> (:-1)
> >>
> >> I think that if Phoenix is going to support parameterized create table
> >> statements, it should infer parameter types and populate
> ParameterMetaData
> >> accordingly.
> >>
> >> On Tue, Mar 31, 2015 at 1:06 PM, Nick Dimiduk 
> wrote:
> >>
> >>> Hi Gabriel,
> >>>
> >>> Yes, we do this in the Phoenix test harness for parameterizing split
> >>> points. See o.a.p.q.BaseTest#createTestTable(String, String, byte[][],
> >>> Long, boolean). I ran into this while porting QueryIT to run vs. the
> query
> >>> server.
> >>>
> >>> -n
> >>>
> >>> On Tue, Mar 31, 2015 at 11:58 AM, Gabriel Reid  >
> >>> wrote:
> >>>
>  Could you explain how you're using prepared statements for DDL
>  statements?
>  Are you parameterizing parts of the DDL statements with question
> marks to
>  be filled in by the PreparedStatement parameters?
> 
>  On Tue, Mar 31, 2015 at 3:48 AM Nick Dimiduk 
> wrote:
> 
>  > Working on PHOENIX-971, I'm wondering what the expected behavior
>  should be
>  > for PreparedStatements created from CREATE TABLE sql with
> parameters.
>  > Calcite's Avatica depends on the statement to identify the parameter
>  types
>  > at compile time, and return meaningful values for method
> invocations on
>  > ParameterMetaData. It looks like Phoenix's CreateTableCompiler is
>  > recognizing the number of parameters in my sql, but is not inferring
>  type
>  > information.
>  >
>  > My question is: should Avatica be more flexible in allowing "fuzzy"
>  > signatures for PreparedStatements, or should Phoenix's
>  > StatementPlan#compile methods be determining parameter types in all
>  cases?
>  >
>  > Thanks,
>  > Nick
>  >
> 
> >>>
> >>>
> >>
>


[jira] [Commented] (PHOENIX-1794) Support Long.MIN_VALUE for phoenix BIGINT type.

2015-04-01 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391294#comment-14391294
 ] 

James Taylor commented on PHOENIX-1794:
---

Rather than just treating MIN_LONG specially, we should just generally handle 
constants that don't fit into a Long as a BigDecimal. The code is nearly 
identical, but we get more bang-for-the-buck. Just check if bd <= MAX_LONG. If 
yes, create the literal (use bd.longValue() instead of the string as there's no 
need to reparse the String). If it's too big, just keep it as a BigDecimal - 
your test has shown that this works fine and is coerced to a long when possible 
in the UPSERT statement. You can add a few more tests around when it's bigger 
than ABS(MIN_LONG) too. Also, I'd do the same range check in DOUBLE in the 
grammar file so we handle that case nicely as well, since you're mucking around 
with it anyway.
{code}
--- phoenix-core/src/main/antlr3/PhoenixSQL.g   (revision 
2bf8c6788efb6dad7513f7bb14d2e9d75d7b50e3)
+++ phoenix-core/src/main/antlr3/PhoenixSQL.g   (revision )
@@ -204,7 +204,9 @@
 private int anonBindNum;
 private ParseNodeFactory factory;
 private ParseContext.Stack contextStack = new ParseContext.Stack();
-
+
+private static BigDecimal MIN_LONG = new 
BigDecimal(Long.MIN_VALUE).negate();
+
 public void setParseNodeFactory(ParseNodeFactory factory) {
 this.factory = factory;
 }
@@ -932,8 +934,13 @@
 :   l=LONG {
 try {
 String lt = l.getText();
-Long v = Long.valueOf(lt.substring(0, lt.length() - 1));
-ret = factory.literal(v);
+lt = lt.substring(0, lt.length() - 1);
+BigDecimal bd = new BigDecimal(lt);
+if (MIN_LONG.equals(bd)) {
+ret = factory.literal(bd);
+} else {
+ret = factory.literal(Long.valueOf(lt));
+}
 } catch (NumberFormatException e) { // Shouldn't happen since we 
just parsed a number
 throwRecognitionException(l);
 }
{code}
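Put differently, the general shape of the check being suggested is roughly the following (an illustrative sketch with a hypothetical helper method, not the actual grammar action):
{code}
import java.math.BigDecimal;

class LongLiteralSketch {
    private static final BigDecimal MIN_LONG = BigDecimal.valueOf(Long.MIN_VALUE);
    private static final BigDecimal MAX_LONG = BigDecimal.valueOf(Long.MAX_VALUE);

    // Hypothetical helper: returns a Long when the constant fits, otherwise the BigDecimal itself.
    static Object toLiteralValue(String text) {
        BigDecimal bd = new BigDecimal(text);
        if (bd.compareTo(MIN_LONG) >= 0 && bd.compareTo(MAX_LONG) <= 0) {
            return bd.longValue(); // fits in a long; no need to reparse the String
        }
        return bd; // out of long range: keep it as a BigDecimal and let the UPSERT coerce it
    }
}
{code}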

> Support Long.MIN_VALUE for phoenix BIGINT type.
> ---
>
> Key: PHOENIX-1794
> URL: https://issues.apache.org/jira/browse/PHOENIX-1794
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Dave Hacker
>Assignee: Dave Hacker
> Attachments: PHOENIX-1794.patch
>
>
> Currently Possible values for BIGINT type: -9223372036854775807 to 
> 9223372036854775807. 
> This is not fully inclusive of the set of all Long values in java, to do so 
> we need to support Long.MIN_VALUE = -9223372036854775808



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1287) Use the joni byte[] regex engine in place of j.u.regex

2015-04-01 Thread Mujtaba Chohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mujtaba Chohan updated PHOENIX-1287:

Attachment: add_varchar_to_performance_script.patch

[~jamestaylor], [~shuxi0ng] Attached add_varchar_to_performance_script.patch, 
which adds a VARCHAR column to the performance.py script. 

FYI performance.py script usage:
performance.py <zookeeper> <row count>

> Use the joni byte[] regex engine in place of j.u.regex
> --
>
> Key: PHOENIX-1287
> URL: https://issues.apache.org/jira/browse/PHOENIX-1287
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Shuxiong Ye
>  Labels: gsoc2015
> Attachments: add_varchar_to_performance_script.patch
>
>
> See HBASE-11907. We'd get a 2x perf benefit plus it's driven off of byte[] 
> instead of strings.Thanks for the pointer, [~apurtell].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1748) Appling TRUNC|ROUND|FLOOR|CEIL with DAY|HOUR|MINUTE|SECOND|MILLISECOND on TIMESTAMP should not truncate value to the DATE only.

2015-04-01 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391277#comment-14391277
 ] 

James Taylor commented on PHOENIX-1748:
---

[~tdsilva] - would you mind committing Dave's test? 
RoundFloorCeilFunctionsEnd2EndIT is probably a better home.

> Appling TRUNC|ROUND|FLOOR|CEIL with DAY|HOUR|MINUTE|SECOND|MILLISECOND on 
> TIMESTAMP should not truncate value to the DATE only.
> ---
>
> Key: PHOENIX-1748
> URL: https://issues.apache.org/jira/browse/PHOENIX-1748
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Serhiy Bilousov
>Assignee: Dave Hacker
> Attachments: PHOENIX-1748.patch
>
>
> *Given* that the input value is a "YYYY-MM-DD HH:MM:SS.nnn" type of TIMESTAMP 
> (UNSIGNED_TIMESTAMP, DATE, TIME etc.)
> *When* applying TRUNC|ROUND|FLOOR|CEIL with DAY|HOUR|MINUTE|SECOND|MILLISECOND
> *Then* the result should be "YYYY-MM-DD HH:MM:SS.nnn" 
> But "YYYY-MM-DD" is returned instead.   
> Basically when I do TRUNC on timestamp I would expect it to be timestamp with 
> relevant parts truncated so for example I can GROUP BY on TRUNC 
> (timestamp,'HOUR') and have my hourly aggregation. 
> Here is test queries with cast(current_date() AS timestamp).
> {noformat}
>  SELECT
> dt
> ,TRUNC(dt,'DAY') AS trunc_day_from_dt
> ,TRUNC(dt,'HOUR') AS trunc_hour_from_dt
> ,TRUNC(dt,'MINUTE') AS trunc_min_from_dt
> ,TRUNC(dt,'SECOND') AS trunc_sec_from_dt
> ,TRUNC(dt,'MILLISECOND') AS trunc_mil_from_dt
>  FROM
> (SELECT current_date() AS d, cast(current_date() AS timestamp) AS dt, 
> TO_NUMBER(current_date()) e FROM system.catalog LIMIT 1) t;
> +--+-+-+-+-+-+
> | TO_TIMESTAMP('2015-03-08 09:09:11.665')  |  TRUNC_DAY_FROM_DT  | 
> TRUNC_HOUR_FROM_DT  |  TRUNC_MIN_FROM_DT  |  TRUNC_SEC_FROM_DT  |  
> TRUNC_MIL_FROM_DT  |
> +--+-+-+-+-+-+
> | 2015-03-08 09:09:11.665  | 2015-03-08  | 2015-03-08 
>  | 2015-03-08  | 2015-03-08  | 2015-03-08  |
> +--+-+-+-+-+-+
> 1 row selected (0.066 seconds)
>  SELECT
> dt
> ,ROUND(dt,'DAY') AS round_day_from_d
> ,ROUND(dt,'HOUR') AS round_hour_from_d
> ,ROUND(dt,'MINUTE') AS round_min_from_d
> ,ROUND(dt,'SECOND') AS round_sec_from_d
> ,ROUND(dt,'MILLISECOND') AS round_mil_from_d
>  FROM
> (SELECT current_date() AS d, cast(current_date() AS timestamp) AS dt, 
> TO_NUMBER(current_date()) e FROM system.catalog LIMIT 1) t;
> +--+-+-+-+-+--+
> | TO_TIMESTAMP('2015-03-08 09:09:11.782')  |  ROUND_DAY_FROM_D   |  
> ROUND_HOUR_FROM_D  |  ROUND_MIN_FROM_D   |  ROUND_SEC_FROM_D   | 
> ROUND_MIL_FROM_D |
> +--+-+-+-+-+--+
> | 2015-03-08 09:09:11.782  | 2015-03-08  | 2015-03-08 
>  | 2015-03-08  | 2015-03-08  | 2015-03-08 
> 09:09:11.782  |
> +--+-+-+-+-+--+
> 1 row selected (0.06 seconds)
>  SELECT
> dt
> ,FLOOR(dt,'DAY') AS floor_day_from_dt
> ,FLOOR(dt,'HOUR') AS floor_hour_from_dt
> ,FLOOR(dt,'MINUTE') AS floor_min_from_dt
> ,FLOOR(dt,'SECOND') AS floor_sec_from_dt
> ,FLOOR(dt,'MILLISECOND') AS floor_mil_from_dt
>  FROM
> (SELECT current_date() AS d, cast(current_date() AS timestamp) AS dt, 
> TO_NUMBER(current_date()) e FROM system.catalog LIMIT 1) t;
> +--+-+-+-+-+-+
> | TO_TIMESTAMP('2015-03-08 09:09:11.895')  |  FLOOR_DAY_FROM_DT  | 
> FLOOR_HOUR_FROM_DT  |  FLOOR_MIN_FROM_DT  |  FLOOR_SEC_FROM_DT  |  
> FLOOR_MIL_FROM_DT  |
> +--+-+-+-+-+-+
> | 2015-03-08 09:09:11.895  | 2015-03-08 

[jira] [Commented] (PHOENIX-1748) Appling TRUNC|ROUND|FLOOR|CEIL with DAY|HOUR|MINUTE|SECOND|MILLISECOND on TIMESTAMP should not truncate value to the DATE only.

2015-04-01 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391273#comment-14391273
 ] 

James Taylor commented on PHOENIX-1748:
---

The only difference between a TIMESTAMP and a DATE in Phoenix is that TIMESTAMP 
stores 4 extra bytes for the nano part. If you TRUNC, ROUND, FLOOR, CEILING any 
TIMESTAMP, by definition you no longer have the nano part. We wouldn't want 
these to return a TIMESTAMP to carry around 4 extra zero bytes. So these work 
as designed.

sqlline and SQuirreL are separate open source projects. Both support a way of 
showing the full granularity of the underlying DATE; I don't remember offhand 
how - ask on our or their mailing lists. This really has nothing to do with 
Phoenix.

> Appling TRUNC|ROUND|FLOOR|CEIL with DAY|HOUR|MINUTE|SECOND|MILLISECOND on 
> TIMESTAMP should not truncate value to the DATE only.
> ---
>
> Key: PHOENIX-1748
> URL: https://issues.apache.org/jira/browse/PHOENIX-1748
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Serhiy Bilousov
>Assignee: Dave Hacker
> Attachments: PHOENIX-1748.patch
>
>
> *Given* that the input value is a "YYYY-MM-DD HH:MM:SS.nnn" type of TIMESTAMP 
> (UNSIGNED_TIMESTAMP, DATE, TIME etc.)
> *When* applying TRUNC|ROUND|FLOOR|CEIL with DAY|HOUR|MINUTE|SECOND|MILLISECOND
> *Then* the result should be "YYYY-MM-DD HH:MM:SS.nnn" 
> But "YYYY-MM-DD" is returned instead.   
> Basically when I do TRUNC on timestamp I would expect it to be timestamp with 
> relevant parts truncated so for example I can GROUP BY on TRUNC 
> (timestamp,'HOUR') and have my hourly aggregation. 
> Here is test queries with cast(current_date() AS timestamp).
> {noformat}
>  SELECT
> dt
> ,TRUNC(dt,'DAY') AS trunc_day_from_dt
> ,TRUNC(dt,'HOUR') AS trunc_hour_from_dt
> ,TRUNC(dt,'MINUTE') AS trunc_min_from_dt
> ,TRUNC(dt,'SECOND') AS trunc_sec_from_dt
> ,TRUNC(dt,'MILLISECOND') AS trunc_mil_from_dt
>  FROM
> (SELECT current_date() AS d, cast(current_date() AS timestamp) AS dt, 
> TO_NUMBER(current_date()) e FROM system.catalog LIMIT 1) t;
> +--+-+-+-+-+-+
> | TO_TIMESTAMP('2015-03-08 09:09:11.665')  |  TRUNC_DAY_FROM_DT  | 
> TRUNC_HOUR_FROM_DT  |  TRUNC_MIN_FROM_DT  |  TRUNC_SEC_FROM_DT  |  
> TRUNC_MIL_FROM_DT  |
> +--+-+-+-+-+-+
> | 2015-03-08 09:09:11.665  | 2015-03-08  | 2015-03-08 
>  | 2015-03-08  | 2015-03-08  | 2015-03-08  |
> +--+-+-+-+-+-+
> 1 row selected (0.066 seconds)
>  SELECT
> dt
> ,ROUND(dt,'DAY') AS round_day_from_d
> ,ROUND(dt,'HOUR') AS round_hour_from_d
> ,ROUND(dt,'MINUTE') AS round_min_from_d
> ,ROUND(dt,'SECOND') AS round_sec_from_d
> ,ROUND(dt,'MILLISECOND') AS round_mil_from_d
>  FROM
> (SELECT current_date() AS d, cast(current_date() AS timestamp) AS dt, 
> TO_NUMBER(current_date()) e FROM system.catalog LIMIT 1) t;
> +--+-+-+-+-+--+
> | TO_TIMESTAMP('2015-03-08 09:09:11.782')  |  ROUND_DAY_FROM_D   |  
> ROUND_HOUR_FROM_D  |  ROUND_MIN_FROM_D   |  ROUND_SEC_FROM_D   | 
> ROUND_MIL_FROM_D |
> +--+-+-+-+-+--+
> | 2015-03-08 09:09:11.782  | 2015-03-08  | 2015-03-08 
>  | 2015-03-08  | 2015-03-08  | 2015-03-08 
> 09:09:11.782  |
> +--+-+-+-+-+--+
> 1 row selected (0.06 seconds)
>  SELECT
> dt
> ,FLOOR(dt,'DAY') AS floor_day_from_dt
> ,FLOOR(dt,'HOUR') AS floor_hour_from_dt
> ,FLOOR(dt,'MINUTE') AS floor_min_from_dt
> ,FLOOR(dt,'SECOND') AS floor_sec_from_dt
> ,FLOOR(dt,'MILLISECOND') AS floor_mil_from_dt
>  FROM
> (SELECT current_date() AS d, cast(current_date() AS timestamp) AS dt, 
> TO_NUMBER(current_date()) e FROM system.catalog LIMIT 1) t;
> +--+-

Re: Bound parameters in create table statements

2015-04-01 Thread James Taylor
For the SELECT example you use, Phoenix would infer the type based on
the column type of ID. We don't support parameterized precision/scale
create statements as you've shown. It's really only for the pre-split
information, and that's mainly because there's not a good way to pass
arbitrary byte[] values through constants - it's much easier to bind
them with stmt.setBytes(1, arbitraryByteArray);

I think it's a detail, though, that can be left for later IMHO. It's
not crucial functionality.

FWIW, Phoenix infers types whenever possible, but sometimes it's
ambiguous. For example:
SELECT ? + ? FROM T;
In theory, param 1 could be bound to a TIMESTAMP and param 2 to an
INTEGER. Or they could both be DECIMAL, etc. If Phoenix can't figure
it out, we use null for the type in the metadata APIs.
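For reference, the split-point binding being described looks roughly like this
(illustrative only; the table definition and byte values are made up):

  PreparedStatement stmt = conn.prepareStatement(
      "CREATE TABLE IF NOT EXISTS T (K VARBINARY NOT NULL PRIMARY KEY, V VARCHAR)"
      + " SPLIT ON (?, ?, ?)");
  stmt.setBytes(1, new byte[] { 10 });  // arbitrary byte[] split points
  stmt.setBytes(2, new byte[] { 20 });
  stmt.setBytes(3, new byte[] { 30 });
  stmt.execute();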

On Wed, Apr 1, 2015 at 11:49 AM, Nick Dimiduk  wrote:
> Adding back dev@calcite
>
> On Wed, Apr 1, 2015 at 11:48 AM, Nick Dimiduk  wrote:
>
>> Poking around with HSQLDB, it seems parameter metadata is made available
>> after statement preparation for select statements. (Presumably inferred
>> from column type, as in "SELECT * FROM TEST_TABLE WHERE id = ?". It does
>> not support parameterized create statements:
>>
>> user=> (.prepareStatement conn "CREATE TABLE TEST_TABLE_P(id INTEGER NOT
>> NULL, pk varchar(?) NOT NULL)")
>>
>> HsqlException unexpected token: ?  org.hsqldb.error.Error.parseError (:-1)
>>
>> I think that if Phoenix is going to support parameterized create table
>> statements, it should infer parameter types and populate ParameterMetaData
>> accordingly.
>>
>> On Tue, Mar 31, 2015 at 1:06 PM, Nick Dimiduk  wrote:
>>
>>> Hi Gabriel,
>>>
>>> Yes, we do this in the Phoenix test harness for parameterizing split
>>> points. See o.a.p.q.BaseTest#createTestTable(String, String, byte[][],
>>> Long, boolean). I ran into this while porting QueryIT to run vs. the query
>>> server.
>>>
>>> -n
>>>
>>> On Tue, Mar 31, 2015 at 11:58 AM, Gabriel Reid 
>>> wrote:
>>>
 Could you explain how you're using prepared statements for DDL
 statements?
 Are you parameterizing parts of the DDL statements with question marks to
 be filled in by the PreparedStatement parameters?

 On Tue, Mar 31, 2015 at 3:48 AM Nick Dimiduk  wrote:

 > Working on PHOENIX-971, I'm wondering what the expected behavior
 should be
 > for PreparedStatements created from CREATE TABLE sql with parameters.
 > Calcite's Avatica depends on the statement to identify the parameter
 types
 > at compile time, and return meaningful values for method invocations on
 > ParameterMetaData. It looks like Phoenix's CreateTableCompiler is
 > recognizing the number of parameters in my sql, but is not inferring
 type
 > information.
 >
 > My question is: should Avatica be more flexible in allowing "fuzzy"
 > signatures for PreparedStatements, or should Phoenix's
 > StatementPlan#compile methods be determining parameter types in all
 cases?
 >
 > Thanks,
 > Nick
 >

>>>
>>>
>>


[jira] [Commented] (PHOENIX-1748) Appling TRUNC|ROUND|FLOOR|CEIL with DAY|HOUR|MINUTE|SECOND|MILLISECOND on TIMESTAMP should not truncate value to the DATE only.

2015-04-01 Thread Dave Hacker (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391259#comment-14391259
 ] 

Dave Hacker commented on PHOENIX-1748:
--

[~jamestaylor] This may actually be an issue.  In RoundTimestampExpression, 
CeilTimestampExpression and FloorDateExpression we coerce TIMESTAMP to DATE, 
claiming that nanos have no effect.  This appears to change the return type of 
the expression.  The reason we see it working in the test is because we use 
ResultSet.getTimestamp(), but ResultSetMetaData.getColumnTypeName() is 
returning DATE instead of TIMESTAMP, as Serhiy points out.  The question is: is 
this supposed to happen, or is it a bug?  If we change it to TIMESTAMP, will we 
break others?
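
A quick way to observe what's being described (illustrative JDBC snippet, assuming an open Connection conn; the table and column names are made up):
{code}
ResultSet rs = conn.createStatement().executeQuery(
    "SELECT TRUNC(ts, 'HOUR') FROM my_table LIMIT 1");
ResultSetMetaData md = rs.getMetaData();
// Per the comment above, this reports DATE rather than TIMESTAMP.
System.out.println(md.getColumnTypeName(1));
while (rs.next()) {
    // The value itself can still be read as a java.sql.Timestamp.
    System.out.println(rs.getTimestamp(1));
}
{code}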

> Appling TRUNC|ROUND|FLOOR|CEIL with DAY|HOUR|MINUTE|SECOND|MILLISECOND on 
> TIMESTAMP should not truncate value to the DATE only.
> ---
>
> Key: PHOENIX-1748
> URL: https://issues.apache.org/jira/browse/PHOENIX-1748
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Serhiy Bilousov
>Assignee: Dave Hacker
> Attachments: PHOENIX-1748.patch
>
>
> *Given* that the input value is a "YYYY-MM-DD HH:MM:SS.nnn" type of TIMESTAMP 
> (UNSIGNED_TIMESTAMP, DATE, TIME etc.)
> *When* applying TRUNC|ROUND|FLOOR|CEIL with DAY|HOUR|MINUTE|SECOND|MILLISECOND
> *Then* the result should be "YYYY-MM-DD HH:MM:SS.nnn" 
> But "YYYY-MM-DD" is returned instead.   
> Basically when I do TRUNC on timestamp I would expect it to be timestamp with 
> relevant parts truncated so for example I can GROUP BY on TRUNC 
> (timestamp,'HOUR') and have my hourly aggregation. 
> Here is test queries with cast(current_date() AS timestamp).
> {noformat}
>  SELECT
> dt
> ,TRUNC(dt,'DAY') AS trunc_day_from_dt
> ,TRUNC(dt,'HOUR') AS trunc_hour_from_dt
> ,TRUNC(dt,'MINUTE') AS trunc_min_from_dt
> ,TRUNC(dt,'SECOND') AS trunc_sec_from_dt
> ,TRUNC(dt,'MILLISECOND') AS trunc_mil_from_dt
>  FROM
> (SELECT current_date() AS d, cast(current_date() AS timestamp) AS dt, 
> TO_NUMBER(current_date()) e FROM system.catalog LIMIT 1) t;
> +--+-+-+-+-+-+
> | TO_TIMESTAMP('2015-03-08 09:09:11.665')  |  TRUNC_DAY_FROM_DT  | 
> TRUNC_HOUR_FROM_DT  |  TRUNC_MIN_FROM_DT  |  TRUNC_SEC_FROM_DT  |  
> TRUNC_MIL_FROM_DT  |
> +--+-+-+-+-+-+
> | 2015-03-08 09:09:11.665  | 2015-03-08  | 2015-03-08 
>  | 2015-03-08  | 2015-03-08  | 2015-03-08  |
> +--+-+-+-+-+-+
> 1 row selected (0.066 seconds)
>  SELECT
> dt
> ,ROUND(dt,'DAY') AS round_day_from_d
> ,ROUND(dt,'HOUR') AS round_hour_from_d
> ,ROUND(dt,'MINUTE') AS round_min_from_d
> ,ROUND(dt,'SECOND') AS round_sec_from_d
> ,ROUND(dt,'MILLISECOND') AS round_mil_from_d
>  FROM
> (SELECT current_date() AS d, cast(current_date() AS timestamp) AS dt, 
> TO_NUMBER(current_date()) e FROM system.catalog LIMIT 1) t;
> +--+-+-+-+-+--+
> | TO_TIMESTAMP('2015-03-08 09:09:11.782')  |  ROUND_DAY_FROM_D   |  
> ROUND_HOUR_FROM_D  |  ROUND_MIN_FROM_D   |  ROUND_SEC_FROM_D   | 
> ROUND_MIL_FROM_D |
> +--+-+-+-+-+--+
> | 2015-03-08 09:09:11.782  | 2015-03-08  | 2015-03-08 
>  | 2015-03-08  | 2015-03-08  | 2015-03-08 
> 09:09:11.782  |
> +--+-+-+-+-+--+
> 1 row selected (0.06 seconds)
>  SELECT
> dt
> ,FLOOR(dt,'DAY') AS floor_day_from_dt
> ,FLOOR(dt,'HOUR') AS floor_hour_from_dt
> ,FLOOR(dt,'MINUTE') AS floor_min_from_dt
> ,FLOOR(dt,'SECOND') AS floor_sec_from_dt
> ,FLOOR(dt,'MILLISECOND') AS floor_mil_from_dt
>  FROM
> (SELECT current_date() AS d, cast(current_date() AS timestamp) AS dt, 
> TO_NUMBER(current_date()) e FROM system.catalog LIMIT 1) t;
> +--+-+-+--

[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391241#comment-14391241
 ] 

James Taylor commented on PHOENIX-1580:
---

One more thing: you don't need this check that wraps the final ResultIterator in 
a SequenceResultIterator, because the sequence allocation would occur in each 
child select statement and not at this level:
{code}
+if (context.getSequenceManager().getSequenceCount() > 0) {
+scanner = new SequenceResultIterator(scanner, 
context.getSequenceManager());
+}
{code}

[~ayingshu] - if you could fix the indenting issues, do this minor change, and 
make sure your patch is rebased to the latest, that would be helpful. Thanks.

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, phoenix-1580-v1-wipe.patch, phoenix-1580.patch, 
> unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391229#comment-14391229
 ] 

James Taylor commented on PHOENIX-1580:
---

Thanks for the patch, [~ayingshu]. I think it's pretty close at this point. 
It's probably easiest if [~maryannxue] adjusts your patch to deal with creating 
the proper outer select node structure. Here's a little bit of feedback.
- Looks like your indenting is off (3 spaces instead of 4 - see 
http://phoenix.apache.org/contributing.html#Code_conventions). This makes it 
more difficult to see where the actual changes are. 
{code}
 public QueryCompiler(PhoenixStatement statement, SelectStatement select, 
ColumnResolver resolver, List targetColumns, 
ParallelIteratorFactory parallelIteratorFactory, SequenceManager 
sequenceManager, boolean projectTuples) throws SQLException {
-this.statement = statement;
-this.select = select;
-this.resolver = resolver;
-this.scan = new Scan();
-this.targetColumns = targetColumns;
-this.parallelIteratorFactory = parallelIteratorFactory;
-this.sequenceManager = sequenceManager;
-this.projectTuples = projectTuples;
-this.useSortMergeJoin = 
select.getHint().hasHint(Hint.USE_SORT_MERGE_JOIN);
-this.noChildParentJoinOptimization = 
select.getHint().hasHint(Hint.NO_CHILD_PARENT_JOIN_OPTIMIZATION);
-if 
(statement.getConnection().getQueryServices().getLowestClusterHBaseVersion() >= 
PhoenixDatabaseMetaData.ESSENTIAL_FAMILY_VERSION_THRESHOLD) {
-this.scan.setAttribute(LOAD_COLUMN_FAMILIES_ON_DEMAND_ATTR, 
QueryConstants.TRUE);
-}
-if (select.getHint().hasHint(Hint.NO_CACHE)) {
-scan.setCacheBlocks(false);
-}
+   this.statement = statement;
+   this.select = select;
+   this.resolver = resolver;
+   this.scan = new Scan();
+   this.targetColumns = targetColumns;
+   this.parallelIteratorFactory = parallelIteratorFactory;
+   this.sequenceManager = sequenceManager;
+   this.projectTuples = projectTuples;
+   this.useSortMergeJoin = 
select.getHint().hasHint(Hint.USE_SORT_MERGE_JOIN);
+   this.noChildParentJoinOptimization = 
select.getHint().hasHint(Hint.NO_CHILD_PARENT_JOIN_OPTIMIZATION);
+   if 
(statement.getConnection().getQueryServices().getLowestClusterHBaseVersion() >= 
PhoenixDatabaseMetaData.ESSENTIAL_FAMILY_VERSION_THRESHOLD) {
+   this.scan.setAttribute(LOAD_COLUMN_FAMILIES_ON_DEMAND_ATTR, 
QueryConstants.TRUE);
+   }
+   if (select.getHint().hasHint(Hint.NO_CACHE)) {
+   scan.setCacheBlocks(false);
+   }
+
+   scan.setCaching(statement.getFetchSize());
+   this.originalScan = ScanUtil.newScan(scan);
+   if (!select.getSelects().isEmpty()) {
+   this.isUnionAll = true;
+   } else {
+   this.isUnionAll = false;
+   }
+}
{code}
- For UnionResultIterators, you've implemented getIterators() correctly so the 
merge sort should work now, but I think it'd be best to pass through the 
List<QueryPlan> and let it create the List<PeekingResultIterator> (i.e. move 
the code you've already written in UnionPlan to UnionResultIterators). Just 
always create the UnionResultIterators in UnionPlan, and in the else branch 
just do an iterators.getIterators() to create the ConcatResultIterator (the 
UnionPlan side of this is sketched at the end of this comment). The reason is 
that your UnionResultIterators should also properly implement close() by 
calling close() on all the iterators, getScans() by combining the getScans() 
from all QueryPlans, getRanges() by combining all the getRanges() from all 
QueryPlans, size() by returning scans.size(), and explain() by calling 
explain() on each iterator. Note that these scans and ranges are across 
different, multiple tables, so we'll need to see if/how these are used. I think 
at this point it's mainly just the size of each list that's used for display of 
the explain plan, so I think combining them will be ok (and give the user some 
feedback on how many scans are running for the union query).
- Also, some thought needs to go into the explain plan produced by UnionPlan. I 
think holding onto UnionResultIterators and calling explain() on it will get 
you part way there. The other part is making sure the ORDER BY and LIMIT info 
is displayed as expected.
- In your ParseNodeFactory.select(), I think you'll want to do something 
different for the outer select. It might not matter, as perhaps this will be 
taken care of at compile time based on the unioned statements, but take care not 
to put extra work on the compiler. I think [~maryannxue] can take your patch 
and fix this part. I've made a few changes below.
{code}
+public SelectStatement select(List<SelectStatement> statements, List<OrderByNode> orderBy, LimitNode limit, int bindCount, boolean isAggregate) {
+boolean isUnion = statements.size() > 1;
+boolean hasSequence = false;
+fo
{code}
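To make the suggested division of labor concrete, UnionPlan.iterator() might end up looking roughly like this (illustrative only; the field names, the UnionResultIterators constructor, and the use of MergeSortTopNResultIterator/ConcatResultIterator/LimitingResultIterator here are assumptions, not the actual patch):
{code}
// Sketch: always build the UnionResultIterators, then decide whether the children
// need a merge sort (ORDER BY pushed into the sub-selects) or a plain concatenation.
public ResultIterator iterator() throws SQLException {
    UnionResultIterators iterators = new UnionResultIterators(plans); // hypothetical constructor
    if (!orderBy.getOrderByExpressions().isEmpty()) {
        // Children are already sorted on the pushed-down ORDER BY, so merge-sort them.
        return new MergeSortTopNResultIterator(iterators, limit, orderBy.getOrderByExpressions());
    }
    ResultIterator scanner = new ConcatResultIterator(iterators);
    if (limit != null) {
        scanner = new LimitingResultIterator(scanner, limit); // assumed limiting wrapper
    }
    return scanner;
}
{code}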

Re: Bound parameters in create table statements

2015-04-01 Thread Nick Dimiduk
Adding back dev@calcite

On Wed, Apr 1, 2015 at 11:48 AM, Nick Dimiduk  wrote:

> Poking around with HSQLDB, it seems parameter metadata is made available
> after statement preparation for select statements. (Presumably inferred
> from column type, as in "SELECT * FROM TEST_TABLE WHERE id = ?". It does
> not support parameterized create statements:
>
> user=> (.prepareStatement conn "CREATE TABLE TEST_TABLE_P(id INTEGER NOT
> NULL, pk varchar(?) NOT NULL)")
>
> HsqlException unexpected token: ?  org.hsqldb.error.Error.parseError (:-1)
>
> I think that if Phoenix is going to support parameterized create table
> statements, it should infer parameter types and populate ParameterMetaData
> accordingly.
>
> On Tue, Mar 31, 2015 at 1:06 PM, Nick Dimiduk  wrote:
>
>> Hi Gabriel,
>>
>> Yes, we do this in the Phoenix test harness for parameterizing split
>> points. See o.a.p.q.BaseTest#createTestTable(String, String, byte[][],
>> Long, boolean). I ran into this while porting QueryIT to run vs. the query
>> server.
>>
>> -n
>>
>> On Tue, Mar 31, 2015 at 11:58 AM, Gabriel Reid 
>> wrote:
>>
>>> Could you explain how you're using prepared statements for DDL
>>> statements?
>>> Are you parameterizing parts of the DDL statements with question marks to
>>> be filled in by the PreparedStatement parameters?
>>>
>>> On Tue, Mar 31, 2015 at 3:48 AM Nick Dimiduk  wrote:
>>>
>>> > Working on PHOENIX-971, I'm wondering what the expected behavior
>>> should be
>>> > for PreparedStatements created from CREATE TABLE sql with parameters.
>>> > Calcite's Avatica depends on the statement to identify the parameter
>>> types
>>> > at compile time, and return meaningful values for method invocations on
>>> > ParameterMetaData. It looks like Phoenix's CreateTableCompiler is
>>> > recognizing the number of parameters in my sql, but is not inferring
>>> type
>>> > information.
>>> >
>>> > My question is: should Avatica be more flexible in allowing "fuzzy"
>>> > signatures for PreparedStatements, or should Phoenix's
>>> > StatementPlan#compile methods be determining parameter types in all
>>> cases?
>>> >
>>> > Thanks,
>>> > Nick
>>> >
>>>
>>
>>
>


Re: Bound parameters in create table statements

2015-04-01 Thread Nick Dimiduk
Poking around with HSQLDB, it seems parameter metadata is made available
after statement preparation for select statements. (Presumably inferred
from column type, as in "SELECT * FROM TEST_TABLE WHERE id = ?". It does
not support parameterized create statements:

user=> (.prepareStatement conn "CREATE TABLE TEST_TABLE_P(id INTEGER NOT
NULL, pk varchar(?) NOT NULL)")

HsqlException unexpected token: ?  org.hsqldb.error.Error.parseError (:-1)

I think that if Phoenix is going to support parameterized create table
statements, it should infer parameter types and populate ParameterMetaData
accordingly.

On Tue, Mar 31, 2015 at 1:06 PM, Nick Dimiduk  wrote:

> Hi Gabriel,
>
> Yes, we do this in the Phoenix test harness for parameterizing split
> points. See o.a.p.q.BaseTest#createTestTable(String, String, byte[][],
> Long, boolean). I ran into this while porting QueryIT to run vs. the query
> server.
>
> -n
>
> On Tue, Mar 31, 2015 at 11:58 AM, Gabriel Reid 
> wrote:
>
>> Could you explain how you're using prepared statements for DDL statements?
>> Are you parameterizing parts of the DDL statements with question marks to
>> be filled in by the PreparedStatement parameters?
>>
>> On Tue, Mar 31, 2015 at 3:48 AM Nick Dimiduk  wrote:
>>
>> > Working on PHOENIX-971, I'm wondering what the expected behavior should
>> be
>> > for PreparedStatements created from CREATE TABLE sql with parameters.
>> > Calcite's Avatica depends on the statement to identify the parameter
>> types
>> > at compile time, and return meaningful values for method invocations on
>> > ParameterMetaData. It looks like Phoenix's CreateTableCompiler is
>> > recognizing the number of parameters in my sql, but is not inferring
>> type
>> > information.
>> >
>> > My question is: should Avatica be more flexible in allowing "fuzzy"
>> > signatures for PreparedStatements, or should Phoenix's
>> > StatementPlan#compile methods be determining parameter types in all
>> cases?
>> >
>> > Thanks,
>> > Nick
>> >
>>
>
>


[jira] [Resolved] (PHOENIX-1795) Set handlerCount, numQueues and maxQueueLength of index and metadata queues correctly

2015-04-01 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1795?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva resolved PHOENIX-1795.
-
Resolution: Fixed

> Set handlerCount, numQueues and maxQueueLength of index and metadata queues 
> correctly 
> --
>
> Key: PHOENIX-1795
> URL: https://issues.apache.org/jira/browse/PHOENIX-1795
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
> Fix For: 5.0.0, 4.3.1, 4.4.0
>
> Attachments: PHOENIX-1795-4.3.1-v2.patch, PHOENIX-1795-4.3.1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1287) Use the joni byte[] regex engine in place of j.u.regex

2015-04-01 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391069#comment-14391069
 ] 

James Taylor commented on PHOENIX-1287:
---

This is good work, [~shuxi0ng]. Try your performance test with a VARCHAR column 
that's not in the primary key constraint, as you might see different 
performance characteristics (as columns in the primary key constraint end up in 
the row key while other columns end up as KeyValues). I believe you can add a 
column to the CREATE TABLE call in performance.py. [~mujtabachohan] - any 
difficulty in doing that, and will the column automatically be populated? What 
controls the values with which it'll be populated?

[~shuxi0ng] - another option is to use our new Pherf tool - 
http://phoenix.apache.org/pherf.html, as this kind of performance comparison is 
exactly what it was designed for.

Please let me know if the pull request is up-to-date and I'll give it a review. 
Thanks so much for the excellent contributions.

> Use the joni byte[] regex engine in place of j.u.regex
> --
>
> Key: PHOENIX-1287
> URL: https://issues.apache.org/jira/browse/PHOENIX-1287
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Shuxiong Ye
>  Labels: gsoc2015
>
> See HBASE-11907. We'd get a 2x perf benefit plus it's driven off of byte[] 
> instead of strings.Thanks for the pointer, [~apurtell].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1071) Provide integration for exposing Phoenix tables as Spark RDDs

2015-04-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391048#comment-14391048
 ] 

ASF GitHub Bot commented on PHOENIX-1071:
-

Github user jmahonin commented on the pull request:

https://github.com/apache/phoenix/pull/59#issuecomment-88569122
  
I was able to spend a bit more time on the RelationProvider work. The DDL 
for custom providers doesn't work through the 'sql()' method on 
SparkSQLContext, perhaps by design or perhaps due to a bug, but the 'load()' 
method does work to create DataFrames using arbitrary data sources.

I'm still not entirely familiar with their API, and a lot of it is still 
very new and probably subject to churn, but I tried to base it on existing 
examples in the Spark repo, such as the JDBC and Parquet data sources.

I've got a new commit on a side branch here:

https://github.com/FileTrek/phoenix/commit/16f4540ef0889fc6534c91b8638c16001114ba1a

If you're all OK with those changes going in on this PR, I can push them up 
here. Otherwise, I'll stash them aside for a new ticket.


> Provide integration for exposing Phoenix tables as Spark RDDs
> -
>
> Key: PHOENIX-1071
> URL: https://issues.apache.org/jira/browse/PHOENIX-1071
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Andrew Purtell
>
> A core concept of Apache Spark is the resilient distributed dataset (RDD), a 
> "fault-tolerant collection of elements that can be operated on in parallel". 
> One can create RDDs referencing a dataset in any external storage system 
> offering a Hadoop InputFormat, like PhoenixInputFormat and 
> PhoenixOutputFormat. There could be opportunities for additional interesting 
> and deep integration. 
> Add the ability to save RDDs back to Phoenix with a {{saveAsPhoenixTable}} 
> action, implicitly creating necessary schema on demand.
> Add support for {{filter}} transformations that push predicates to the server.
> Add a new {{select}} transformation supporting a LINQ-like DSL, for example:
> {code}
> // Count the number of different coffee varieties offered by each
> // supplier from Guatemala
> phoenixTable("coffees")
> .select(c =>
> where(c.origin == "GT"))
> .countByKey()
> .foreach(r => println(r._1 + "=" + r._2))
> {code} 
> Support conversions between Scala and Java types and Phoenix table data.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-1071 Add phoenix-spark for Spark int...

2015-04-01 Thread jmahonin
Github user jmahonin commented on the pull request:

https://github.com/apache/phoenix/pull/59#issuecomment-88569122
  
I was able to spend a bit more time on the RelationProvider work. The DDL 
for custom providers doesn't work through the 'sql()' method on 
SparkSQLContext, perhaps by design or perhaps due to a bug, but the 'load()' 
method does work to create DataFrames using arbitrary data sources.

I'm still not entirely familiar with their API, and a lot of it is still 
very new and probably subject to churn, but I tried to base it on existing 
examples in the Spark repo, such as the JDBC and Parquet data sources.

I've got a new commit on a side branch here:

https://github.com/FileTrek/phoenix/commit/16f4540ef0889fc6534c91b8638c16001114ba1a

If you're all OK with those changes going in on this PR, I can push them up 
here. Otherwise, I'll stash them aside for a new ticket.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Comment Edited] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391006#comment-14391006
 ] 

Alicia Ying Shu edited comment on PHOENIX-1580 at 4/1/15 5:22 PM:
--

[~jamestaylor] [~maryannxue] Thanks a lot for reviewing. Attached a patch 
addressed the issues pointed out by James (use static factory methods for 
constructing nodes, add a wrapper method for ResultIterators) and removed 
out-dated checks pointed out by Maryann. 


was (Author: aliciashu):
[~jamestaylor] [~maryannxue] Thanks a lot for reviewing. Attached a patch 
addressed the issues pointed out by James and removed out-dated checks pointed 
out by Maryann. 

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, phoenix-1580-v1-wipe.patch, phoenix-1580.patch, 
> unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1781) Add Now()

2015-04-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391018#comment-14391018
 ] 

Hudson commented on PHOENIX-1781:
-

FAILURE: Integrated in Phoenix-master #650 (See 
[https://builds.apache.org/job/Phoenix-master/650/])
PHOENIX-1781 Add Now() (Alicia Ying Shu) (rajeshbabu: rev 
13d6296f7cab70e45a5fa9e579f81b2fa0dc03fd)
* 
phoenix-core/src/it/java/org/apache/phoenix/end2end/YearMonthSecondFunctionIT.java
* phoenix-core/src/main/java/org/apache/phoenix/expression/ExpressionType.java
* 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/NowFunction.java


> Add Now()
> -
>
> Key: PHOENIX-1781
> URL: https://issues.apache.org/jira/browse/PHOENIX-1781
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Fix For: 5.0.0, 4.4.0
>
> Attachments: Phoenix-1781-v1.patch, Phoenix-1781.patch
>
>
> Phoenix currently supports current_date() that returns a timestamp. 
> From Oracle doc:
> NOW() A timestamp value representing the current date and 
> time
> Many customers use Now() for current timestamp and curDate() for current 
> Date. Will implement Now() similar to Phoenix current_date() so that 
> customers do not need to change their queries. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Alicia Ying Shu (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alicia Ying Shu updated PHOENIX-1580:
-
Attachment: (was: Phoenix-1580-v2.patch)

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, phoenix-1580-v1-wipe.patch, phoenix-1580.patch, 
> unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Alicia Ying Shu (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alicia Ying Shu updated PHOENIX-1580:
-
Attachment: Phoenix-1580-v2.patch

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, phoenix-1580-v1-wipe.patch, phoenix-1580.patch, 
> unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391008#comment-14391008
 ] 

Devaraj Das commented on PHOENIX-1580:
--

[~aliciashu], the question is what the right way of designing/implementing it is 
for the long term, so that it also addresses maintainability, scalability, and 
extensibility (if we add more sophisticated queries in the future, can the 
current implementation handle them without a lot of rework?). From the 
commentary so far on this ticket, I think we need to step back and take a look 
at where the current patch is falling short of the feedback.

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, phoenix-1580-v1-wipe.patch, phoenix-1580.patch, 
> unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391006#comment-14391006
 ] 

Alicia Ying Shu commented on PHOENIX-1580:
--

[~jamestaylor] [~maryannxue] Thanks a lot for reviewing. Attached a patch 
addressed the issues pointed out by James and removed out-dated checks pointed 
out by Maryann. 

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, phoenix-1580-v1-wipe.patch, phoenix-1580.patch, 
> unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Alicia Ying Shu (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alicia Ying Shu updated PHOENIX-1580:
-
Attachment: Phoenix-1580-v2.patch

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> Phoenix-1580-v2.patch, phoenix-1580-v1-wipe.patch, phoenix-1580.patch, 
> unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1781) Add Now()

2015-04-01 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-1781:
-
Fix Version/s: 4.4.0
   5.0.0

> Add Now()
> -
>
> Key: PHOENIX-1781
> URL: https://issues.apache.org/jira/browse/PHOENIX-1781
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Fix For: 5.0.0, 4.4.0
>
> Attachments: Phoenix-1781-v1.patch, Phoenix-1781.patch
>
>
> Phoenix currently supports current_date() that returns a timestamp. 
> From Oracle doc:
> NOW() A timestamp value representing the current date and 
> time
> Many customers use Now() for current timestamp and curDate() for current 
> Date. Will implement Now() similar to Phoenix current_date() so that 
> customers do not need to change their queries. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14390990#comment-14390990
 ] 

Alicia Ying Shu commented on PHOENIX-1580:
--

[~maryannxue] Thanks for the help. In many database systems, UNION ALL is not 
represented as a select * from a dummy table (with UNION ALL). That is why I 
think our current representation does not have many problems.  

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> phoenix-1580-v1-wipe.patch, phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1287) Use the joni byte[] regex engine in place of j.u.regex

2015-04-01 Thread Shuxiong Ye (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14385351#comment-14385351
 ] 

Shuxiong Ye edited comment on PHOENIX-1287 at 4/1/15 4:04 PM:
--

I set up environment using my laptop.

I used performance.py to generate 10M rows and ran the following queries with 
the ByteBased and StringBased regex implementations, 5 times each.

{code}
Query # 6 - Like + Count - SELECT COUNT(1) FROM PERFORMANCE_1000 WHERE 
DOMAIN LIKE '%o%e%';
Query # 7 - Replace + Count - SELECT COUNT(1) FROM PERFORMANCE_1000 WHERE 
REGEXP_REPLACE(DOMAIN, '[a-z]+')='G.';
Query # 8 - Substr + Count - SELECT COUNT(1) FROM PERFORMANCE_1000 WHERE 
REGEXP_SUBSTR(DOMAIN, '[a-z]+')='oogle';
{code}

|| || ByteBased || StringBased || SpeedUp(String/Byte) ||
| Like | 8.644/ 7.995/ 7.868/ 7.865/ 7.763 | 9.803/ 9.497/ 8.706/ 8.796/ 8.805 | 1.136 |
| Replace | 11.725/11.071/11.199/10.988/10.970 | 10.576/10.495/10.271/10.354/10.178 | 0.927 |
| Substr | 8.380/ 8.107/ 8.248/ 8.319/ 8.302 | 9.478/ 9.227/ 9.294/ 9.024/ 9.158 | 1.116 |

Like and Substr show a slight speedup, while for Replace the byte-based 
implementation is slower than the string-based one. 

---

I finished RegexpSplitFunction. ByteBased seems to be a little faster than 
StringBased.
Query # 9 - Split + Count - SELECT COUNT(1) FROM PERFORMANCE_1000 WHERE 
ARRAY_ELEM(REGEXP_SPLIT(DOMAIN, '\\.'), 1)='Google';

ByteBased: 12.245 StringBased: 12.842 SpeedUp(String/Byte): 1.05

The following queries are added in performance.py:
{code}
queryex("6 - Like + Count", "SELECT COUNT(1) FROM %s WHERE DOMAIN LIKE 
'%%o%%e%%';" % (table))
queryex("7 - Replace + Count", "SELECT COUNT(1) FROM %s WHERE 
REGEXP_REPLACE(DOMAIN, '[a-z]+')='G.';" % (table))
queryex("8 - Substr + Count", "SELECT COUNT(1) FROM %s WHERE 
REGEXP_SUBSTR(DOMAIN, '[a-z]+')='oogle';" % (table))
queryex("9 - Split + Count", "SELECT COUNT(1) FROM %s WHERE 
ARRAY_ELEM(REGEXP_SPLIT(DOMAIN, '.'), 1)='Google';" % (table) )
{code}


was (Author: shuxi0ng):
I set up environment using my laptop.

I use performance.py to generate 10m rows, and run the following queries, using 
ByteBased and StringBased regex, 5 times each.

{code}
Query # 6 - Like + Count - SELECT COUNT(1) FROM PERFORMANCE_1000 WHERE 
DOMAIN LIKE '%o%e%';
Query # 7 - Replace + Count - SELECT COUNT(1) FROM PERFORMANCE_1000 WHERE 
REGEXP_REPLACE(DOMAIN, '[a-z]+')='G.';
Query # 8 - Substr + Count - SELECT COUNT(1) FROM PERFORMANCE_1000 WHERE 
REGEXP_SUBSTR(DOMAIN, '[a-z]+')='oogle';
{code}

|| || ByteBased || StringBased || SpeedUp(String/Byte) ||
| Like | 8.644/ 7.995/ 7.868/ 7.865/ 7.763 | 9.803/ 9.497/ 8.706/ 8.796/ 8.805 | 1.136 |
| Replace | 11.725/11.071/11.199/10.988/10.970 | 10.576/10.495/10.271/10.354/10.178 | 0.927 |
| Substr | 8.380/ 8.107/ 8.248/ 8.319/ 8.302 | 9.478/ 9.227/ 9.294/ 9.024/ 9.158 | 1.116 |

Like and Substr have slightly speedup, while for Replace, Byte-Based 
implementation is slower than String-Based one. 

> Use the joni byte[] regex engine in place of j.u.regex
> --
>
> Key: PHOENIX-1287
> URL: https://issues.apache.org/jira/browse/PHOENIX-1287
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Shuxiong Ye
>  Labels: gsoc2015
>
> See HBASE-11907. We'd get a 2x perf benefit plus it's driven off of byte[] 
> instead of strings.Thanks for the pointer, [~apurtell].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Maryann Xue (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14390765#comment-14390765
 ] 

Maryann Xue commented on PHOENIX-1580:
--

And didn't you by any chance find it weird yourself that the AST structure is 
inconsistent with the grammar defined in the lexer?
{code}
@@ -686,6 +695,36 @@ public class ParseNodeFactory {
 statement.hasSequence());
 }
 
+public SelectStatement select(List<SelectStatement> statements, List<OrderByNode> orderBy, LimitNode limit, int bindCount, boolean isAggregate) {
+boolean isUnion = statements.size() > 1;
+boolean hasSequence = false;
+for (int i = 0; !hasSequence && i < statements.size(); i++) {
+hasSequence = statements.get(i).hasSequence();
+}
+if (isUnion) {
+if (orderBy != null || limit != null) {
+// Push ORDER BY and LIMIT into sub selects and set 
isAggregate correctly
+for (int i = 0; i < statements.size(); i++) {
+SelectStatement statement = statements.get(i);
+statements.set(i, SelectStatement.create(statement, 
orderBy, limit, isAggregate || statement.isAggregate()));
+}
+}
+// Outer SELECT that does union will never be an aggregate
+isAggregate = false;
+List<SelectStatement> stmts = new ArrayList<>();
+for (int i= 1; i
{code}

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> phoenix-1580-v1-wipe.patch, phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Maryann Xue (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14390667#comment-14390667
 ] 

Maryann Xue commented on PHOENIX-1580:
--

[~ayingshu] Having an outer select has NOTHING to do with the 
TupleProjectionPlan and is more logically and conceptually reasonable than 
using the first query as the outer select and keeping the rest in a list. This is 
about how you would represent the query structure and is independent of how you 
would implement it.
And I do believe that with a little debugging and understanding ability, adding 
a special or null node as the FROM node of the outer query would not be a 
problem at all. 

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> phoenix-1580-v1-wipe.patch, phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1705) implement ARRAY_APPEND built in function

2015-04-01 Thread Dumindu Buddhika (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14390631#comment-14390631
 ] 

Dumindu Buddhika commented on PHOENIX-1705:
---

Thanks for the pointers [~jamestaylor].
I have followed the guidelines.
There are two problems.

{code}
// If the base type of an element is fixed width, make sure the element being appended will fit
if (getBaseType().isFixedWidth() && getArrayExpr().getMaxLength() != null
        && getElementExpr().getMaxLength() != null
        && getElementExpr().getMaxLength() > getArrayExpr().getMaxLength()) {
    throw new DataExceedsCapacityException("");
}
{code}
With this condition, the problem I mentioned above occurs when the following 
query is run:
{code}
SELECT region_name FROM regions WHERE ARRAY[2,3,4]=ARRAY_APPEND(ARRAY[2,3],4)
{code}
The code throws a DataExceedsCapacityException. The reason is that 
getElementExpr().getMaxLength() returns 10 while getArrayExpr().getMaxLength() 
returns 4. 



{code}
SELECT ARRAY_APPEND(chars,NULL) FROM regions WHERE region_name = 'SF Bay Area'
{code}
When the above query is run, it fails at the coercion check in the constructor. 
The reason is that in this example elementType comes through as "PVarbinary", so 
the coercion check fails (that is why I left "PVarbinary" out of the coercion 
check earlier, to support this case). But the expected behavior is that this 
function call should return the array itself. 

Maybe these kinds of queries are not used in practice, so should we ignore them?



> implement ARRAY_APPEND built in function
> 
>
> Key: PHOENIX-1705
> URL: https://issues.apache.org/jira/browse/PHOENIX-1705
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Dumindu Buddhika
>Assignee: Dumindu Buddhika
> Attachments: 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function1.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function10.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function2.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function3.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function4.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function5.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function6.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function7.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function8.patch, 
> PHOENIX-1705_implement_ARRAY_APPEND_built_in_function9.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


RE: [VOTE] Release of Apache Phoenix 4.3.1 RC0

2015-04-01 Thread Mark Tse
+1

Mark

-Original Message-
From: Samarth Jain [mailto:sama...@apache.org] 
Sent: March-31-15 7:38 PM
To: dev
Subject: [VOTE] Release of Apache Phoenix 4.3.1 RC0

Hello everyone,

This is a call for a vote on Apache Phoenix 4.3.1 RC0. This is a bug fix/patch 
release of Phoenix 4.3, compatible with the 0.98 branch of Apache HBase. The 
release includes both a source-only release and a convenience binary release.

For a complete list of changes, see:
https://raw.githubusercontent.com/apache/phoenix/4.3/CHANGES

The source tarball, including signatures, digests, etc can be found at:
https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.3.1-rc0/src/

The binary artifacts can be found at:
https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.3.1-rc0/bin/

Release artifacts are signed with the following key:
https://people.apache.org/keys/committer/mujtaba.asc

KEYS file available here:
https://dist.apache.org/repos/dist/release/phoenix/KEYS

The hash and tag to be voted upon:
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=79a810f9a253e6932351637b1fd218b07e2349bd
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;h=refs/tags/v4.3.1-rc0

Vote will be open for at least 72 hours. Please vote:

[ ] +1 approve
[ ] +0 no opinion
[ ] -1 disapprove (and reason why)

Thanks,
The Apache Phoenix Team


[jira] [Commented] (PHOENIX-1118) Provide a tool for visualizing Phoenix tracing information

2015-04-01 Thread Nishani (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14390236#comment-14390236
 ] 

Nishani  commented on PHOENIX-1118:
---

Hi Mujtaba,

Thanks for your reply and idea. *Jetty* is a Java HTTP (Web)
server and Java Servlet container. I'll check out the JFreeChart library.
Currently I'm looking for chart libraries and comparing them with respect
to the requirements of the project.

Thanks.
Regards,
Nishani

On Tue, Mar 31, 2015 at 11:26 PM, Mujtaba Chohan (JIRA) 




-- 
Best Regards,
Ayola Jayamaha
http://ayolajayamaha.blogspot.com/


> Provide a tool for visualizing Phoenix tracing information
> --
>
> Key: PHOENIX-1118
> URL: https://issues.apache.org/jira/browse/PHOENIX-1118
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Nishani 
>  Labels: Java, SQL, Visualization, gsoc2015, mentor
>
> Currently there's no means of visualizing the trace information provided by 
> Phoenix. We should provide some simple charting over our metrics tables. Take 
> a look at the following JIRA for sample queries: 
> https://issues.apache.org/jira/browse/PHOENIX-1115?focusedCommentId=14323151&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14323151



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [jira] [Commented] (PHOENIX-1118) Provide a tool for visualizing Phoenix tracing information

2015-04-01 Thread Ayola Jayamaha
Hi Mujtaba,

Thanks for your reply and idea. *Jetty* is a Java HTTP (Web)
server and Java Servlet container. I'll check out the JFreeChart library.
Currently I'm looking for chart libraries and comparing them with respect
to the requirements of the project.

Thanks.
Regards,
Nishani

On Tue, Mar 31, 2015 at 11:26 PM, Mujtaba Chohan (JIRA) 
wrote:

>
> [
> https://issues.apache.org/jira/browse/PHOENIX-1118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14388970#comment-14388970
> ]
>
> Mujtaba Chohan commented on PHOENIX-1118:
> -
>
> [~nishani] One easy way would be to add a web page backed by jetty which
> shows the visualization using JFreeChart. You can then add a script in the
> phoenix/bin directory to launch that jetty server to show a webpage with the
> visualization, all embedded in Phoenix.
>
> > Provide a tool for visualizing Phoenix tracing information
> > --
> >
> > Key: PHOENIX-1118
> > URL: https://issues.apache.org/jira/browse/PHOENIX-1118
> > Project: Phoenix
> >  Issue Type: Sub-task
> >Reporter: James Taylor
> >Assignee: Nishani
> >  Labels: Java, SQL, Visualization, gsoc2015, mentor
> >
> > Currently there's no means of visualizing the trace information provided
> by Phoenix. We should provide some simple charting over our metrics tables.
> Take a look at the following JIRA for sample queries:
> https://issues.apache.org/jira/browse/PHOENIX-1115?focusedCommentId=14323151&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14323151
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v6.3.4#6332)
>



-- 
Best Regards,
Ayola Jayamaha
http://ayolajayamaha.blogspot.com/
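
For reference, here is a minimal sketch of the embedded-Jetty-plus-JFreeChart 
approach suggested above. The class name, the port, and the hard-coded dataset 
are illustrative assumptions only; a real tracing page would populate the chart 
from the Phoenix trace/metrics tables.

{code}
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.AbstractHandler;
import org.jfree.chart.ChartFactory;
import org.jfree.chart.ChartUtilities;
import org.jfree.chart.JFreeChart;
import org.jfree.chart.plot.PlotOrientation;
import org.jfree.data.category.DefaultCategoryDataset;

// Hypothetical class for illustration; not part of Phoenix.
public class TraceChartServer {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8888); // port chosen arbitrarily
        server.setHandler(new AbstractHandler() {
            @Override
            public void handle(String target, Request baseRequest,
                    HttpServletRequest request, HttpServletResponse response)
                    throws java.io.IOException {
                // A real tool would read these values from the metrics tables.
                DefaultCategoryDataset dataset = new DefaultCategoryDataset();
                dataset.addValue(120, "duration (ms)", "scan");
                dataset.addValue(45, "duration (ms)", "aggregate");
                JFreeChart chart = ChartFactory.createBarChart("Trace spans",
                        "operation", "duration (ms)", dataset,
                        PlotOrientation.VERTICAL, false, false, false);
                // Render the chart directly to the HTTP response as a PNG.
                response.setContentType("image/png");
                ChartUtilities.writeChartAsPNG(response.getOutputStream(), chart, 640, 480);
                baseRequest.setHandled(true);
            }
        });
        server.start();
        server.join();
    }
}
{code}

A script in phoenix/bin could launch such a server, and pointing a browser at 
the chosen port would then return the rendered chart.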


[jira] [Commented] (PHOENIX-1798) UnsupportedOperationException throws from BaseResultIterators.getIterators

2015-04-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14390202#comment-14390202
 ] 

ASF GitHub Bot commented on PHOENIX-1798:
-

GitHub user StormAll opened a pull request:

https://github.com/apache/phoenix/pull/62

PHOENIX-1798 change Collections.empytList() to Lists.newArrayList()

This change is related to PHOENIX-1798.
The issue occurs when the SELECT statement queries a lot of data.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/StormAll/phoenix master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/62.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #62


commit e0db28e0bbd4588d209b65ac8d9525349e84e919
Author: Cen Qi 
Date:   2015-04-01T07:55:29Z

PHOENIX-1798 change Collections.empytList() to Lists.newArrayList()




> UnsupportedOperationException throws from BaseResultIterators.getIterators
> --
>
> Key: PHOENIX-1798
> URL: https://issues.apache.org/jira/browse/PHOENIX-1798
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.3.0
>Reporter: Cen Qi
>   Original Estimate: 4h
>  Remaining Estimate: 4h
>
> If a StaleRegionBoundaryCacheException is thrown, concatIterators is 
> reassigned to Collections.emptyList(). When the add method is then called on 
> it again, it throws an UnsupportedOperationException.
> Exception in thread "main" org.apache.phoenix.exception.PhoenixIOException
> at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:589)
> at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
> at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
> at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
> at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
> at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
> at 
> org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:764)
> at PhoenixDemo.main(PhoenixDemo.java:12)
> Caused by: java.lang.UnsupportedOperationException
> at java.util.AbstractList.add(AbstractList.java:148)
> at java.util.AbstractList.add(AbstractList.java:108)
> at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:535)
> ... 7 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-1798 change Collections.empytList() ...

2015-04-01 Thread StormAll
GitHub user StormAll opened a pull request:

https://github.com/apache/phoenix/pull/62

PHOENIX-1798 change Collections.empytList() to Lists.newArrayList()

This change is related to PHOENIX-1798.
The issue occurs when the SELECT statement queries a lot of data.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/StormAll/phoenix master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/62.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #62


commit e0db28e0bbd4588d209b65ac8d9525349e84e919
Author: Cen Qi 
Date:   2015-04-01T07:55:29Z

PHOENIX-1798 change Collections.empytList() to Lists.newArrayList()




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14390184#comment-14390184
 ] 

James Taylor commented on PHOENIX-1580:
---

bq. I have to say if we have taken my very early approach of passing down 
rowProjector, we can simplify the approach quite bit. 
I think what you meant to say was thank you so much [~maryannxue] for all your 
help in getting this UNION ALL implementation closer to something that actually 
works and will scale beyond a few thousand rows.

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> phoenix-1580-v1-wipe.patch, phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14390178#comment-14390178
 ] 

James Taylor edited comment on PHOENIX-1580 at 4/1/15 8:00 AM:
---

[~ayingshu] - I think you forgot to attach another patch.


was (Author: jamestaylor):
[~ayingshu] - are you aware that every JIRA edit generates another email to the 
dev list?

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> phoenix-1580-v1-wipe.patch, phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14390178#comment-14390178
 ] 

James Taylor edited comment on PHOENIX-1580 at 4/1/15 7:56 AM:
---

[~ayingshu] - are you aware that every JIRA edit generates another email to the 
dev list?


was (Author: jamestaylor):
[~ayingshu] - are you aware that every JIRA edit generates another email to the

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> phoenix-1580-v1-wipe.patch, phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14390178#comment-14390178
 ] 

James Taylor edited comment on PHOENIX-1580 at 4/1/15 7:56 AM:
---

[~ayingshu] - are you aware that every JIRA edit generates another email to the


was (Author: jamestaylor):
[~ayingshu] - are you aware that every JIRA edit generates another

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> phoenix-1580-v1-wipe.patch, phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14390178#comment-14390178
 ] 

James Taylor edited comment on PHOENIX-1580 at 4/1/15 7:55 AM:
---

[~ayingshu] - are you aware that every JIRA edit


was (Author: jamestaylor):
[~ayingshu] - are you aware that

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> phoenix-1580-v1-wipe.patch, phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14390178#comment-14390178
 ] 

James Taylor edited comment on PHOENIX-1580 at 4/1/15 7:55 AM:
---

[~ayingshu] - are you aware that every JIRA edit generates another


was (Author: jamestaylor):
[~ayingshu] - are you aware that every JIRA edit

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> phoenix-1580-v1-wipe.patch, phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14390178#comment-14390178
 ] 

James Taylor edited comment on PHOENIX-1580 at 4/1/15 7:54 AM:
---

[~ayingshu] - are you aware that


was (Author: jamestaylor):
[~ayingshu] - are you

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> phoenix-1580-v1-wipe.patch, phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14390178#comment-14390178
 ] 

James Taylor commented on PHOENIX-1580:
---

[~ayingshu] - are you

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> phoenix-1580-v1-wipe.patch, phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14390020#comment-14390020
 ] 

Alicia Ying Shu edited comment on PHOENIX-1580 at 4/1/15 7:28 AM:
--

[~jamestaylor] [~maryannxue] The temp schema is a workaround for the Aggregate 
projection, replacing rowProjector. An OrderBy expression is normally a column 
name, so it should be the same whether it is evaluated on the server or on the 
client. The temp schema is already built with the projection names (column 
names) as specified in the SQL. All UnionAllIT tests passed with the 
plan-compiled Order By and Limit. 

To wrap another Select over the current Union All selects, we need to specify a 
table. A dummy or null table did not work, since the parser needs a real table 
to resolve column information etc. I have to say that if we had taken my very 
early approach of passing down rowProjector, we could have simplified this 
quite a bit. Given that column names are used to construct the temp schema, 
there is no need to wrap another select over. 



was (Author: aliciashu):
[~jamestaylor] [~maryannxue]  The temp schema is to work around Aggregate 
projection to replace rowProjector. OrderBy expression is normally column name, 
it should be the same no matter in the server or in the client. It applies to 
alias as well. The temp schema is already built with the projection names 
(column names) as specified in the SQL. I did not see an issue here. All 
UnionAllIT tests passed with Plan compiled Order By and Limit. 

Wrapping another Select over current Union All selects we need to specify a 
table. A dummy table or Null table did not work since parser needs a real table 
to resolve column information etc. I have to say if we have taken my very early 
approach of passing down rowProjector, we can simplify the approach quite bit. 
Given that column names are used to construct the temp schema, no need to wrap 
another select over. 


> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> phoenix-1580-v1-wipe.patch, phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1580) Support UNION ALL

2015-04-01 Thread Alicia Ying Shu (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14390020#comment-14390020
 ] 

Alicia Ying Shu edited comment on PHOENIX-1580 at 4/1/15 7:15 AM:
--

[~jamestaylor] [~maryannxue]  The temp schema is to work around Aggregate 
projection to replace rowProjector. OrderBy expression is normally column name, 
it should be the same no matter in the server or in the client. It applies to 
alias as well. The temp schema is already built with the projection names 
(column names) as specified in the SQL. I did not see an issue here. All 
UnionAllIT tests passed with Plan compiled Order By and Limit. 

Wrapping another Select over current Union All selects we need to specify a 
table. A dummy table or Null table did not work since parser needs a real table 
to resolve column information etc. I have to say if we have taken my very early 
approach of passing down rowProjector, we can simplify the approach quite bit. 
Given that column names are used to construct the temp schema, no need to wrap 
another select over. 



was (Author: aliciashu):
[~jamestaylor] [~maryannxue]  The temp schema is to work around Aggregate 
projection to replace rowProjector. OrderBy expression is normally column name, 
it should be the same no matter in the server or in the client. It applies to 
alias as well. I did not see an issue here. All UnionAllIT tests passed with 
Plan compiled Order By and Limit. 

Wrapping another Select over current Union All selects we need to specify a 
table. A dummy table or Null table did not work since parser needs a real table 
to resolve column information etc. I have to say if we have taken my very early 
approach of passing down rowProjector, we can simplify the approach quite bit. 
But I think it is ok for now since column name should be the same for the 
server side or the client side which should be taken from the submitted SQL. No 
need to wrap another select over. 


> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: PHOENIX-1580-grammar.patch, Phoenix-1580-v1.patch, 
> phoenix-1580-v1-wipe.patch, phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1798) UnsupportedOperationException throws from BaseResultIterators.getIterators

2015-04-01 Thread Cen Qi (JIRA)
Cen Qi created PHOENIX-1798:
---

 Summary: UnsupportedOperationException throws from 
BaseResultIterators.getIterators
 Key: PHOENIX-1798
 URL: https://issues.apache.org/jira/browse/PHOENIX-1798
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.3.0
Reporter: Cen Qi


If a StaleRegionBoundaryCacheException is thrown, concatIterators is reassigned 
to Collections.emptyList(). When the add method is then called on it again, it 
throws an UnsupportedOperationException.

Exception in thread "main" org.apache.phoenix.exception.PhoenixIOException
at 
org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:589)
at 
org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
at 
org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
at 
org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
at 
org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
at 
org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
at 
org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:764)
at PhoenixDemo.main(PhoenixDemo.java:12)
Caused by: java.lang.UnsupportedOperationException
at java.util.AbstractList.add(AbstractList.java:148)
at java.util.AbstractList.add(AbstractList.java:108)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:535)
... 7 more
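
The root cause is the usual immutable-list trap: Collections.emptyList() 
returns a shared, unmodifiable list, so any later add() on it fails. Below is a 
minimal standalone sketch of the failure and of the fix proposed in the pull 
request (switching to Guava's Lists.newArrayList()); it is not the actual 
Phoenix code.

{code}
import java.util.Collections;
import java.util.List;

import com.google.common.collect.Lists;

public class EmptyListTrap {
    public static void main(String[] args) {
        // What effectively happens when a StaleRegionBoundaryCacheException is hit:
        List<String> concatIterators = Collections.emptyList(); // unmodifiable
        try {
            concatIterators.add("iterator"); // throws UnsupportedOperationException
        } catch (UnsupportedOperationException e) {
            System.out.println("add() on Collections.emptyList() fails: " + e);
        }

        // Proposed fix: start from a mutable list instead.
        List<String> mutable = Lists.newArrayList(); // Guava
        mutable.add("iterator"); // succeeds
        System.out.println("mutable list size: " + mutable.size());
    }
}
{code}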




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)