[jira] [Commented] (PHOENIX-1746) Pass through guidepost config params on UPDATE STATISTICS call

2015-03-22 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375463#comment-14375463
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-1746:
-

LGTM.

> Pass through guidepost config params on UPDATE STATISTICS call
> --
>
> Key: PHOENIX-1746
> URL: https://issues.apache.org/jira/browse/PHOENIX-1746
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: James Taylor
>  Labels: 4.3.1
> Attachments: PHOENIX-1746.patch
>
>
> We should pass through the client-side properties that drive the guidepost 
> width when UPDATE STATISTICS is manually invoked. That'll allow easier 
> experimentation without requiring a config change plus a rolling restart.
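The override pattern described above can be sketched as follows. The property name `phoenix.stats.guidepost.width` is Phoenix's guidepost-width setting; the merge helper and default value shown here are illustrative assumptions, not Phoenix internals:

```python
# Minimal sketch: statement-level (client) properties take precedence over
# cluster-wide defaults, so changing the guidepost width for one
# UPDATE STATISTICS run needs no config change or rolling restart.
SERVER_DEFAULTS = {
    "phoenix.stats.guidepost.width": 100 * 1024 * 1024,  # assumed 100 MB default
}

def effective_stats_config(client_props):
    """Client-supplied properties win over server-side defaults."""
    merged = dict(SERVER_DEFAULTS)
    merged.update({k: v for k, v in client_props.items() if v is not None})
    return merged

# Experimenting with a 10 MB guidepost width from the client side:
cfg = effective_stats_config({"phoenix.stats.guidepost.width": 10 * 1024 * 1024})
```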



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1750) Some built-in functions used in expressions surface internal implementation as column alias, which causes GROUP BY to fail

2015-03-22 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1750:
--
Assignee: Samarth Jain

> Some built-in functions used in expressions surface internal implementation 
> as column alias, which causes GROUP BY to fail
> --
>
> Key: PHOENIX-1750
> URL: https://issues.apache.org/jira/browse/PHOENIX-1750
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Serhiy Bilousov
>Assignee: Samarth Jain
>
> Consider this query:
> {noformat}
> DROP TABLE IF EXISTS t1;
> CREATE TABLE t1 (ts TIMESTAMP not null primary key);
> UPSERT INTO t1 VALUES(to_date('2015-03-17 03:05:45.000'));
> UPSERT INTO t1 VALUES(to_date('2015-03-18 03:05:45.000'));
> UPSERT INTO t1 VALUES(to_date('2015-03-18 03:15:45.000'));
> UPSERT INTO t1 VALUES(to_date('2015-03-16 04:05:45.000'));
> UPSERT INTO t1 VALUES(to_date('2015-03-18 05:25:45.000'));
> UPSERT INTO t1 VALUES(to_date('2015-03-18 05:35:45.000'));
> SELECT * FROM t1;
> ++
> |   TS   |
> ++
> | 2015-03-16 04:05:45.0  |
> | 2015-03-17 03:05:45.0  |
> | 2015-03-18 03:05:45.0  |
> | 2015-03-18 03:15:45.0  |
> | 2015-03-18 05:25:45.0  |
> | 2015-03-18 05:35:45.0  |
> ++
> select cast(trunc(ts,'HOUR') AS TIMESTAMP), count(*) from t1 group by 
> cast(trunc(ts,'HOUR') AS TIMESTAMP);
> Error: ERROR 1018 (42Y27): Aggregate may not contain columns not in GROUP BY. 
> TO_TIMESTAMP(FLOOR(TO_DATE(TS))) (state=42Y27,code=1018)
>  select cast(trunc(ts,'HOUR') AS TIMESTAMP) AS dt, count(*) AS cnt from t1 
> group by cast(trunc(ts,'HOUR') AS TIMESTAMP);
> Error: ERROR 1018 (42Y27): Aggregate may not contain columns not in GROUP BY. 
> TO_TIMESTAMP(FLOOR(TO_DATE(TS))) (state=42Y27,code=1018)
> select cast(trunc(ts,'HOUR') AS TIMESTAMP) AS dt, count(*) AS cnt from t1 
> group by dt;
> Error: ERROR 1018 (42Y27): Aggregate may not contain columns not in GROUP BY. 
> TO_TIMESTAMP(FLOOR(TO_DATE(TS))) (state=42Y27,code=1018)
> {noformat}
> but then, by accident, I ran
> {noformat}
>  select cast(trunc(ts,'HOUR') AS TIMESTAMP) AS dt, count(*) AS cnt from t1 
> group by trunc(ts,'HOUR');
> ++--+
> |   DT   | CNT  |
> ++--+
> | 2015-03-16 04:00:00.0  | 1|
> | 2015-03-17 03:00:00.0  | 1|
> | 2015-03-18 03:00:00.0  | 2|
> | 2015-03-18 05:00:00.0  | 2|
> ++--+
> {noformat}
> So I am not sure how to properly phrase it, but I still decided to create a 
> JIRA since there is definitely something going on there :)
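The error message above suggests the SELECT expression is internally rewritten (CAST(TRUNC(...)) becomes TO_TIMESTAMP(FLOOR(TO_DATE(...)))) before being matched against the GROUP BY keys. A toy model, not Phoenix internals, of why the match must compare both sides in the same form:

```python
# Toy model: if only one side of the comparison is normalized, the GROUP BY
# match fails even though both expressions are identical as written.
REWRITES = {
    "CAST(TRUNC(TS,'HOUR') AS TIMESTAMP)": "TO_TIMESTAMP(FLOOR(TO_DATE(TS)))",
}

def normalize(expr):
    return REWRITES.get(expr, expr)

def is_grouped(select_expr, group_by_keys, normalize_both=True):
    keys = {normalize(k) if normalize_both else k for k in group_by_keys}
    return normalize(select_expr) in keys

expr = "CAST(TRUNC(TS,'HOUR') AS TIMESTAMP)"
is_grouped(expr, [expr], normalize_both=False)  # False: mirrors the bug above
is_grouped(expr, [expr], normalize_both=True)   # True: both sides normalized
```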





[jira] [Commented] (PHOENIX-1750) Some built-in functions used in expressions surface internal implementation as column alias, which causes GROUP BY to fail

2015-03-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375423#comment-14375423
 ] 

James Taylor commented on PHOENIX-1750:
---

[~samarthjain] - would you mind investigating, as you're pretty familiar with 
that code? Depending on what you find out, we may want to include a fix for 
this in 4.3.1.

> Some built-in functions used in expressions surface internal implementation 
> as column alias, which causes GROUP BY to fail
> --
>
> Key: PHOENIX-1750
> URL: https://issues.apache.org/jira/browse/PHOENIX-1750
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Serhiy Bilousov
>Assignee: Samarth Jain
>
> Consider this query:
> {noformat}
> DROP TABLE IF EXISTS t1;
> CREATE TABLE t1 (ts TIMESTAMP not null primary key);
> UPSERT INTO t1 VALUES(to_date('2015-03-17 03:05:45.000'));
> UPSERT INTO t1 VALUES(to_date('2015-03-18 03:05:45.000'));
> UPSERT INTO t1 VALUES(to_date('2015-03-18 03:15:45.000'));
> UPSERT INTO t1 VALUES(to_date('2015-03-16 04:05:45.000'));
> UPSERT INTO t1 VALUES(to_date('2015-03-18 05:25:45.000'));
> UPSERT INTO t1 VALUES(to_date('2015-03-18 05:35:45.000'));
> SELECT * FROM t1;
> ++
> |   TS   |
> ++
> | 2015-03-16 04:05:45.0  |
> | 2015-03-17 03:05:45.0  |
> | 2015-03-18 03:05:45.0  |
> | 2015-03-18 03:15:45.0  |
> | 2015-03-18 05:25:45.0  |
> | 2015-03-18 05:35:45.0  |
> ++
> select cast(trunc(ts,'HOUR') AS TIMESTAMP), count(*) from t1 group by 
> cast(trunc(ts,'HOUR') AS TIMESTAMP);
> Error: ERROR 1018 (42Y27): Aggregate may not contain columns not in GROUP BY. 
> TO_TIMESTAMP(FLOOR(TO_DATE(TS))) (state=42Y27,code=1018)
>  select cast(trunc(ts,'HOUR') AS TIMESTAMP) AS dt, count(*) AS cnt from t1 
> group by cast(trunc(ts,'HOUR') AS TIMESTAMP);
> Error: ERROR 1018 (42Y27): Aggregate may not contain columns not in GROUP BY. 
> TO_TIMESTAMP(FLOOR(TO_DATE(TS))) (state=42Y27,code=1018)
> select cast(trunc(ts,'HOUR') AS TIMESTAMP) AS dt, count(*) AS cnt from t1 
> group by dt;
> Error: ERROR 1018 (42Y27): Aggregate may not contain columns not in GROUP BY. 
> TO_TIMESTAMP(FLOOR(TO_DATE(TS))) (state=42Y27,code=1018)
> {noformat}
> but then, by accident, I ran
> {noformat}
>  select cast(trunc(ts,'HOUR') AS TIMESTAMP) AS dt, count(*) AS cnt from t1 
> group by trunc(ts,'HOUR');
> ++--+
> |   DT   | CNT  |
> ++--+
> | 2015-03-16 04:00:00.0  | 1|
> | 2015-03-17 03:00:00.0  | 1|
> | 2015-03-18 03:00:00.0  | 2|
> | 2015-03-18 05:00:00.0  | 2|
> ++--+
> {noformat}
> So I am not sure how to properly phrase it, but I still decided to create a 
> JIRA since there is definitely something going on there :)





[jira] [Updated] (PHOENIX-1746) Pass through guidepost config params on UPDATE STATISTICS call

2015-03-22 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1746:
--
Attachment: PHOENIX-1746.patch

[~samarthjain] - please review.

> Pass through guidepost config params on UPDATE STATISTICS call
> --
>
> Key: PHOENIX-1746
> URL: https://issues.apache.org/jira/browse/PHOENIX-1746
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: James Taylor
>  Labels: 4.3.1
> Attachments: PHOENIX-1746.patch
>
>
> We should pass through the client-side properties that drive the guidepost 
> width when UPDATE STATISTICS is manually invoked. That'll allow easier 
> experimentation without requiring a config change plus a rolling restart.





[jira] [Issue Comment Deleted] (PHOENIX-1118) Provide a tool for visualizing Phoenix tracing information

2015-03-22 Thread Nishani (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishani  updated PHOENIX-1118:
--
Comment: was deleted

(was: Hi Rajeshbabu,

Thanks for the reply. It helps clarify a matter I was unaware of, since I'm 
new to Apache Phoenix.

Thanks.

BR,
Nishani
http://ayolajayamaha.blogspot.com)

> Provide a tool for visualizing Phoenix tracing information
> --
>
> Key: PHOENIX-1118
> URL: https://issues.apache.org/jira/browse/PHOENIX-1118
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Nishani 
>  Labels: Java, SQL, Visualization, gsoc2015, mentor
>
> Currently there's no means of visualizing the trace information provided by 
> Phoenix. We should provide some simple charting over our metrics tables. Take 
> a look at the following JIRA for sample queries: 
> https://issues.apache.org/jira/browse/PHOENIX-1115?focusedCommentId=14323151&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14323151






[jira] [Commented] (PHOENIX-1118) Provide a tool for visualizing Phoenix tracing information

2015-03-22 Thread Nishani (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375376#comment-14375376
 ] 

Nishani  commented on PHOENIX-1118:
---

Hi Rajeshbabu,

Thanks for the reply. It helps clarify a matter I was unaware of, since I'm 
new to Apache Phoenix.

Thanks.

BR,
Nishani
http://ayolajayamaha.blogspot.com

> Provide a tool for visualizing Phoenix tracing information
> --
>
> Key: PHOENIX-1118
> URL: https://issues.apache.org/jira/browse/PHOENIX-1118
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Nishani 
>  Labels: Java, SQL, Visualization, gsoc2015, mentor
>
> Currently there's no means of visualizing the trace information provided by 
> Phoenix. We should provide some simple charting over our metrics tables. Take 
> a look at the following JIRA for sample queries: 
> https://issues.apache.org/jira/browse/PHOENIX-1115?focusedCommentId=14323151&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14323151





[jira] [Commented] (PHOENIX-1118) Provide a tool for visualizing Phoenix tracing information

2015-03-22 Thread Nishani (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375377#comment-14375377
 ] 

Nishani  commented on PHOENIX-1118:
---

Hi Rajeshbabu,

Thanks for the reply. It helps clarify a matter I was unaware of, since I'm 
new to Apache Phoenix.

Thanks.

BR,
Nishani
http://ayolajayamaha.blogspot.com

> Provide a tool for visualizing Phoenix tracing information
> --
>
> Key: PHOENIX-1118
> URL: https://issues.apache.org/jira/browse/PHOENIX-1118
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Nishani 
>  Labels: Java, SQL, Visualization, gsoc2015, mentor
>
> Currently there's no means of visualizing the trace information provided by 
> Phoenix. We should provide some simple charting over our metrics tables. Take 
> a look at the following JIRA for sample queries: 
> https://issues.apache.org/jira/browse/PHOENIX-1115?focusedCommentId=14323151&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14323151





[jira] [Commented] (PHOENIX-1118) Provide a tool for visualizing Phoenix tracing information

2015-03-22 Thread Nishani (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375374#comment-14375374
 ] 

Nishani  commented on PHOENIX-1118:
---

Hi Rajeshbabu,

Thanks for the reply. It helps clarify a matter I was unaware of, since I'm 
new to Apache Phoenix.

Thanks.

BR,
Nishani
http://ayolajayamaha.blogspot.com

> Provide a tool for visualizing Phoenix tracing information
> --
>
> Key: PHOENIX-1118
> URL: https://issues.apache.org/jira/browse/PHOENIX-1118
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Nishani 
>  Labels: Java, SQL, Visualization, gsoc2015, mentor
>
> Currently there's no means of visualizing the trace information provided by 
> Phoenix. We should provide some simple charting over our metrics tables. Take 
> a look at the following JIRA for sample queries: 
> https://issues.apache.org/jira/browse/PHOENIX-1115?focusedCommentId=14323151&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14323151





[jira] [Commented] (PHOENIX-1118) Provide a tool for visualizing Phoenix tracing information

2015-03-22 Thread Nishani (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375373#comment-14375373
 ] 

Nishani  commented on PHOENIX-1118:
---

Hi Rajeshbabu,

Thanks for the reply. It helps clarify a matter I was unaware of, since I'm 
new to Apache Phoenix.

Thanks.

BR,
Nishani
http://ayolajayamaha.blogspot.com

> Provide a tool for visualizing Phoenix tracing information
> --
>
> Key: PHOENIX-1118
> URL: https://issues.apache.org/jira/browse/PHOENIX-1118
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Nishani 
>  Labels: Java, SQL, Visualization, gsoc2015, mentor
>
> Currently there's no means of visualizing the trace information provided by 
> Phoenix. We should provide some simple charting over our metrics tables. Take 
> a look at the following JIRA for sample queries: 
> https://issues.apache.org/jira/browse/PHOENIX-1115?focusedCommentId=14323151&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14323151





[jira] [Commented] (PHOENIX-1118) Provide a tool for visualizing Phoenix tracing information

2015-03-22 Thread Nishani (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375375#comment-14375375
 ] 

Nishani  commented on PHOENIX-1118:
---

Hi Rajeshbabu,

Thanks for the reply. It helps clarify a matter I was unaware of, since I'm 
new to Apache Phoenix.

Thanks.

BR,
Nishani
http://ayolajayamaha.blogspot.com

> Provide a tool for visualizing Phoenix tracing information
> --
>
> Key: PHOENIX-1118
> URL: https://issues.apache.org/jira/browse/PHOENIX-1118
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Nishani 
>  Labels: Java, SQL, Visualization, gsoc2015, mentor
>
> Currently there's no means of visualizing the trace information provided by 
> Phoenix. We should provide some simple charting over our metrics tables. Take 
> a look at the following JIRA for sample queries: 
> https://issues.apache.org/jira/browse/PHOENIX-1115?focusedCommentId=14323151&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14323151





[jira] [Commented] (PHOENIX-1768) GROUP BY should support column position as well as column alias

2015-03-22 Thread Serhiy Bilousov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375319#comment-14375319
 ] 

Serhiy Bilousov commented on PHOENIX-1768:
--

[~jamestaylor] I think it would be helpful if we aligned on the vision here:

a) the SQL standard (and if so, which standard exactly)
b) a specific database (PostgreSQL, Oracle, MSSQL, MySQL)
c) the best of all worlds (pick the best and make it work in Phoenix)

If you ask me, I would go with c). This particular JIRA was filed mostly because 
I have an issue where I cannot really use an expression in GROUP BY, because it 
gets rewritten (see [PHOENIX-1750|https://issues.apache.org/jira/browse/PHOENIX-1750]). 
Plus, it would bring Phoenix closer to PostgreSQL.

Thank you

> GROUP BY should support column position as well as column alias
> ---
>
> Key: PHOENIX-1768
> URL: https://issues.apache.org/jira/browse/PHOENIX-1768
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Serhiy Bilousov
>
> In PostgreSQL (and many other DBs) you can specify not only a column name in 
> the GROUP BY but also the column number (its position in the SELECT list) as 
> well as a column alias.
> see:
> http://www.postgresql.org/docs/9.4/static/queries-order.html
> http://www.postgresql.org/docs/9.4/static/sql-select.html#SQL-GROUPBY
> Adding such support would be very helpful and sometimes necessary.
> I can provide real query examples if required, but basically we want 
> something like this. Given the query
> SELECT a, b, TRUNC(current_date(),'HOUR') AS date_truncated FROM table 
> we want 
> GROUP BY 1, 2
> Having just the column number would cover both, but having the column alias 
> would make queries more readable and human friendly. Plus, it moves one little 
> step closer to PostgreSQL and the SQL standard.
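The resolution step the request describes can be sketched as a rewrite pass before grouping: each GROUP BY item that is a 1-based ordinal or a select-list alias is replaced by the underlying expression. All names here are illustrative:

```python
# Sketch: resolve GROUP BY ordinals (1-based select positions) and aliases
# into the underlying select-list expressions before grouping.
def resolve_group_by(select_list, group_by):
    """select_list: list of (expression, alias_or_None); group_by: list of tokens."""
    aliases = {alias: expr for expr, alias in select_list if alias}
    resolved = []
    for item in group_by:
        if item.isdigit():                 # ordinal, e.g. GROUP BY 1, 2
            resolved.append(select_list[int(item) - 1][0])
        elif item in aliases:              # alias, e.g. GROUP BY date_truncated
            resolved.append(aliases[item])
        else:                              # plain expression, left as-is
            resolved.append(item)
    return resolved

keys = resolve_group_by(
    [("a", None), ("b", None), ("TRUNC(current_date(),'HOUR')", "date_truncated")],
    ["1", "2", "date_truncated"])
# keys == ["a", "b", "TRUNC(current_date(),'HOUR')"]
```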





[jira] [Commented] (PHOENIX-1611) Support ABS function

2015-03-22 Thread renmin (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375279#comment-14375279
 ] 

renmin commented on PHOENIX-1611:
-

Perhaps SeonghwanMoon is busy doing other things.
Thanks.

> Support ABS function 
> -
>
> Key: PHOENIX-1611
> URL: https://issues.apache.org/jira/browse/PHOENIX-1611
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.0.0
> Environment: Support ABS Function. 
>Reporter: SeonghwanMoon
>Assignee: renmin
>






[jira] [Updated] (PHOENIX-1611) Support ABS function

2015-03-22 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1611:
--
Assignee: renmin  (was: SeonghwanMoon)

> Support ABS function 
> -
>
> Key: PHOENIX-1611
> URL: https://issues.apache.org/jira/browse/PHOENIX-1611
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.0.0
> Environment: Support ABS Function. 
>Reporter: SeonghwanMoon
>Assignee: renmin
>






[jira] [Commented] (PHOENIX-1611) Support ABS function

2015-03-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375178#comment-14375178
 ] 

James Taylor commented on PHOENIX-1611:
---

Sure, [~liurm] - looks like [~Seonghwan] is off doing something else now, so go 
for it.

> Support ABS function 
> -
>
> Key: PHOENIX-1611
> URL: https://issues.apache.org/jira/browse/PHOENIX-1611
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.0.0
> Environment: Support ABS Function. 
>Reporter: SeonghwanMoon
>Assignee: SeonghwanMoon
>






[jira] [Commented] (PHOENIX-1722) Speedup CONVERT_TZ function

2015-03-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375176#comment-14375176
 ] 

James Taylor commented on PHOENIX-1722:
---

If [~gabriel.reid] is good with the pull request, [~samarthjain], this would be 
fine for 4.3.1 too, IMHO.

> Speedup CONVERT_TZ function
> ---
>
> Key: PHOENIX-1722
> URL: https://issues.apache.org/jira/browse/PHOENIX-1722
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Vaclav Loffelmann
>Assignee: Vaclav Loffelmann
>Priority: Minor
>
> We have a use case that is sensitive to the performance of this function, and 
> I'd like to benefit from using the Joda-Time lib.





[jira] [Comment Edited] (PHOENIX-385) Support negative literal directly

2015-03-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375175#comment-14375175
 ] 

James Taylor edited comment on PHOENIX-385 at 3/22/15 9:17 PM:
---

[~dhacker1341] - can you take a look at this one? It's likely the root cause of 
the \-1.00 issue you mentioned last week. See ParseNodeFactory.negate(ParseNode 
child) and the hack to recognize expressions like \-1*1. A reference to a 
negative literal ends up flowing through this code and turning it into \-1*xxx. 
If we recognize negative literals directly, we could remove that hack. The kinds 
of expressions that become problematic are SELECT 3\-1, 3\-\-1. I'm sure there's 
an easy solution, but I never had the time to figure it out.


was (Author: jamestaylor):
[~dhacker1341] - can you take a look at this one? It's likely the root cause of 
the \-1.00 issue you mentioned last week. See ParseNodeFactory.negate(ParseNode 
child) and the hack to recognize expressions like -1*1. A reference to a 
negative literal ends up flowing through this code and turning it into -1*xxx. 
If we recognize negative literals directly, we could remove that hack. The kinds 
of expressions that become problematic are SELECT 3-1, 3--1. I'm sure there's 
an easy solution, but I never had the time to figure it out.

> Support negative literal directly
> -
>
> Key: PHOENIX-385
> URL: https://issues.apache.org/jira/browse/PHOENIX-385
> Project: Phoenix
>  Issue Type: Task
>Reporter: Raymond Liu
>Assignee: Dave Hacker
>
> Say, in SPLIT ON / LIMIT etc., which use a literal as input:
> negative values should be supported; at present, only positive numbers are 
> supported. This might not be true for LIMIT, but other cases like SPLIT ON 
> might need it.





[jira] [Comment Edited] (PHOENIX-385) Support negative literal directly

2015-03-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375175#comment-14375175
 ] 

James Taylor edited comment on PHOENIX-385 at 3/22/15 9:17 PM:
---

[~dhacker1341] - can you take a look at this one? It's likely the root cause of 
the \-1.00 issue you mentioned last week. See ParseNodeFactory.negate(ParseNode 
child) and the hack to recognize expressions like -1*1. A reference to a 
negative literal ends up flowing through this code and turning it into -1*xxx. 
If we recognize negative literals directly, we could remove that hack. The kinds 
of expressions that become problematic are SELECT 3-1, 3--1. I'm sure there's 
an easy solution, but I never had the time to figure it out.


was (Author: jamestaylor):
[~dhacker1341] - can you take a look at this one? It's likely the root cause of 
the --1.00 issue you mentioned last week. See ParseNodeFactory.negate(ParseNode 
child) and the hack to recognize expressions like -1*1. A reference to a 
negative literal ends up flowing through this code and turning it into -1*xxx. 
If we recognize negative literals directly, we could remove that hack. The kinds 
of expressions that become problematic are SELECT 3-1, 3--1. I'm sure there's 
an easy solution, but I never had the time to figure it out.

> Support negative literal directly
> -
>
> Key: PHOENIX-385
> URL: https://issues.apache.org/jira/browse/PHOENIX-385
> Project: Phoenix
>  Issue Type: Task
>Reporter: Raymond Liu
>Assignee: Dave Hacker
>
> Say, in SPLIT ON / LIMIT etc., which use a literal as input:
> negative values should be supported; at present, only positive numbers are 
> supported. This might not be true for LIMIT, but other cases like SPLIT ON 
> might need it.





[jira] [Comment Edited] (PHOENIX-385) Support negative literal directly

2015-03-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375175#comment-14375175
 ] 

James Taylor edited comment on PHOENIX-385 at 3/22/15 9:16 PM:
---

[~dhacker1341] - can you take a look at this one? It's likely the root cause of 
the --1.00 issue you mentioned last week. See ParseNodeFactory.negate(ParseNode 
child) and the hack to recognize expressions like -1*1. A reference to a 
negative literal ends up flowing through this code and turning it into -1*xxx. 
If we recognize negative literals directly, we could remove that hack. The kinds 
of expressions that become problematic are SELECT 3-1, 3--1. I'm sure there's 
an easy solution, but I never had the time to figure it out.


was (Author: jamestaylor):
[~dhacker1341] - can you take a look at this one? It's likely the root cause of 
the -1.00 issue you mentioned last week. See ParseNodeFactory.negate(ParseNode 
child) and the hack to recognize expressions like -1*1. A reference to a 
negative literal ends up flowing through this code and turning it into -1*xxx. 
If we recognize negative literals directly, we could remove that hack. The kinds 
of expressions that become problematic are SELECT 3-1, 3--1. I'm sure there's 
an easy solution, but I never had the time to figure it out.

> Support negative literal directly
> -
>
> Key: PHOENIX-385
> URL: https://issues.apache.org/jira/browse/PHOENIX-385
> Project: Phoenix
>  Issue Type: Task
>Reporter: Raymond Liu
>Assignee: Dave Hacker
>
> Say, in SPLIT ON / LIMIT etc., which use a literal as input:
> negative values should be supported; at present, only positive numbers are 
> supported. This might not be true for LIMIT, but other cases like SPLIT ON 
> might need it.





[jira] [Commented] (PHOENIX-385) Support negative literal directly

2015-03-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375175#comment-14375175
 ] 

James Taylor commented on PHOENIX-385:
--

[~dhacker1341] - can you take a look at this one? It's likely the root cause of 
the -1.00 issue you mentioned last week. See ParseNodeFactory.negate(ParseNode 
child) and the hack to recognize expressions like -1*1. A reference to a 
negative literal ends up flowing through this code and turning it into -1*xxx. 
If we recognize negative literals directly, we could remove that hack. The kinds 
of expressions that become problematic are SELECT 3-1, 3--1. I'm sure there's 
an easy solution, but I never had the time to figure it out.

> Support negative literal directly
> -
>
> Key: PHOENIX-385
> URL: https://issues.apache.org/jira/browse/PHOENIX-385
> Project: Phoenix
>  Issue Type: Task
>Reporter: Raymond Liu
>Assignee: Dave Hacker
>
> Say, in SPLIT ON / LIMIT etc., which use a literal as input:
> negative values should be supported; at present, only positive numbers are 
> supported. This might not be true for LIMIT, but other cases like SPLIT ON 
> might need it.
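The hack the comment describes (rewriting -1 as -1*x) can be avoided by recognizing negative literals at lex time: a minus is unary exactly when it cannot follow an operand. A minimal sketch of that rule, covering the problematic SELECT 3-1 and 3--1 cases; the grammar here is a toy expression language, not Phoenix's parser:

```python
import re

# Toy lexer: fold a unary minus into the following numeric literal. A minus
# is unary iff the token before it is absent or an operator/open-paren/comma,
# i.e. whenever it cannot be a binary subtraction.
TOKENS = re.compile(r"\d+(?:\.\d+)?|[A-Za-z_]\w*|[-+*/(),]")

def lex(expr):
    out = []
    for tok in TOKENS.findall(expr):
        if (out and out[-1] == "-" and tok[0].isdigit()
                and (len(out) < 2 or out[-2] in {"+", "-", "*", "/", "(", ","})):
            out[-1] = "-" + tok   # fold: this minus was unary
        else:
            out.append(tok)
    return out

# The cases from the comment now tokenize unambiguously:
# lex("3-1")  -> ["3", "-", "1"]    (binary minus)
# lex("3--1") -> ["3", "-", "-1"]   (binary minus, then a negative literal)
# lex("-1*1") -> ["-1", "*", "1"]   (negative literal, no -1*x rewrite needed)
```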





[jira] [Updated] (PHOENIX-385) Support negative literal directly

2015-03-22 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-385:
-
Assignee: Dave Hacker

> Support negative literal directly
> -
>
> Key: PHOENIX-385
> URL: https://issues.apache.org/jira/browse/PHOENIX-385
> Project: Phoenix
>  Issue Type: Task
>Reporter: Raymond Liu
>Assignee: Dave Hacker
>
> Say, in SPLIT ON / LIMIT etc., which use a literal as input:
> negative values should be supported; at present, only positive numbers are 
> supported. This might not be true for LIMIT, but other cases like SPLIT ON 
> might need it.





[jira] [Commented] (PHOENIX-1744) CAST from UNSIGNED_LONG (_INT) to * TIMESTAMP is not supported.

2015-03-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375169#comment-14375169
 ] 

James Taylor commented on PHOENIX-1744:
---

+1. [~samarthjain] - please check this in to the 4.3, 4.x-HBase-0.98, 4.x-HBase-1.x, 
and master branches on behalf of Dave.

> CAST from UNSIGNED_LONG (_INT) to * TIMESTAMP is not supported.
> ---
>
> Key: PHOENIX-1744
> URL: https://issues.apache.org/jira/browse/PHOENIX-1744
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Serhiy Bilousov
>Assignee: Dave Hacker
>Priority: Minor
>  Labels: 4.3.1
>
> Epoch time can be represented as an INTEGER (second precision) or a LONG 
> (millisecond precision). Currently, CAST from UNSIGNED_LONG to TIMESTAMP is 
> not supported by Phoenix. 
> It makes sense to support conversion from an epoch value (4 bytes or 8 bytes) 
> to any datetime-like format currently supported by Phoenix (TIME, DATE, 
> TIMESTAMP, UNSIGNED_TIME, UNSIGNED_DATE, UNSIGNED_TIMESTAMP).
> HBase shell:
> {noformat}
> create 't','f1'
> put 
> 't',"\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x01L\x0Fz,\x1E",'f1:c1','test'
> {noformat}
> sqlline:
> {noformat}
> CREATE VIEW vT
> (   a UNSIGNED_INT NOT NULL
>,b UNSIGNED_INT NOT NULL
>,ts UNSIGNED_LONG NOT NULL
> CONSTRAINT pk PRIMARY KEY (a, b, ts))
> AS SELECT * FROM "t"
> DEFAULT_COLUMN_FAMILY ='f1';
>  select a, b, ts, CAST(1426188807198 AS TIMESTAMP) from vt;
> ++++--+
> | A  | B  |   TS   | TO_TIMESTAMP(1426188807198)  |
> ++++--+
> | 1  | 1  | 1426188807198  | 2015-03-12 19:33:27.198  |
> ++++--+
> {noformat}
> but
> {noformat}
> select a, b, ts, CAST(ts AS TIMESTAMP) from vt;
> Error: ERROR 203 (22005): Type mismatch. UNSIGNED_LONG and TIMESTAMP for TS 
> (state=22005,code=203)
> {noformat}
> As per Gabriel Reid
> {quote}
> As a workaround, you can cast the UNSIGNED_LONG to a BIGINT first, and then 
> cast it to a TIMESTAMP, i.e.
> select a, b, ts, CAST(CAST(ts AS BIGINT) AS TIMESTAMP) from vt;
> {quote}
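For reference, the UNSIGNED_LONG in the example is epoch milliseconds, and the value decodes as shown below. A Python sketch using integer arithmetic (divmod) to avoid float rounding at millisecond precision:

```python
from datetime import datetime, timedelta, timezone

def epoch_millis_to_utc(ms):
    """Convert epoch milliseconds to a timezone-aware UTC datetime."""
    sec, millis = divmod(ms, 1000)
    return datetime.fromtimestamp(sec, tz=timezone.utc) + timedelta(milliseconds=millis)

dt = epoch_millis_to_utc(1426188807198)
# dt.isoformat() -> '2015-03-12T19:33:27.198000+00:00'
```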





[jira] [Commented] (PHOENIX-1744) CAST from UNSIGNED_LONG (_INT) to * TIMESTAMP is not supported.

2015-03-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1744?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375168#comment-14375168
 ] 

ASF GitHub Bot commented on PHOENIX-1744:
-

Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/52#discussion_r26907122
  
--- Diff: 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ToDateFunctionIT.java ---
@@ -176,4 +177,60 @@ public void 
testToDate_CustomTimeZoneViaQueryServicesAndCustomFormat() throws SQ
 callToDateFunction(
customTimeZoneConn, "TO_DATE('1970-01-01', 
'yyyy-MM-dd')").getTime());
 }
+
+@Test
+public void testTimestampCast() throws SQLException {
+Properties props = new Properties();
+props.setProperty(QueryServices.DATE_FORMAT_TIMEZONE_ATTRIB, 
"GMT+1");
+Connection customTimeZoneConn = 
DriverManager.getConnection(getUrl(), props);
+
+assertEquals(
+1426188807198L,
+callToDateFunction(
+customTimeZoneConn, "CAST(1426188807198 AS 
TIMESTAMP)").getTime());
+
+
+try {
+callToDateFunction(
+customTimeZoneConn, "CAST(22005 AS TIMESTAMP)");
+fail();
+} catch (TypeMismatchException e) {
+
+}
+}
+
+@Test
+public void testUnsignedLongToTimestampCast() throws SQLException {
+Properties props = new Properties();
+props.setProperty(QueryServices.DATE_FORMAT_TIMEZONE_ATTRIB, 
"GMT+1");
+Connection conn = DriverManager.getConnection(getUrl(), props);
+conn.setAutoCommit(false);
+try {
+conn.prepareStatement(
+"create table TT("
++ "a unsigned_int not null, "
++ "b unsigned_int not null, "
++ "ts unsigned_long not null "
++ "constraint PK primary key (a, b, 
ts))").execute();
+conn.commit();
--- End diff --

FYI, no need for commit after DDL statements.


> CAST from UNSIGNED_LONG (_INT) to * TIMESTAMP is not supported.
> ---
>
> Key: PHOENIX-1744
> URL: https://issues.apache.org/jira/browse/PHOENIX-1744
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Serhiy Bilousov
>Assignee: Dave Hacker
>Priority: Minor
>  Labels: 4.3.1
>
> Epoch time can be represented as an INTEGER (up to seconds) or a LONG (up to 
> milliseconds). Currently, CAST from UNSIGNED_LONG to TIMESTAMP is not 
> supported by Phoenix. 
> It makes sense to support conversion from epoch (4 bytes or 8 bytes) 
> to any datetime-like format currently supported by Phoenix (TIME, DATE, 
> TIMESTAMP, UNSIGNED_TIME, UNSIGNED_DATE, UNSIGNED_TIMESTAMP).
> HBase shell:
> {noformat}
> create 't','f1'
> put 
> 't',"\x00\x00\x00\x01\x00\x00\x00\x01\x00\x00\x01L\x0Fz,\x1E",'f1:c1','test'
> {noformat}
> sqlline:
> {noformat}
> CREATE VIEW vT
> (   a UNSIGNED_INT NOT NULL
>,b UNSIGNED_INT NOT NULL
>,ts UNSIGNED_LONG NOT NULL
> CONSTRAINT pk PRIMARY KEY (a, b, ts))
> AS SELECT * FROM "t"
> DEFAULT_COLUMN_FAMILY ='f1';
>  select a, b, ts, CAST(1426188807198 AS TIMESTAMP) from vt;
> ++++--+
> | A  | B  |   TS   | TO_TIMESTAMP(1426188807198)  |
> ++++--+
> | 1  | 1  | 1426188807198  | 2015-03-12 19:33:27.198  |
> ++++--+
> {noformat}
> but
> {noformat}
> select a, b, ts, CAST(ts AS TIMESTAMP) from vt;
> Error: ERROR 203 (22005): Type mismatch. UNSIGNED_LONG and TIMESTAMP for TS 
> (state=22005,code=203)
> {noformat}
> As per Gabriel Reid
> {quote}
> As a workaround, you can cast the UNSIGNED_LONG to a BIGINT first, and then 
> cast it to a TIMESTAMP, i.e.
> select a, b, ts, CAST(CAST(ts AS BIGINT) AS TIMESTAMP) from vt;
> {quote}





[GitHub] phoenix pull request: PHOENIX-1744 unsigned long cast to timestamp

2015-03-22 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/52#discussion_r26907122
  
--- Diff: phoenix-core/src/it/java/org/apache/phoenix/end2end/ToDateFunctionIT.java ---
@@ -176,4 +177,60 @@ public void testToDate_CustomTimeZoneViaQueryServicesAndCustomFormat() throws SQ
             callToDateFunction(
                 customTimeZoneConn, "TO_DATE('1970-01-01', 'yyyy-MM-dd')").getTime());
     }
+
+    @Test
+    public void testTimestampCast() throws SQLException {
+        Properties props = new Properties();
+        props.setProperty(QueryServices.DATE_FORMAT_TIMEZONE_ATTRIB, "GMT+1");
+        Connection customTimeZoneConn = DriverManager.getConnection(getUrl(), props);
+
+        assertEquals(
+            1426188807198L,
+            callToDateFunction(
+                customTimeZoneConn, "CAST(1426188807198 AS TIMESTAMP)").getTime());
+
+        try {
+            callToDateFunction(
+                customTimeZoneConn, "CAST(22005 AS TIMESTAMP)");
+            fail();
+        } catch (TypeMismatchException e) {
+        }
+    }
+
+    @Test
+    public void testUnsignedLongToTimestampCast() throws SQLException {
+        Properties props = new Properties();
+        props.setProperty(QueryServices.DATE_FORMAT_TIMEZONE_ATTRIB, "GMT+1");
+        Connection conn = DriverManager.getConnection(getUrl(), props);
+        conn.setAutoCommit(false);
+        try {
+            conn.prepareStatement(
+                "create table TT("
+                    + "a unsigned_int not null, "
+                    + "b unsigned_int not null, "
+                    + "ts unsigned_long not null "
+                    + "constraint PK primary key (a, b, ts))").execute();
+            conn.commit();
--- End diff --

FYI, no need for commit after DDL statements.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-1580) Support UNION ALL

2015-03-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375144#comment-14375144
 ] 

James Taylor commented on PHOENIX-1580:
---

See PHOENIX-1749 as well. Supporting this would be pretty easy and would allow a 
good way for the ORDER BY to be specified when doing a union. We should be careful 
to push down the ORDER BY and LIMIT into the sub-plan of each statement being 
unioned so that all the client needs to do is a merge sort.
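The client-side combine step amounts to a k-way merge over the already-sorted sub-plan iterators. A hypothetical Python sketch (illustrative only; Phoenix itself is implemented in Java, and the row data here is made up):

```python
import heapq

# Rows from two hypothetical UNION ALL sub-plans, each already sorted by
# the ORDER BY key (the first column) because the ORDER BY was pushed down.
sub_plan_1 = [(1, 'a'), (3, 'c'), (5, 'e')]
sub_plan_2 = [(2, 'b'), (4, 'd')]

# The client only needs a merge sort over the pre-sorted iterators,
# never a full re-sort of the combined result.
merged = list(heapq.merge(sub_plan_1, sub_plan_2, key=lambda row: row[0]))
print(merged)  # [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd'), (5, 'e')]
```

`heapq.merge` consumes each input lazily, which mirrors how combined ResultIterators can stream rows without buffering whole result sets.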

> Support UNION ALL
> -
>
> Key: PHOENIX-1580
> URL: https://issues.apache.org/jira/browse/PHOENIX-1580
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Attachments: phoenix-1580.patch, unionall-wipe.patch
>
>
> Select * from T1
> UNION ALL
> Select * from T2





[jira] [Commented] (PHOENIX-1768) GROUP BY should support column position as well as column alias

2015-03-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375140#comment-14375140
 ] 

James Taylor commented on PHOENIX-1768:
---

[~julianhyde] - is specifying a column position in the GROUP BY part of the 
ANSI SQL standard?

> GROUP BY should support column position as well as column alias
> ---
>
> Key: PHOENIX-1768
> URL: https://issues.apache.org/jira/browse/PHOENIX-1768
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Serhiy Bilousov
>
> In postgreSQL (and many other DBs) you can specify not only a column name for 
> the GROUP BY but also a column number (position in the SELECT list) as well as 
> a column alias.
> see:
> http://www.postgresql.org/docs/9.4/static/queries-order.html
> http://www.postgresql.org/docs/9.4/static/sql-select.html#SQL-GROUPBY
> Adding such support would be very helpful and sometimes necessary.
> I can provide real query examples if required, but basically we want 
> something like this:
> given the query
> SELECT a, b, TRUNC(current_date(),'HOUR') AS date_truncated FROM table 
> we want 
> GROUP BY 1, 2
> Having just the column number would cover both, but having the column alias 
> would make queries more readable and human friendly. Plus it moves one little 
> step closer to postgreSQL and the SQL standard.





[jira] [Updated] (PHOENIX-1749) ORDER BY should support column position as well as column alias

2015-03-22 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1749:
--
Summary: ORDER BY should support column position as well as column alias  
(was: ORDER BY and GROUP BY should support column position as well as column 
alias)

> ORDER BY should support column position as well as column alias
> ---
>
> Key: PHOENIX-1749
> URL: https://issues.apache.org/jira/browse/PHOENIX-1749
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Serhiy Bilousov
>
> In postgreSQL (and many other DBs) you can specify not only a column name for 
> the ORDER BY and GROUP BY but also a column number (position in the SELECT 
> list) as well as a column alias.
> see:
> http://www.postgresql.org/docs/9.4/static/queries-order.html
> http://www.postgresql.org/docs/9.4/static/sql-select.html#SQL-GROUPBY
> Adding such support would be very helpful and sometimes necessary.
> I can provide real query examples if required, but basically we want 
> something like this:
> given the query
> SELECT a, b, TRUNC(current_date(),'HOUR') AS date_truncated FROM table 
> we want 
> ORDER BY 1 ASC, 2 DESC
> ORDER BY date_truncated 
> and the same for GROUP BY
> Having just the column number would cover both, but having the column alias 
> would make queries more readable and human friendly. Plus it moves one little 
> step closer to postgreSQL and the SQL standard.





[jira] [Updated] (PHOENIX-1749) ORDER BY should support column position as well as column alias

2015-03-22 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1749:
--
Description: 
In postgreSQL (and many other DBs) you can specify not only a column name for 
the ORDER BY but also a column number (position in the SELECT list) as well as 
a column alias.

see:
http://www.postgresql.org/docs/9.4/static/queries-order.html
http://www.postgresql.org/docs/9.4/static/sql-select.html#SQL-GROUPBY

Adding such support would be very helpful and sometimes necessary.

I can provide real query examples if required, but basically we want something 
like this:

given the query
SELECT a, b, TRUNC(current_date(),'HOUR') AS date_truncated FROM table 
we want 
ORDER BY 1 ASC, 2 DESC
ORDER BY date_truncated 

Having just the column number would cover both, but having the column alias 
would make queries more readable and human friendly. Plus it moves one little 
step closer to postgreSQL and the SQL standard.

  was:
In postgreSQL (and many other DBs) you can specify not only a column name for 
the ORDER BY and GROUP BY but also a column number (position in the SELECT 
list) as well as a column alias.

see:
http://www.postgresql.org/docs/9.4/static/queries-order.html
http://www.postgresql.org/docs/9.4/static/sql-select.html#SQL-GROUPBY

Adding such support would be very helpful and sometimes necessary.

I can provide real query examples if required, but basically we want something 
like this:

given the query
SELECT a, b, TRUNC(current_date(),'HOUR') AS date_truncated FROM table 
we want 
ORDER BY 1 ASC, 2 DESC
ORDER BY date_truncated 
and the same for GROUP BY

Having just the column number would cover both, but having the column alias 
would make queries more readable and human friendly. Plus it moves one little 
step closer to postgreSQL and the SQL standard.


> ORDER BY should support column position as well as column alias
> ---
>
> Key: PHOENIX-1749
> URL: https://issues.apache.org/jira/browse/PHOENIX-1749
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Serhiy Bilousov
>
> In postgreSQL (and many other DBs) you can specify not only a column name for 
> the ORDER BY but also a column number (position in the SELECT list) as well 
> as a column alias.
> see:
> http://www.postgresql.org/docs/9.4/static/queries-order.html
> http://www.postgresql.org/docs/9.4/static/sql-select.html#SQL-GROUPBY
> Adding such support would be very helpful and sometimes necessary.
> I can provide real query examples if required, but basically we want 
> something like this:
> given the query
> SELECT a, b, TRUNC(current_date(),'HOUR') AS date_truncated FROM table 
> we want 
> ORDER BY 1 ASC, 2 DESC
> ORDER BY date_truncated 
> Having just the column number would cover both, but having the column alias 
> would make queries more readable and human friendly. Plus it moves one little 
> step closer to postgreSQL and the SQL standard.





[jira] [Updated] (PHOENIX-1768) GROUP BY should support column position as well as column alias

2015-03-22 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1768:
--
Description: 
In postgreSQL (and many other DBs) you can specify not only a column name for 
the GROUP BY but also a column number (position in the SELECT list) as well as 
a column alias.

see:
http://www.postgresql.org/docs/9.4/static/queries-order.html
http://www.postgresql.org/docs/9.4/static/sql-select.html#SQL-GROUPBY

Adding such support would be very helpful and sometimes necessary.

I can provide real query examples if required, but basically we want something 
like this:

given the query
SELECT a, b, TRUNC(current_date(),'HOUR') AS date_truncated FROM table 
we want 
GROUP BY 1, 2

Having just the column number would cover both, but having the column alias 
would make queries more readable and human friendly. Plus it moves one little 
step closer to postgreSQL and the SQL standard.

  was:
In postgreSQL (and many other DBs) you can specify not only a column name for 
the ORDER BY and GROUP BY but also a column number (position in the SELECT 
list) as well as a column alias.

see:
http://www.postgresql.org/docs/9.4/static/queries-order.html
http://www.postgresql.org/docs/9.4/static/sql-select.html#SQL-GROUPBY

Adding such support would be very helpful and sometimes necessary.

I can provide real query examples if required, but basically we want something 
like this:

given the query
SELECT a, b, TRUNC(current_date(),'HOUR') AS date_truncated FROM table 
we want 
ORDER BY 1 ASC, 2 DESC
ORDER BY date_truncated 
and the same for GROUP BY

Having just the column number would cover both, but having the column alias 
would make queries more readable and human friendly. Plus it moves one little 
step closer to postgreSQL and the SQL standard.


> GROUP BY should support column position as well as column alias
> ---
>
> Key: PHOENIX-1768
> URL: https://issues.apache.org/jira/browse/PHOENIX-1768
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Serhiy Bilousov
>
> In postgreSQL (and many other DBs) you can specify not only a column name for 
> the GROUP BY but also a column number (position in the SELECT list) as well 
> as a column alias.
> see:
> http://www.postgresql.org/docs/9.4/static/queries-order.html
> http://www.postgresql.org/docs/9.4/static/sql-select.html#SQL-GROUPBY
> Adding such support would be very helpful and sometimes necessary.
> I can provide real query examples if required, but basically we want 
> something like this:
> given the query
> SELECT a, b, TRUNC(current_date(),'HOUR') AS date_truncated FROM table 
> we want 
> GROUP BY 1, 2
> Having just the column number would cover both, but having the column alias 
> would make queries more readable and human friendly. Plus it moves one little 
> step closer to postgreSQL and the SQL standard.





[jira] [Created] (PHOENIX-1768) GROUP BY should support column position as well as column alias

2015-03-22 Thread James Taylor (JIRA)
James Taylor created PHOENIX-1768:
-

 Summary: GROUP BY should support column position as well as column 
alias
 Key: PHOENIX-1768
 URL: https://issues.apache.org/jira/browse/PHOENIX-1768
 Project: Phoenix
  Issue Type: Bug
Reporter: Serhiy Bilousov


In postgreSQL (and many other DBs) you can specify not only a column name for 
the ORDER BY and GROUP BY but also a column number (position in the SELECT 
list) as well as a column alias.

see:
http://www.postgresql.org/docs/9.4/static/queries-order.html
http://www.postgresql.org/docs/9.4/static/sql-select.html#SQL-GROUPBY

Adding such support would be very helpful and sometimes necessary.

I can provide real query examples if required, but basically we want something 
like this:

given the query
SELECT a, b, TRUNC(current_date(),'HOUR') AS date_truncated FROM table 
we want 
ORDER BY 1 ASC, 2 DESC
ORDER BY date_truncated 
and the same for GROUP BY

Having just the column number would cover both, but having the column alias 
would make queries more readable and human friendly. Plus it moves one little 
step closer to postgreSQL and the SQL standard.





[jira] [Commented] (PHOENIX-1749) ORDER BY and GROUP BY should support column position as well as column alias

2015-03-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375135#comment-14375135
 ] 

James Taylor commented on PHOENIX-1749:
---

This would be particularly good to support in conjunction with UNION ALL, as it 
provides a way to specify the order by across all statements being unioned. I 
don't think we need it for GROUP BY (and that would be more difficult without 
providing a lot of benefit). We could push the ORDER BY info into each separate 
statement and then do a final merge sort on the client when the ResultIterators 
are combined.

Supporting this would require two changes:
1. In QueryCompiler.compileSingleFlatQuery(), compute the RowProjector before 
compiling the OrderBy.
2. Pass the RowProjector in to OrderByCompiler and interpret a constant number 
in the ORDER BY expressions as an index into the RowProjector expressions.

[~ayingshu], [~maryannxue]
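The second change, treating a constant in the ORDER BY as a 1-based index into the row projector, can be sketched roughly as follows (hypothetical Python; the actual implementation would be Java code inside OrderByCompiler, and the names here are illustrative):

```python
def resolve_order_by(order_by_items, projected_expressions):
    """Resolve ORDER BY items against the SELECT projection.

    An integer literal N is treated as the N-th (1-based) projected
    expression; anything else is assumed to already be a resolved
    expression, e.g. a column name or alias.
    """
    resolved = []
    for item in order_by_items:
        if isinstance(item, int):
            if not 1 <= item <= len(projected_expressions):
                raise ValueError("ORDER BY position %d is out of range" % item)
            resolved.append(projected_expressions[item - 1])
        else:
            resolved.append(item)
    return resolved

# SELECT a, b, TRUNC(current_date(),'HOUR') AS date_truncated ... ORDER BY 1, 3
print(resolve_order_by([1, 3], ["a", "b", "date_truncated"]))  # ['a', 'date_truncated']
```

This is also why the RowProjector must be computed before the OrderBy is compiled: the ordinal has nothing to bind to otherwise.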

> ORDER BY and GROUP BY should support column position as well as column alias
> 
>
> Key: PHOENIX-1749
> URL: https://issues.apache.org/jira/browse/PHOENIX-1749
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Serhiy Bilousov
>
> In postgreSQL (and many other DBs) you can specify not only a column name for 
> the ORDER BY and GROUP BY but also a column number (position in the SELECT 
> list) as well as a column alias.
> see:
> http://www.postgresql.org/docs/9.4/static/queries-order.html
> http://www.postgresql.org/docs/9.4/static/sql-select.html#SQL-GROUPBY
> Adding such support would be very helpful and sometimes necessary.
> I can provide real query examples if required, but basically we want 
> something like this:
> given the query
> SELECT a, b, TRUNC(current_date(),'HOUR') AS date_truncated FROM table 
> we want 
> ORDER BY 1 ASC, 2 DESC
> ORDER BY date_truncated 
> and the same for GROUP BY
> Having just the column number would cover both, but having the column alias 
> would make queries more readable and human friendly. Plus it moves one little 
> step closer to postgreSQL and the SQL standard.





[jira] [Commented] (PHOENIX-538) Support UDFs

2015-03-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375131#comment-14375131
 ] 

James Taylor commented on PHOENIX-538:
--

bq. 3) I think it's fine to support phoenix data types as argument types right? 
Ex: create function MY_REVERSE(VARCHAR,CHAR) returns VARCHAR
Those are SQL types (not Phoenix-specific types), and yes, I agree that seems 
like a good way to specify them.

bq. 4) Do we need to allow to specify any details for arguments like not null, 
constant, max length,precision,scale? 
Maybe as follow-up work? It'd be good to get the basic stuff in first. FWIW, 
the max length, precision, and scale are part of the type declaration, so we'd 
get them with your proposed syntax. For example: create function 
power(DECIMAL(10,2), INTEGER).

It seems reasonable to have separate commands that manage the dynamic class 
path. Maybe we could even punt on this initially and folks would need to 
manage it manually?

Glad you're pursuing this, [~rajeshbabu] - this will be a nice feature. I'd 
make sure you have a way of disallowing it, though, just in case folks 
don't want to allow this functionality.



> Support UDFs
> 
>
> Key: PHOENIX-538
> URL: https://issues.apache.org/jira/browse/PHOENIX-538
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Rajeshbabu Chintaguntla
>
> Phoenix allows built-in functions to be added (as described 
> [here](http://phoenix-hbase.blogspot.com/2013/04/how-to-add-your-own-built-in-function.html))
>  with the restriction that they must be in the phoenix jar. We should improve 
> on this and allow folks to declare new functions through a CREATE FUNCTION 
> command like this:
>   CREATE FUNCTION mdHash(anytype)
>   RETURNS binary(16)
>   LOCATION 'hdfs://path-to-my-jar' 'com.me.MDHashFunction'
> Since HBase supports loading jars dynamically, this would not be too 
> difficult. The function implementation class would be required to extend our 
> ScalarFunction base class. Here's how I could see it being implemented:
> * modify the phoenix grammar to support the new CREATE FUNCTION syntax
> * create a new UDFParseNode class to capture the parse state
> * add a new method to the MetaDataProtocol interface
> * add a new method in ConnectionQueryServices to invoke the MetaDataProtocol 
> method
> * add a new method in MetaDataClient to invoke the ConnectionQueryServices 
> method
> * persist functions in a new "SYSTEM.FUNCTION" table
> * add a new client-side representation to cache functions called PFunction
> * modify ColumnResolver to dynamically resolve a function in the same way we 
> dynamically resolve and load a table
> * create and register a new ExpressionType called UDFExpression
> * at parse time, check for the function name in the built-in list first (as 
> is currently done), and if not found, in the PFunction cache. If not found 
> there, then use the new UDFExpression as a placeholder and have the 
> ColumnResolver attempt to resolve it at compile time and throw an error if 
> unsuccessful.
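The parse-time lookup order in the last bullet can be sketched as follows (hypothetical Python; Phoenix itself is Java, and names like `BUILT_INS` are illustrative stand-ins, not real Phoenix APIs):

```python
BUILT_INS = {"UPPER", "LOWER", "TO_DATE"}  # stand-in for the real built-in list

class UDFExpression:
    """Placeholder; the ColumnResolver would resolve it at compile time."""
    def __init__(self, name):
        self.name = name

def resolve_function(name, pfunction_cache):
    # 1. Check the built-in list first (as is currently done).
    if name.upper() in BUILT_INS:
        return ("built-in", name.upper())
    # 2. If not found, check the client-side PFunction cache.
    if name in pfunction_cache:
        return ("udf", pfunction_cache[name])
    # 3. Otherwise return a placeholder, resolved (or failed) at compile time.
    return ("placeholder", UDFExpression(name))

print(resolve_function("upper", {})[0])  # built-in
print(resolve_function("mdHash", {"mdHash": "com.me.MDHashFunction"})[0])  # udf
```

The placeholder in step 3 mirrors how a table reference is resolved lazily: parsing succeeds, and only compilation fails if the function truly does not exist.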





[jira] [Commented] (PHOENIX-538) Support UDFs

2015-03-22 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375127#comment-14375127
 ] 

James Taylor commented on PHOENIX-538:
--

That's good to hear, [~rajeshbabu]. The central mission of Phoenix is to 
provide ANSI SQL compatibility, so I'd stick with ANSI SQL when there's a 
standard for it. I believe Hive is moving in this direction as well, so perhaps 
we're aligned in this goal? [~julianhyde] - do you know if there's standard 
syntax for defining a function? 

> Support UDFs
> 
>
> Key: PHOENIX-538
> URL: https://issues.apache.org/jira/browse/PHOENIX-538
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Rajeshbabu Chintaguntla
>
> Phoenix allows built-in functions to be added (as described 
> [here](http://phoenix-hbase.blogspot.com/2013/04/how-to-add-your-own-built-in-function.html))
>  with the restriction that they must be in the phoenix jar. We should improve 
> on this and allow folks to declare new functions through a CREATE FUNCTION 
> command like this:
>   CREATE FUNCTION mdHash(anytype)
>   RETURNS binary(16)
>   LOCATION 'hdfs://path-to-my-jar' 'com.me.MDHashFunction'
> Since HBase supports loading jars dynamically, this would not be too 
> difficult. The function implementation class would be required to extend our 
> ScalarFunction base class. Here's how I could see it being implemented:
> * modify the phoenix grammar to support the new CREATE FUNCTION syntax
> * create a new UDFParseNode class to capture the parse state
> * add a new method to the MetaDataProtocol interface
> * add a new method in ConnectionQueryServices to invoke the MetaDataProtocol 
> method
> * add a new method in MetaDataClient to invoke the ConnectionQueryServices 
> method
> * persist functions in a new "SYSTEM.FUNCTION" table
> * add a new client-side representation to cache functions called PFunction
> * modify ColumnResolver to dynamically resolve a function in the same way we 
> dynamically resolve and load a table
> * create and register a new ExpressionType called UDFExpression
> * at parse time, check for the function name in the built-in list first (as 
> is currently done), and if not found, in the PFunction cache. If not found 
> there, then use the new UDFExpression as a placeholder and have the 
> ColumnResolver attempt to resolve it at compile time and throw an error if 
> unsuccessful.





Re: [IMPORTANT] Some changes to branches and releases for 4.4+

2015-03-22 Thread James Taylor
I think we can stick with just 4.x-HBase-0.98 and master branch for
now until we need to work simultaneously on a Phoenix release that
supports both HBase 1.0 and HBase 1.1. Seems like the earliest would
be closer to an HBase 1.1 release. Any idea when that might be?
Otherwise, the overhead of keeping master in sync with 4.x-HBase-1.x
is wasted effort (as they'll be exactly the same until then).

Thoughts?

On Fri, Mar 20, 2015 at 4:53 PM, James Taylor  wrote:
> Is this fixed yet? If not, would it be possible for you to set the pom
> to HBase-1.0.1 instead so that master will build? Just don't want to
> leave it in a broken state.
> Thanks,
> James
>
> On Thu, Mar 19, 2015 at 7:31 PM, Enis Söztutar  wrote:
>> About the 4.x-HBase-1.x branch, it seems that I have spoken too soon.
>> Current branch head does not compile with latest HBase-1.1.0-SNAPSHOT:
>>
>> It seems the RegionScanner changes are the problem. Let me look into how we
>> can resolve those for future compatibility.
>>
>> Enis
>>
>> On Thu, Mar 19, 2015 at 2:15 PM, Enis Söztutar  wrote:
>>
>>> Hi,
>>>
>>> As per private PMC threads and the dev discussions [1], I have created two
>>> new branches for 4.x development for supporting both HBase-0.98 and
>>> HBase-1.0 versions. The goal is to have 4.4.0 and 4.5.0, etc releases which
>>> support both of the HBase versions and possibly HBase-1.1.0+ as well.
>>>
>>> See [1] for why the branches are needed (this seems like the least bad
>>> approach). Here are the changes I did for this:
>>>
>>> BRANCH CHANGES:
>>> - Committed PHOENIX-1642 to master
>>> - Created branch-4.x-HBase-0.98. Pushed to git repo
>>> - Created branch-4.x-HBase-1.x. Pushed to git repo
>>> - Changed versions to be 4.4.0-HBase-0.98-SNAPSHOT and
>>> 4.4.0-HBase-1.x-SNAPSHOT respectively in above branches
>>> - Cherry-picked PHOENIX-1642 to branch-4.x-HBase-1.x
>>> - Deleted branch named "4.0". (there is no rename of branches in git)
>>>
>>> I have named the branch 4.x-HBase-1.x instead of suffix HBase-1.0 in hopes
>>> that further HBase-1.1, 1.2 can be supported in this branch and we can get
>>> away without branching again for 1.1. See especially HBASE-12972. We can
>>> change this later on if it is not the case.
>>>
>>>
>>> JENKINS CHANGES:
>>> - Disabled Phoenix-4.0 job (Lets keep it around for a couple of days just
>>> in case)
>>> - Created new jobs for these two branches:
>>>
>>> https://builds.apache.org/view/All/job/Phoenix-4.x-HBase-0.98/
>>> https://builds.apache.org/job/Phoenix-4.x-HBase-1.x/
>>>
>>> The build should be similar to the previous 4.0 branch builds.
>>>
>>>
>>> JIRA CHANGES:
>>>  - Renamed release version 4.4 in jira to 4.4.0
>>>
>>>
>>> Further changes coming shortly unless objection:
>>>  - Delete jenkins job
>>> https://builds.apache.org/view/All/job/Phoenix%202.0/  (does not seem to
>>> be used for more than 1 year)
>>>  - Delete jenkins job https://builds.apache.org/view/All/job/Phoenix-2.0/
>>>  - Delete jenkins job https://builds.apache.org/view/All/job/Phoenix-4.0/
>>>
>>>
>>> How does this affect development and releases?
>>>  - Current master is version 5.0.0-SNAPSHOT. It builds with
>>> HBase-1.0.1-SNAPSHOT (from apache snapshots repo).
>>>  - branch-4.x-HBase-0.98 is very similar to old 4.0 branch. It builds with
>>> HBase-0.98.9-hadoop2
>>>  - branch-4.x-HBase-1.x is forked from branch-4.x-HBase-0.98 and builds
>>> with HBase-1.0.1-SNAPSHOT.
>>>  - There should be two release artifacts (or releases simultaneously) for
>>> 4.4 release. One will have version 4.4.0-HBase-0.98 and the other
>>> 4.4.0-HBase-1.x. We can make it so that the RM creates both releases at the
>>> same time, and the VOTE applies to both releases.
>>>  - All changes MUST be committed to both branches for future 4.x releases
>>> unless it is HBase version specific. There is no way to auto-enforce it, so
>>> all committers should take this into account. The patches might differ
>>> slightly. Before the release, the RM may do some manual checks to ensure that
>>> every patch is committed to both branches.
>>>  - Old 4.0 is deleted from git repository. Please re-check or rename your
>>> local branches. Please do not push anything there (as it will re-create the
>>> branch).
>>>  - There is only one jira version 4.4.0, which should apply equally to
>>> both release versions. If needed we can differentiate these in jira as
>>> well. Let me know.
>>>  - Before the 4.4.0 release, RM should fork both 4.x branches and name
>>> them 4.4-HBase-XXX. At that time, we will have 1 master branch, 2 of 4.x
>>> branches and 2 of 4.4 branches.
>>>
>>> Let me know if you have further concerns. Let's see how well this process
>>> works.
>>>
>>> Thanks,
>>> Enis
>>>
>>> Ref:
>>> [1] http://search-hadoop.com/m/lz2la1GgkPx
>>>
>>>


[jira] [Created] (PHOENIX-1767) java.lang.NoSuchFieldError: NO_NEXT_INDEXED_KEY

2015-03-22 Thread James Taylor (JIRA)
James Taylor created PHOENIX-1767:
-

 Summary: java.lang.NoSuchFieldError: NO_NEXT_INDEXED_KEY
 Key: PHOENIX-1767
 URL: https://issues.apache.org/jira/browse/PHOENIX-1767
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0
Reporter: James Taylor
Assignee: Enis Soztutar


Unit tests on master/4.x-HBase-1.x branch are failing with the following:

Caused by: java.lang.NoSuchFieldError: NO_NEXT_INDEXED_KEY
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderV2$AbstractScannerV2.reseekTo(HFileReaderV2.java:590)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:267)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:181)
at 
org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:312)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:256)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:729)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.seekAsDirection(StoreScanner.java:715)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:565)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:139)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:4307)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:4387)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:4266)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:4244)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.next(HRegion.java:4231)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:480)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:377)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.loadTable(MetaDataEndpointImpl.java:691)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:769)
... 10 more






[jira] [Comment Edited] (PHOENIX-538) Support UDFs

2015-03-22 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375110#comment-14375110
 ] 

Rajeshbabu Chintaguntla edited comment on PHOENIX-538 at 3/22/15 7:00 PM:
--

[~jamestaylor]
I am working on this and it is nearly complete. I have a few doubts; could 
you please clarify them?
1) bq. Since HBase supports loading jars dynamically, this would not be too 
difficult.
To load jars dynamically we can use HBase's DynamicClassLoader, and we 
should configure hbase.dynamic.jars.dir with the path of the jars in HDFS.
This creates a Hadoop jar dependency on the Phoenix side, because we need 
the file system classes on the classpath, and those are in hadoop-common 
and hadoop-hdfs. We need to check whether there is any way to avoid this 
dependency. 
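
The jar-loading fallback described in (1) can be sketched in plain Java. 
This uses a URLClassLoader over a jar directory, delegating to the parent 
loader, purely as a stand-in for HBase's DynamicClassLoader; the directory 
and the com.me.MDHashFunction name are illustrative assumptions, not the 
actual implementation:

```java
// Hedged sketch: resolve a class on the local classpath first, then fall
// back to jars under a configured directory (a stand-in for the behavior
// of hbase.dynamic.jars.dir). Names and paths are illustrative.
import java.io.File;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.List;

public class DynamicLoadSketch {
    static Class<?> loadUdfClass(String className, File jarDir) throws Exception {
        try {
            return Class.forName(className); // local classpath first
        } catch (ClassNotFoundException e) {
            // Collect every jar in the configured directory.
            List<URL> urls = new ArrayList<>();
            File[] jars = jarDir.listFiles((d, n) -> n.endsWith(".jar"));
            if (jars != null) {
                for (File jar : jars) {
                    urls.add(jar.toURI().toURL());
                }
            }
            // Fall back to the jar directory, delegating to the parent loader.
            try (URLClassLoader loader = new URLClassLoader(
                    urls.toArray(new URL[0]),
                    DynamicLoadSketch.class.getClassLoader())) {
                return loader.loadClass(className);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        File jarDir = new File(System.getProperty("java.io.tmpdir"));
        // java.util.ArrayList resolves on the local classpath; a UDF class
        // such as com.me.MDHashFunction would come from the jar directory.
        System.out.println(loadUdfClass("java.util.ArrayList", jarDir).getName());
    }
}
```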
2) You mentioned that the CREATE FUNCTION syntax should be as below:
{noformat}
CREATE FUNCTION mdHash(anytype)
RETURNS binary(16)
LOCATION 'hdfs://path-to-my-jar' 'com.me.MDHashFunction'
{noformat}

I think it's better to use the same syntax as Hive, to stay consistent 
with it. What do you say?
{noformat}
CREATE [TEMPORARY] FUNCTION function_name AS class_name
  [USING JAR|FILE|ARCHIVE 'file_uri' [, JAR|FILE|ARCHIVE 'file_uri'] ];
{noformat}
We could also support ADD/DELETE JAR statements that add jars to the 
hbase.dynamic.jars.dir path independently, before the function is created, 
so that a simplified CREATE FUNCTION can be used. This could be done as a 
later improvement. Suggestions?
{noformat}
ADD JAR[S] <filepath> [<filepath>]*
LIST JAR[S] [<filepath> ..]
DELETE JAR[S] [<filepath> ..]
{noformat}
{noformat}
CREATE [TEMPORARY] FUNCTION function_name AS class_name
{noformat}

3) Coming to function arguments: I think it's fine to support Phoenix data 
types as argument types, right?
Ex: CREATE FUNCTION MY_REVERSE(VARCHAR, CHAR) RETURNS VARCHAR
Or do we need to support Java data types? (That makes things more complex, 
e.g. mapping Java types to Phoenix types.) 

4) Do we need to allow specifying details for the arguments, such as not 
null, constant, max length, precision, and scale? Currently we provide 
these through annotations in the built-in function classes. These details 
help to validate whether a function call is proper. If we don't want to 
support this, the UDF developer should take care of providing the allowed 
arguments and their details, the same as for built-in functions.
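
As a rough illustration of the annotation-driven argument metadata 
mentioned in (4), here is a minimal, self-contained Java sketch. The 
@Argument annotation and the MY_REVERSE function class are hypothetical 
stand-ins, not Phoenix's actual annotations:

```java
// Hedged sketch of declaring argument constraints (type, nullability,
// max length) via repeatable annotations on a function class, loosely
// modeled on how built-in functions might declare them. All names here
// are illustrative, not Phoenix's real API.
import java.lang.annotation.Repeatable;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class ArgumentMetadataSketch {
    @Retention(RetentionPolicy.RUNTIME)
    @interface Arguments { Argument[] value(); }

    @Repeatable(Arguments.class)
    @Retention(RetentionPolicy.RUNTIME)
    @interface Argument {
        String type();
        boolean nullable() default true;
        int maxLength() default -1;  // -1 means unbounded
    }

    // A hypothetical UDF declaring its two argument constraints.
    @Argument(type = "VARCHAR", nullable = false, maxLength = 100)
    @Argument(type = "CHAR")
    static class MyReverseFunction {}

    public static void main(String[] args) {
        // A validator could read this metadata reflectively to check calls.
        for (Argument a : MyReverseFunction.class.getAnnotationsByType(Argument.class)) {
            System.out.println(a.type() + " nullable=" + a.nullable());
        }
    }
}
```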


was (Author: rajeshbabu):
[~jamestaylor]
I am working on this and about to complete this. I had some doubts can you 
please clarify.
1) bq. Since HBase supports loading jars dynamically, this would not be too 
difficult.
To support dynamically loading the jars we can use DynamicClassLoader and We 
should configure hbase.dynamic.jars.dir with the path of jars in hdfs.
This creates hadoop jar dependency at phoenix side because we need to have file 
system related classes in classpath which are present in 
hadoop-common,hadoop-hdfs. We need to check any way we can avoid this 
dependency. 
2) You have mentioned creation function syntax should be as below
{noformat}
CREATE FUNCTION mdHash(anytype)
RETURNS binary(16)
LOCATION 'hdfs://path-to-my-jar' 'com.me.MDHashFunction'
{noformat}

I think it's better to use same syntax as Hive to make it consistent with Hive. 
What do you say?
{noformat}
CREATE [TEMPORARY] FUNCTION function_name AS class_name
  [USING JAR|FILE|ARCHIVE 'file_uri' [, JAR|FILE|ARCHIVE 'file_uri'] ];
{noformat}
And also we can support add/delete jar queries to add jars to path of 
hbase.dynamic.jars.dir configuration independently before creating function so 
that we can use simplified version of create function. This can be done as 
improvement later. suggestions?
{noformat}
ADD JAR[S]  []*
LIST JAR[S] [  ..]
DELETE JAR[S] [  ..] 
{noformat}
{noformat}
CREATE [TEMPORARY] FUNCTION function_name AS class_name
{noformat}

3) If we come to function arguments I think it's fine to support phoenix data 
types as argument types right?
Ex: create function MY_REVERSE(VARCHAR,CHAR) returns VARCHAR
 or we need to support java data types(This makes things complex like mapping 
java types to phoenix types)? 

4)Do we need to allow to specify any details for arguments like not null, 
constant, max length,precision,scale? Currently we are providing these through 
annotations in built-in functions classes.These details helps to validate 
whether proper function or not. If we don't want to support this UDF developer 
should take care of these same as builtin functions. 

> Support UDFs
> 
>
> Key: PHOENIX-538
> URL: https://issues.apache.org/jira/browse/PHOENIX-538
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Rajeshbabu Chintaguntla
>
> Phoenix allows built-in functions to be added (as described 
> [here](http://phoenix-hbase.blogspot.com/2013/04/how-to-add-your-own-built-in-function.html))
>  with

[jira] [Commented] (PHOENIX-538) Support UDFs

2015-03-22 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14375110#comment-14375110
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-538:
-

[~jamestaylor]
I am working on this and it is nearly complete. I have a few doubts; could 
you please clarify them?
1) bq. Since HBase supports loading jars dynamically, this would not be too 
difficult.
To load jars dynamically we can use DynamicClassLoader, and we should 
configure hbase.dynamic.jars.dir with the path of the jars in HDFS.
This creates a Hadoop jar dependency on the Phoenix side, because we need 
the file system classes on the classpath, and those are in hadoop-common 
and hadoop-hdfs. We need to check whether there is any way to avoid this 
dependency. 
2) You mentioned that the CREATE FUNCTION syntax should be as below:
{noformat}
CREATE FUNCTION mdHash(anytype)
RETURNS binary(16)
LOCATION 'hdfs://path-to-my-jar' 'com.me.MDHashFunction'
{noformat}

I think it's better to use the same syntax as Hive, to stay consistent 
with Hive. What do you say?
{noformat}
CREATE [TEMPORARY] FUNCTION function_name AS class_name
  [USING JAR|FILE|ARCHIVE 'file_uri' [, JAR|FILE|ARCHIVE 'file_uri'] ];
{noformat}
We could also support ADD/DELETE JAR statements that add jars to the 
hbase.dynamic.jars.dir path independently, before the function is created, 
so that a simplified CREATE FUNCTION can be used. This could be done as a 
later improvement. Suggestions?
{noformat}
ADD JAR[S] <filepath> [<filepath>]*
LIST JAR[S] [<filepath> ..]
DELETE JAR[S] [<filepath> ..]
{noformat}
{noformat}
CREATE [TEMPORARY] FUNCTION function_name AS class_name
{noformat}

3) Coming to function arguments: I think it's fine to support Phoenix data 
types as argument types, right?
Ex: CREATE FUNCTION MY_REVERSE(VARCHAR, CHAR) RETURNS VARCHAR
Or do we need to support Java data types? (That makes things more complex, 
e.g. mapping Java types to Phoenix types.) 

4) Do we need to allow specifying details for the arguments, such as not 
null, constant, max length, precision, and scale? Currently we provide 
these through annotations in the built-in function classes. These details 
help to validate whether a function call is proper. If we don't want to 
support this, the UDF developer should take care of these, the same as for 
built-in functions. 

> Support UDFs
> 
>
> Key: PHOENIX-538
> URL: https://issues.apache.org/jira/browse/PHOENIX-538
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Rajeshbabu Chintaguntla
>
> Phoenix allows built-in functions to be added (as described 
> [here](http://phoenix-hbase.blogspot.com/2013/04/how-to-add-your-own-built-in-function.html))
>  with the restriction that they must be in the phoenix jar. We should improve 
> on this and allow folks to declare new functions through a CREATE FUNCTION 
> command like this:
>   CREATE FUNCTION mdHash(anytype)
>   RETURNS binary(16)
>   LOCATION 'hdfs://path-to-my-jar' 'com.me.MDHashFunction'
> Since HBase supports loading jars dynamically, this would not be too 
> difficult. The function implementation class would be required to extend our 
> ScalarFunction base class. Here's how I could see it being implemented:
> * modify the phoenix grammar to support the new CREATE FUNCTION syntax
> * create a new UTFParseNode class to capture the parse state
> * add a new method to the MetaDataProtocol interface
> * add a new method in ConnectionQueryServices to invoke the MetaDataProtocol 
> method
> * add a new method in MetaDataClient to invoke the ConnectionQueryServices 
> method
> * persist functions in a new "SYSTEM.FUNCTION" table
> * add a new client-side representation to cache functions called PFunction
> * modify ColumnResolver to dynamically resolve a function in the same way we 
> dynamically resolve and load a table
> * create and register a new ExpressionType called UDFExpression
> * at parse time, check for the function name in the built in list first (as 
> is currently done), and if not found in the PFunction cache. If not found 
> there, then use the new UDFExpression as a placeholder and have the 
> ColumnResolver attempt to resolve it at compile time and throw an error if 
> unsuccessful.
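
The parse-time lookup order in the last bullet (built-in list first, then 
the PFunction cache, else a UDFExpression placeholder) can be sketched as a 
small, self-contained Java example. The names here (pFunctionCache, Kind, 
etc.) are illustrative, not Phoenix's actual classes:

```java
// Hedged sketch of the function-resolution order described above: a name
// is checked against the built-in list first, then a client-side cache of
// user-defined functions, and otherwise treated as an unresolved UDF
// placeholder to be resolved (or rejected) at compile time.
import java.util.Map;
import java.util.Set;

public class FunctionResolutionSketch {
    enum Kind { BUILT_IN, CACHED_UDF, UNRESOLVED_UDF }

    static Kind resolve(String name, Set<String> builtIns,
                        Map<String, String> pFunctionCache) {
        if (builtIns.contains(name)) {
            return Kind.BUILT_IN;       // known built-in function
        }
        if (pFunctionCache.containsKey(name)) {
            return Kind.CACHED_UDF;     // previously loaded, e.g. from SYSTEM.FUNCTION
        }
        return Kind.UNRESOLVED_UDF;     // placeholder; compile-time resolution or error
    }

    public static void main(String[] args) {
        Set<String> builtIns = Set.of("UPPER", "LOWER");
        Map<String, String> cache = Map.of("MDHASH", "com.me.MDHashFunction");
        System.out.println(resolve("UPPER", builtIns, cache));
        System.out.println(resolve("MDHASH", builtIns, cache));
        System.out.println(resolve("MYFUNC", builtIns, cache));
    }
}
```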





[jira] [Comment Edited] (PHOENIX-1611) Support ABS function

2015-03-22 Thread renmin (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14374914#comment-14374914
 ] 

renmin edited comment on PHOENIX-1611 at 3/22/15 12:55 PM:
---

Hi,
   Could I try to implement this function? Unlike PHOENIX-1661, there are 
many other functions to refer to.
   Thanks.


was (Author: liurm):
Hi,
   Could i try to implement this function? because there are many other 
functions  to refer to, unlike phoenix-1610.
   thanks.

> Support ABS function 
> -
>
> Key: PHOENIX-1611
> URL: https://issues.apache.org/jira/browse/PHOENIX-1611
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.0.0
> Environment: Support ABS Function. 
>Reporter: SeonghwanMoon
>Assignee: SeonghwanMoon
>






[jira] [Commented] (PHOENIX-1611) Support ABS function

2015-03-22 Thread renmin (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14374914#comment-14374914
 ] 

renmin commented on PHOENIX-1611:
-

Hi,
   Could I try to implement this function? Unlike PHOENIX-1610, there are 
many other functions to refer to.
   Thanks.

> Support ABS function 
> -
>
> Key: PHOENIX-1611
> URL: https://issues.apache.org/jira/browse/PHOENIX-1611
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.0.0
> Environment: Support ABS Function. 
>Reporter: SeonghwanMoon
>Assignee: SeonghwanMoon
>






The import org.apache.phoenix.end2end cannot be resolved

2015-03-22 Thread 刘仁敏
Hi,
There was a package-not-found error in the phoenix-core project after I 
imported Phoenix. I am sure that my Maven pom.xml is up to date and that I 
pulled the code again. Can anyone help me solve it? Thanks.