[jira] [Commented] (PHOENIX-1715) Implement Build-in math function Sign

2015-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14354721#comment-14354721
 ] 

ASF GitHub Bot commented on PHOENIX-1715:
-

Github user shuxiong commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/40#discussion_r26112698
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/SignFunction.java
 ---
@@ -0,0 +1,77 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.expression.function;
+
+import java.math.BigDecimal;
+import java.sql.SQLException;
+import java.util.List;
+
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.phoenix.expression.Expression;
+import org.apache.phoenix.parse.FunctionParseNode.Argument;
+import org.apache.phoenix.parse.FunctionParseNode.BuiltInFunction;
+import org.apache.phoenix.schema.tuple.Tuple;
+import org.apache.phoenix.schema.types.PDataType;
+import org.apache.phoenix.schema.types.PDecimal;
+import org.apache.phoenix.schema.types.PInteger;
+
+/**
+ *
+ * Base class for built-in SIGN function.
+ *
+ * @since 4.3.0
+ *
+ */
+@BuiltInFunction(name = SignFunction.NAME,
+ args = {
+@Argument(allowedTypes={PDecimal.class})
+}
+)
+public class SignFunction extends ScalarFunction {
--- End diff --

Hi James,

One question:

In PDataType.java,

  public abstract Object toObject(byte[] bytes, int offset, int length, 
PDataType actualType,
  SortOrder sortOrder, Integer maxLength, Integer scale);

What do SortOrder sortOrder, Integer maxLength, and Integer scale mean here?

---

I'd like to take the second solution, which introduces a new type 
PNumericType. 

In PNumericType we would have an abstract function getSign which, like 
toObject in PDataType, has its details implemented in each subclass 
(PInteger, PFloat, etc.).

How about the solution above?

Thanks.
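For illustration, here is a toy, self-contained sketch of that proposal. The class and method names mirror the comment above, but the signatures and the decoding step are placeholders, not the real Phoenix API:

{code}
// Hypothetical sketch only: the actual Phoenix types have different signatures and encodings.
abstract class PNumericType {
    // Like toObject() in PDataType: declared here, implemented by each numeric subclass.
    public abstract int getSign(byte[] bytes, int offset, int length);
}

class ToyPInteger extends PNumericType {
    @Override
    public int getSign(byte[] bytes, int offset, int length) {
        // Placeholder decoding (plain big-endian int); Phoenix's real PInteger encoding differs.
        int value = java.nio.ByteBuffer.wrap(bytes, offset, length).getInt();
        return Integer.signum(value);
    }
}
{code}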


 Implement Build-in math function Sign
 -

 Key: PHOENIX-1715
 URL: https://issues.apache.org/jira/browse/PHOENIX-1715
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Shuxiong Ye
Assignee: Shuxiong Ye

 Take a look at the typical math functions that are implemented in relational 
 database systems 
 (http://www.postgresql.org/docs/current/static/functions-math.html) and 
 implement the same for Phoenix in Java following this guide: 
 http://phoenix-hbase.blogspot.com/2013/04/how-to-add-your-own-built-in-function.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1715) Implement Build-in math function Sign

2015-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14354719#comment-14354719
 ] 

ASF GitHub Bot commented on PHOENIX-1715:
-

Github user shuxiong commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/40#discussion_r26112576
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/SignFunction.java
 ---
@@ -0,0 +1,77 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.expression.function;
+
+import java.math.BigDecimal;
+import java.sql.SQLException;
+import java.util.List;
+
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.phoenix.expression.Expression;
+import org.apache.phoenix.parse.FunctionParseNode.Argument;
+import org.apache.phoenix.parse.FunctionParseNode.BuiltInFunction;
+import org.apache.phoenix.schema.tuple.Tuple;
+import org.apache.phoenix.schema.types.PDataType;
+import org.apache.phoenix.schema.types.PDecimal;
+import org.apache.phoenix.schema.types.PInteger;
+
+/**
+ *
+ * Base class for built-in SIGN function.
+ *
+ * @since 4.3.0
+ *
+ */
+@BuiltInFunction(name = SignFunction.NAME,
+ args = {
+@Argument(allowedTypes={PDecimal.class})
+}
+)
+public class SignFunction extends ScalarFunction {
--- End diff --

Hi James,

One question:

In PDataType.java,

  public abstract Object toObject(byte[] bytes, int offset, int length, 
PDataType actualType,
  SortOrder sortOrder, Integer maxLength, Integer scale);

What do SortOrder sortOrder, Integer maxLength, and Integer scale mean here?

---

I'd like to take the second solution, which introduces a new type 
PNumericType. 

In PNumericType we would have an abstract function getSign which, like 
toObject in PDataType, has its details implemented in each subclass 
(PInteger, PFloat, etc.).

How about the solution above?

Thanks.


 Implement Build-in math function Sign
 -

 Key: PHOENIX-1715
 URL: https://issues.apache.org/jira/browse/PHOENIX-1715
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Shuxiong Ye
Assignee: Shuxiong Ye

 Take a look at the typical math functions that are implemented in relational 
 database systems 
 (http://www.postgresql.org/docs/current/static/functions-math.html) and 
 implement the same for Phoenix in Java following this guide: 
 http://phoenix-hbase.blogspot.com/2013/04/how-to-add-your-own-built-in-function.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-1715 Implement Build-in Math functio...

2015-03-10 Thread shuxiong
Github user shuxiong commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/40#discussion_r26112698
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/SignFunction.java
 ---
@@ -0,0 +1,77 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.expression.function;
+
+import java.math.BigDecimal;
+import java.sql.SQLException;
+import java.util.List;
+
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.phoenix.expression.Expression;
+import org.apache.phoenix.parse.FunctionParseNode.Argument;
+import org.apache.phoenix.parse.FunctionParseNode.BuiltInFunction;
+import org.apache.phoenix.schema.tuple.Tuple;
+import org.apache.phoenix.schema.types.PDataType;
+import org.apache.phoenix.schema.types.PDecimal;
+import org.apache.phoenix.schema.types.PInteger;
+
+/**
+ *
+ * Base class for built-in SIGN function.
+ *
+ * @since 4.3.0
+ *
+ */
+@BuiltInFunction(name = SignFunction.NAME,
+ args = {
+@Argument(allowedTypes={PDecimal.class})
+}
+)
+public class SignFunction extends ScalarFunction {
--- End diff --

Hi James,

One question:

In PDataType.java,

  public abstract Object toObject(byte[] bytes, int offset, int length, 
PDataType actualType,
  SortOrder sortOrder, Integer maxLength, Integer scale);

What do SortOrder sortOrder, Integer maxLength, and Integer scale mean here?

---

I'd like to take the second solution, which introduces a new type 
PNumericType. 

In PNumericType we would have an abstract function getSign which, like 
toObject in PDataType, has its details implemented in each subclass 
(PInteger, PFloat, etc.).

How about the solution above?

Thanks.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] phoenix pull request: PHOENIX-1715 Implement Build-in Math functio...

2015-03-10 Thread shuxiong
Github user shuxiong commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/40#discussion_r26112576
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/SignFunction.java
 ---
@@ -0,0 +1,77 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.expression.function;
+
+import java.math.BigDecimal;
+import java.sql.SQLException;
+import java.util.List;
+
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.phoenix.expression.Expression;
+import org.apache.phoenix.parse.FunctionParseNode.Argument;
+import org.apache.phoenix.parse.FunctionParseNode.BuiltInFunction;
+import org.apache.phoenix.schema.tuple.Tuple;
+import org.apache.phoenix.schema.types.PDataType;
+import org.apache.phoenix.schema.types.PDecimal;
+import org.apache.phoenix.schema.types.PInteger;
+
+/**
+ *
+ * Base class for built-in SIGN function.
+ *
+ * @since 4.3.0
+ *
+ */
+@BuiltInFunction(name = SignFunction.NAME,
+ args = {
+@Argument(allowedTypes={PDecimal.class})
+}
+)
+public class SignFunction extends ScalarFunction {
--- End diff --

Hi James,

One question:

In PDataType.java,

  public abstract Object toObject(byte[] bytes, int offset, int length, 
PDataType actualType,
  SortOrder sortOrder, Integer maxLength, Integer scale);

What do SortOrder sortOrder, Integer maxLength, and Integer scale mean here?

---

I'd like to take the second solution, which introduces a new type 
PNumericType. 

In PNumericType we would have an abstract function getSign which, like 
toObject in PDataType, has its details implemented in each subclass 
(PInteger, PFloat, etc.).

How about the solution above?

Thanks.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (PHOENIX-1705) implement ARRAY_APPEND built in function

2015-03-10 Thread Dumindu Buddhika (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dumindu Buddhika updated PHOENIX-1705:
--
Attachment: PHOENIX-1705_implement_ARRAY_APPEND_built_in_function.patch

Patch for ARRAY_APPEND.

Hi [~jamestaylor], I have written code to append elements to variable-length 
arrays as well and created some tests. Can you review the code :)? I have used 
the coerceBytes method with castable data types, as you mentioned earlier. I have 
tested the code with varchar, int, double, and bigint arrays. Tests run properly 
for them.

However, there is a problem I couldn't properly solve with fixed-length char 
arrays, since in a CHAR array every element has a previously defined fixed 
length, e.g.
{code}
CHAR(15)[]
{code}

When an element is appended to this kind of array, the second argument (the element to 
be appended) comes in as a varchar. So I need to convert this varchar to a byte 
array with a length of expression.getMaxLength() (in the above example it would be 
15). I tried to use the baseType.coerceBytes method with expression.getMaxLength() 
as desiredMaxLength, but it only gives a byte array containing the varchar (not 
of length expression.getMaxLength()). I can create an array of 
expression.getMaxLength() and fill the extra bytes in the CHAR array case, but 
I am not sure it is the best way to do it. Is there a way to achieve this?

I have done type checking in the constructor; it throws an 
IllegalArgumentException if type validation fails. 

The test case for the char arrays fails due to the problem I mentioned.
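For illustration, the "create an array of expression.getMaxLength() and fill the extra bytes" idea could look roughly like the standalone sketch below. Whether space-filling matches Phoenix's own CHAR semantics is an assumption here; a later message in this thread notes that PDataType.pad turned out to be the supported way to do this.

{code}
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class CharPadSketch {
    // Right-pads the element's bytes with spaces up to the declared CHAR length.
    static byte[] padToFixedLength(byte[] value, int maxLength) {
        if (value.length >= maxLength) {
            return value;
        }
        byte[] padded = Arrays.copyOf(value, maxLength);          // copies value, zero-fills the rest
        Arrays.fill(padded, value.length, maxLength, (byte) ' '); // overwrite the tail with spaces
        return padded;
    }

    public static void main(String[] args) {
        byte[] padded = padToFixedLength("abc".getBytes(StandardCharsets.UTF_8), 15);
        System.out.println(padded.length); // 15, matching CHAR(15)
    }
}
{code}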


 implement ARRAY_APPEND built in function
 

 Key: PHOENIX-1705
 URL: https://issues.apache.org/jira/browse/PHOENIX-1705
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Dumindu Buddhika
Assignee: Dumindu Buddhika
 Attachments: 
 PHOENIX-1705_implement_ARRAY_APPEND_built_in_function.patch, 
 PHOENIX-1705_implement_ARRAY_APPEND_built_in_function.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [jira] [Commented] (PHOENIX-1711) Improve performance of CSV loader

2015-03-10 Thread Sergey Belousov
It compiles fine for me (with some whitespace warnings),
but I cannot get the tests to pass;
it is stuck on
Running org.apache.phoenix.util.StringUtilTest
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.002 sec -
in org.apache.phoenix.util.StringUtilTest
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option
MaxPermSize=128m; support was removed in 8.0

But I am pretty sure it's just me (Windows 7, Java 8). Don't ask :)


On Tue, Mar 10, 2015 at 5:05 PM, James Taylor (JIRA) j...@apache.org
wrote:


 [
 https://issues.apache.org/jira/browse/PHOENIX-1711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355710#comment-14355710
 ]

 James Taylor commented on PHOENIX-1711:
 ---

 You need to get the latest from the 4.0 branch in our git repo:
 https://git-wip-us.apache.org/repos/asf/phoenix.git
 {code}
 git clone https://git-wip-us.apache.org/repos/asf/phoenix.git
 git checkout 4.0
 git apply PHOENIX-1711_4.0.patch
 mvn clean
 mvn package -DskipTests
 {code}

  Improve performance of CSV loader
  -
 
  Key: PHOENIX-1711
  URL: https://issues.apache.org/jira/browse/PHOENIX-1711
  Project: Phoenix
   Issue Type: Bug
 Reporter: James Taylor
  Attachments: PHOENIX-1711.patch, PHOENIX-1711_4.0.patch
 
 
  Here is a break-up of percentage execution time for some of the steps
 in the mapper:
  csvParser: 18%
  csvUpsertExecutor.execute(ImmutableList.of(csvRecord)): 39%
  PhoenixRuntime.getUncommittedDataIterator(conn, true): 9%
  while (uncommittedDataIterator.hasNext()): 15%
  Read IO & custom processing: 19%
  See details here: http://s.apache.org/6rl



 --
 This message was sent by Atlassian JIRA
 (v6.3.4#6332)



[jira] [Commented] (PHOENIX-1705) implement ARRAY_APPEND built in function

2015-03-10 Thread Dumindu Buddhika (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14356055#comment-14356055
 ] 

Dumindu Buddhika commented on PHOENIX-1705:
---

Great! Thank you.

 implement ARRAY_APPEND built in function
 

 Key: PHOENIX-1705
 URL: https://issues.apache.org/jira/browse/PHOENIX-1705
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Dumindu Buddhika
Assignee: Dumindu Buddhika
 Attachments: 
 PHOENIX-1705_implement_ARRAY_APPEND_built_in_function.patch, 
 PHOENIX-1705_implement_ARRAY_APPEND_built_in_function.patch, 
 PHOENIX-1705_implement_ARRAY_APPEND_built_in_function1.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1577) java.lang.IllegalArgumentException: nanos > 999999999 or < 0 while use Calendar

2015-03-10 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14356120#comment-14356120
 ] 

James Taylor commented on PHOENIX-1577:
---

+1. Thanks, [~samarthjain].

 java.lang.IllegalArgumentException: nanos > 999999999 or < 0 while use Calendar
 ---

 Key: PHOENIX-1577
 URL: https://issues.apache.org/jira/browse/PHOENIX-1577
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.2
Reporter: Kylin Soong
Assignee: Samarth Jain
 Fix For: 5.0.0, 4.3.1, 4.4

 Attachments: PHOENIX-1577.patch, PHOENIX-1577_v2.patch


 I use the code at link [1]; there is always a nanos > 999999999 or < 0 error thrown.
 If I execute an insert, the error looks like:
 ~~~
 Exception in thread "main" java.lang.IllegalArgumentException: nanos > 999999999 or < 0
   at java.sql.Timestamp.setNanos(Timestamp.java:386)
   at org.apache.phoenix.util.DateUtil.getTimestamp(DateUtil.java:142)
   at org.apache.phoenix.jdbc.PhoenixPreparedStatement.setTimestamp(PhoenixPreparedStatement.java:489)
 ~~~
 and for a select the error looks like:
 ~~~
 Exception in thread "main" java.lang.IllegalArgumentException: nanos > 999999999 or < 0
   at java.sql.Timestamp.setNanos(Timestamp.java:386)
   at org.apache.phoenix.util.DateUtil.getTimestamp(DateUtil.java:142)
   at org.apache.phoenix.jdbc.PhoenixResultSet.getTimestamp(PhoenixResultSet.java:638)
 ~~~
 Could this be a bug?
 [1] 
 https://github.com/kylinsoong/data/blob/master/phoenix-quickstart/src/test/java/org/apache/phoenix/examples/BugReproduce.java
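 For context, the bounds check itself lives in the JDK (java.sql.Timestamp.setNanos, visible at the top of the traces above), independent of Phoenix; a minimal standalone reproduction:
 {code}
 import java.sql.Timestamp;

 public class NanosBounds {
     public static void main(String[] args) {
         Timestamp ts = new Timestamp(System.currentTimeMillis());
         ts.setNanos(999999999);   // upper bound: accepted
         ts.setNanos(1000000000);  // out of range: throws
                                   // java.lang.IllegalArgumentException: nanos > 999999999 or < 0
     }
 }
 {code}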



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: working additions of pherf to phoenix as a p...

2015-03-10 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/41#discussion_r26189140
  
--- Diff: phoenix-pherf/cluster/pherf.sh ---
@@ -0,0 +1,33 @@
+#!/bin/bash
--- End diff --

Please convert this to a python script and put it in phoenix-bin instead. 
All our scripts live there and are in python so they work reliably on Windows.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-1677) Immutable index deadlocks when number of guideposts are one half of thread pool size

2015-03-10 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14356133#comment-14356133
 ] 

Samarth Jain commented on PHOENIX-1677:
---

+1 for getting it in 4.3.1. Although, it would be prudent to see if this 
results in any perf impact especially for concurrent queries with high degree 
of parallelism before this is committed.

 Immutable index deadlocks when number of guideposts are one half of thread 
 pool size
 

 Key: PHOENIX-1677
 URL: https://issues.apache.org/jira/browse/PHOENIX-1677
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0, 4.3
Reporter: Mujtaba Chohan
Assignee: James Taylor
 Attachments: PHOENIX-1677_utest.patch, PHOENIX-1677_v2.patch


 If the total number of parallel iterators + mutation count exceeds the Phoenix thread 
 pool size, then the immutable index remains at 0 rows with no activity on client 
 or server.
 Ex. Only with a guide post count of 64 or lower does the immutable index get built 
 with the default 128 thread pool. By changing the guide post width to get more than 64 
 guideposts, the index fails to build. Also, if the thread pool size is lowered from 
 128, then the index fails to build for 64 guide posts.
 Let me know if you need a utest as I think it would be easy to repro in it 
 too.
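 As a generic illustration of this failure mode (not Phoenix code): a task that blocks waiting on work submitted to the same bounded pool can exhaust it, which mirrors the "iterators + mutations exceed the thread pool size" condition described above.
 {code}
 import java.util.concurrent.*;

 public class PoolExhaustionDemo {
     public static void main(String[] args) throws Exception {
         ExecutorService pool = Executors.newFixedThreadPool(1); // stand-in for the bounded Phoenix pool
         Future<?> outer = pool.submit(() -> {
             Future<String> inner = pool.submit(() -> "done");   // queued behind the outer task
             return inner.get();                                 // blocks: no free thread can run inner
         });
         try {
             outer.get(2, TimeUnit.SECONDS);
         } catch (TimeoutException e) {
             System.out.println("deadlocked: outer waits on inner, inner waits for a thread");
         }
         pool.shutdownNow();
     }
 }
 {code}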



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: working additions of pherf to phoenix as a p...

2015-03-10 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/41#discussion_r26189106
  
--- Diff: phoenix-pherf/README.md ---
@@ -0,0 +1,105 @@
+Pherf is a performance test framework that exercises HBase through Apache 
Phoenix, a SQL layer interface.
--- End diff --

This readme is nice. Would you mind creating a markdown version of it for 
our site and adding a menu item under Using (in site.xml) that points to it? 
See the About menu for a link for how to update the website.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-1577) java.lang.IllegalArgumentException: nanos > 999999999 or < 0 while use Calendar

2015-03-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14356145#comment-14356145
 ] 

Hudson commented on PHOENIX-1577:
-

SUCCESS: Integrated in Phoenix-master #610 (See 
[https://builds.apache.org/job/Phoenix-master/610/])
PHOENIX-1577 addendum - fix bug in  resultSet.getTimeStamp(int colIndex, 
Calendar cal) too (samarth.jain: rev 80a50c964e94608c263471c1eca0c81e44ea5b19)
* phoenix-core/src/main/java/org/apache/phoenix/jdbc/PhoenixResultSet.java
* 
phoenix-core/src/it/java/org/apache/phoenix/end2end/TimezoneOffsetFunctionIT.java


 java.lang.IllegalArgumentException: nanos > 999999999 or < 0 while use Calendar
 ---

 Key: PHOENIX-1577
 URL: https://issues.apache.org/jira/browse/PHOENIX-1577
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.2
Reporter: Kylin Soong
Assignee: Samarth Jain
 Fix For: 5.0.0, 4.3.1, 4.4

 Attachments: PHOENIX-1577.patch, PHOENIX-1577_v2.patch


 I use the code at link [1]; there is always a nanos > 999999999 or < 0 error thrown.
 If I execute an insert, the error looks like:
 ~~~
 Exception in thread "main" java.lang.IllegalArgumentException: nanos > 999999999 or < 0
   at java.sql.Timestamp.setNanos(Timestamp.java:386)
   at org.apache.phoenix.util.DateUtil.getTimestamp(DateUtil.java:142)
   at org.apache.phoenix.jdbc.PhoenixPreparedStatement.setTimestamp(PhoenixPreparedStatement.java:489)
 ~~~
 and for a select the error looks like:
 ~~~
 Exception in thread "main" java.lang.IllegalArgumentException: nanos > 999999999 or < 0
   at java.sql.Timestamp.setNanos(Timestamp.java:386)
   at org.apache.phoenix.util.DateUtil.getTimestamp(DateUtil.java:142)
   at org.apache.phoenix.jdbc.PhoenixResultSet.getTimestamp(PhoenixResultSet.java:638)
 ~~~
 Could this be a bug?
 [1] 
 https://github.com/kylinsoong/data/blob/master/phoenix-quickstart/src/test/java/org/apache/phoenix/examples/BugReproduce.java



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: working additions of pherf to phoenix as a p...

2015-03-10 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/41#discussion_r26189030
  
--- Diff: phoenix-assembly/pom.xml ---
@@ -20,131 +20,139 @@
 
 -->
 
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
-  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
-  <modelVersion>4.0.0</modelVersion>
-  <parent>
-    <groupId>org.apache.phoenix</groupId>
-    <artifactId>phoenix</artifactId>
-    <version>5.0.0-SNAPSHOT</version>
-  </parent>
-  <artifactId>phoenix-assembly</artifactId>
-  <name>Phoenix Assembly</name>
-  <description>Assemble Phoenix artifacts</description>
-  <packaging>pom</packaging>
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+ xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <parent>
+        <groupId>org.apache.phoenix</groupId>
+        <artifactId>phoenix</artifactId>
+        <version>5.0.0-SNAPSHOT</version>
+    </parent>
+    <artifactId>phoenix-assembly</artifactId>
+    <name>Phoenix Assembly</name>
+    <description>Assemble Phoenix artifacts</description>
+    <packaging>pom</packaging>
 
-  <build>
-    <plugins>
-      <plugin>
-        <artifactId>maven-assembly-plugin</artifactId>
-        <executions>
-          <execution>
-            <id>client</id>
-            <phase>package</phase>
-            <goals>
-              <goal>single</goal>
-            </goals>
-            <configuration>
-              <attach>false</attach>
-              <finalName>phoenix-${project.version}</finalName>
-              <archive>
-                <index>true</index>
-                <manifest>
-                  <addClasspath>true</addClasspath>
-                  <addDefaultImplementationEntries>true</addDefaultImplementationEntries>
-                  <addDefaultSpecificationEntries>true</addDefaultSpecificationEntries>
-                </manifest>
-              </archive>
-              <descriptors>
-                <descriptor>src/build/client.xml</descriptor>
-              </descriptors>
-            </configuration>
-          </execution>
-          <execution>
-            <id>package-to-tar</id>
-            <phase>package</phase>
-            <goals>
-              <goal>single</goal>
-            </goals>
-            <configuration>
-              <finalName>phoenix-${project.version}</finalName>
-              <attach>false</attach>
-              <tarLongFileMode>gnu</tarLongFileMode>
-              <appendAssemblyId>false</appendAssemblyId>
-              <descriptors>
-                <descriptor>src/build/package-to-tar-all.xml</descriptor>
-              </descriptors>
-              <tarLongFileMode>posix</tarLongFileMode>
-            </configuration>
-          </execution>
-          <execution>
-            <id>package-to-source-tar</id>
-            <phase>package</phase>
-            <goals>
-              <goal>single</goal>
-            </goals>
-            <configuration>
-              <finalName>phoenix-${project.version}-source</finalName>
-              <attach>false</attach>
-              <tarLongFileMode>gnu</tarLongFileMode>
-              <appendAssemblyId>false</appendAssemblyId>
-              <descriptors>
-                <descriptor>src/build/src.xml</descriptor>
-              </descriptors>
-              <tarLongFileMode>posix</tarLongFileMode>
-            </configuration>
-          </execution>
-          <execution>
-            <id>client-minimal</id>
-            <phase>package</phase>
-            <goals>
-              <goal>single</goal>
-            </goals>
-            <configuration>
-              <finalName>phoenix-${project.version}</finalName>
-              <attach>false</attach>
-              <appendAssemblyId>true</appendAssemblyId>
-              <descriptors>
-                <!--build the phoenix client jar, but without HBase code. -->
-                <descriptor>src/build/client-without-hbase.xml</descriptor>
-                <!-- build the phoenix client jar, but without HBase (or its depenencies). -->
-                <descriptor>src/build/client-minimal.xml</descriptor>
-                <!-- build the phoenix server side jar, that includes phoenix-hadoopX-compat, phoenix-hadoop-compat and antlr -->
-                <descriptor>src/build/server.xml</descriptor>
-                <!-- build the phoenix server side jar, that includes phoenix-hadoopX-compat and phoenix-hadoop-compat. -->
-                <descriptor>src/build/server-without-antlr.xml</descriptor>
-              </descriptors>
-            </configuration>
-          </execution>

[jira] [Commented] (PHOENIX-1709) And expression of primary key RVCs can not compile

2015-03-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355165#comment-14355165
 ] 

Hudson commented on PHOENIX-1709:
-

SUCCESS: Integrated in Phoenix-master #609 (See 
[https://builds.apache.org/job/Phoenix-master/609/])
PHOENIX-1709 And expression of primary key RVCs can not compile (jtaylor: rev 
dee6f9d3850e80ea6bb2bd9d002b4acba15750ea)
* phoenix-core/src/main/java/org/apache/phoenix/schema/RowKeySchema.java
* phoenix-core/src/test/java/org/apache/phoenix/compile/WhereOptimizerTest.java
* phoenix-core/src/main/java/org/apache/phoenix/compile/WhereOptimizer.java


 And expression of primary key RVCs can not compile
 --

 Key: PHOENIX-1709
 URL: https://issues.apache.org/jira/browse/PHOENIX-1709
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: James Taylor
 Attachments: PHOENIX-1709.patch, PHOENIX-1709_v2.patch, 
 PHOENIX-1709_v3.patch, PHOENIX-1709_v4.patch, PHOENIX-1709_v5.patch


   1. create table t (a integer not null, b integer not null, c integer constraint pk primary key (a,b));
   2. select c from t where a in (1,2) and b = 3 and (a,b) in ( (1,2) , (1,3));
   I got an exception on compile:
   java.lang.IllegalArgumentException
 at com.google.common.base.Preconditions.checkArgument(Preconditions.java:76)
 at org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor$KeySlot.intersect(WhereOptimizer.java:955)
 at org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor.intersectSlots(WhereOptimizer.java:506)
 at org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor.andKeySlots(WhereOptimizer.java:551)
 at org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor.visitLeave(WhereOptimizer.java:725)
 at org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor.visitLeave(WhereOptimizer.java:349)
 at org.apache.phoenix.expression.AndExpression.accept(AndExpression.java:100)
 at org.apache.phoenix.compile.WhereOptimizer.pushKeyExpressionsToScan(WhereOptimizer.java:117)
 at org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:105)
 at org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:324)
 at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:132)
 at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:296)
 at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:284)
 at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:208)
 at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:204)
 at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
 at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:204)
 at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:967)
 at sqlline.SqlLine$Commands.execute(SqlLine.java:3673)
 at sqlline.SqlLine$Commands.sql(SqlLine.java:3584)
 at sqlline.SqlLine.dispatch(SqlLine.java:821)
 at sqlline.SqlLine.begin(SqlLine.java:699)
 at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
 at sqlline.SqlLine.main(SqlLine.java:424)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-1724) Update MetaDataProtocol patch version for 4.3.1

2015-03-10 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-1724.
---
   Resolution: Fixed
Fix Version/s: 4.3.1
 Assignee: James Taylor

 Update MetaDataProtocol patch version for 4.3.1
 ---

 Key: PHOENIX-1724
 URL: https://issues.apache.org/jira/browse/PHOENIX-1724
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: James Taylor
 Fix For: 4.3.1

 Attachments: PHOENIX-1724.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-1653) Allow option to pass peer zookeeper address to load data into a target cluster in Map Reduce api

2015-03-10 Thread Geoffrey Jacoby (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355343#comment-14355343
 ] 

Geoffrey Jacoby edited comment on PHOENIX-1653 at 3/10/15 6:07 PM:
---

Revised patch for PHOENIX-1653 incorporating feedback. In particular, there are 
now dedicated getInputConnection and getOutputConnection methods in 
ConnectionUtil (and references to the former getConnection method have been 
updated), plus I've cleaned up the Javadoc and formatting issues that the code 
review identified.  


was (Author: gjacoby):
Revised patch for PHOENIX-1653 incorporating feedback. In particular, there are 
now dedicated getInputConnection and getOutputConnection methods in 
ConnectionUtil (and references to the former getConnection method have been 
updated), plus I've cleaned up some of the Javadoc and formatting issues that 
the code review identified.  

 Allow option to pass peer zookeeper address to load data into a target 
 cluster in Map Reduce api
 

 Key: PHOENIX-1653
 URL: https://issues.apache.org/jira/browse/PHOENIX-1653
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 5.0.0
Reporter: maghamravikiran
  Labels: newbie, patch
 Attachments: PHOENIX-1653.patch, PHOENIX-1653v2.patch


 Provide an option to pass the peer zookeeper address within a MapReduce job 
 where PhoenixInputFormat reads from one HBase cluster, and 
 PhoenixOutputFormat writes to a different cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1709) And expression of primary key RVCs can not compile

2015-03-10 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355331#comment-14355331
 ] 

Samarth Jain commented on PHOENIX-1709:
---

+1 on getting this in for 4.3.1 as well.

 And expression of primary key RVCs can not compile
 --

 Key: PHOENIX-1709
 URL: https://issues.apache.org/jira/browse/PHOENIX-1709
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: James Taylor
 Fix For: 3.3.1, 4.4

 Attachments: PHOENIX-1709.patch, PHOENIX-1709_v2.patch, 
 PHOENIX-1709_v3.patch, PHOENIX-1709_v4.patch, PHOENIX-1709_v5.patch


   1. create table t (a integer not null, b integer not null, c integer constraint pk primary key (a,b));
   2. select c from t where a in (1,2) and b = 3 and (a,b) in ( (1,2) , (1,3));
   I got an exception on compile:
   java.lang.IllegalArgumentException
 at com.google.common.base.Preconditions.checkArgument(Preconditions.java:76)
 at org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor$KeySlot.intersect(WhereOptimizer.java:955)
 at org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor.intersectSlots(WhereOptimizer.java:506)
 at org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor.andKeySlots(WhereOptimizer.java:551)
 at org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor.visitLeave(WhereOptimizer.java:725)
 at org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor.visitLeave(WhereOptimizer.java:349)
 at org.apache.phoenix.expression.AndExpression.accept(AndExpression.java:100)
 at org.apache.phoenix.compile.WhereOptimizer.pushKeyExpressionsToScan(WhereOptimizer.java:117)
 at org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:105)
 at org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:324)
 at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:132)
 at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:296)
 at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:284)
 at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:208)
 at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:204)
 at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
 at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:204)
 at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:967)
 at sqlline.SqlLine$Commands.execute(SqlLine.java:3673)
 at sqlline.SqlLine$Commands.sql(SqlLine.java:3584)
 at sqlline.SqlLine.dispatch(SqlLine.java:821)
 at sqlline.SqlLine.begin(SqlLine.java:699)
 at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
 at sqlline.SqlLine.main(SqlLine.java:424)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1691) Allow settting sampling rate while enabling tracing.

2015-03-10 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355368#comment-14355368
 ] 

James Taylor commented on PHOENIX-1691:
---

{quote}
Currently, whichever sampler is chosen first, we stick with that. I am thinking 
whether we can throw an exception in this case, to tell the user that tracing is 
already enabled and that, to change the sampling rate, they should disable tracing 
and re-enable it with the new sampling rate. What do you say?
{quote}
If possible, it'd be nice to be able to update the sampling rate on the 
connection, rather than keeping the first one. If that's not possible/feasible, 
then throwing an exception is the next best option.

 Allow settting sampling rate while enabling tracing.
 

 Key: PHOENIX-1691
 URL: https://issues.apache.org/jira/browse/PHOENIX-1691
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 5.0.0, 4.4

 Attachments: PHOENIX-1691.patch


 Now we can dynamically enable/disable tracing from query. We should also be 
 able to set sampling rate while enabling tracing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1653) Allow option to pass peer zookeeper address to load data into a target cluster in Map Reduce api

2015-03-10 Thread Geoffrey Jacoby (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-1653:
-
Attachment: PHOENIX-1653v2.patch

Revised patch for PHOENIX-1653 incorporating feedback. In particular, there are 
now dedicated getInputConnection and getOutputConnection methods in 
ConnectionUtil (and references to the former getConnection method have been 
updated), plus I've cleaned up some of the Javadoc and formatting issues that 
the code review identified.  
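As a rough illustration of the idea (the method names follow the comment above, but the parameters and JDBC-URL handling here are hypothetical, not the actual ConnectionUtil code):

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ConnectionUtilSketch {
    // Connection to the cluster PhoenixInputFormat reads from.
    public static Connection getInputConnection(String inputZkQuorum) throws SQLException {
        return DriverManager.getConnection("jdbc:phoenix:" + inputZkQuorum);
    }

    // Connection to the (possibly different) peer cluster PhoenixOutputFormat writes to.
    public static Connection getOutputConnection(String outputZkQuorum) throws SQLException {
        return DriverManager.getConnection("jdbc:phoenix:" + outputZkQuorum);
    }
}
{code}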

 Allow option to pass peer zookeeper address to load data into a target 
 cluster in Map Reduce api
 

 Key: PHOENIX-1653
 URL: https://issues.apache.org/jira/browse/PHOENIX-1653
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 5.0.0
Reporter: maghamravikiran
  Labels: newbie, patch
 Attachments: PHOENIX-1653.patch, PHOENIX-1653v2.patch


 Provide an option to pass the peer zookeeper address within a MapReduce job 
 where PhoenixInputFormat reads from one HBase cluster, and 
 PhoenixOutputFormat writes to a different cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1709) And expression of primary key RVCs can not compile

2015-03-10 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1709:
--
Fix Version/s: 4.3.1

 And expression of primary key RVCs can not compile
 --

 Key: PHOENIX-1709
 URL: https://issues.apache.org/jira/browse/PHOENIX-1709
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: James Taylor
 Fix For: 3.3.1, 4.3.1, 4.4

 Attachments: PHOENIX-1709.patch, PHOENIX-1709_v2.patch, 
 PHOENIX-1709_v3.patch, PHOENIX-1709_v4.patch, PHOENIX-1709_v5.patch


   1. create table t (a integer not null, b integer not null, c integer constraint pk primary key (a,b));
   2. select c from t where a in (1,2) and b = 3 and (a,b) in ( (1,2) , (1,3));
   I got an exception on compile:
   java.lang.IllegalArgumentException
 at com.google.common.base.Preconditions.checkArgument(Preconditions.java:76)
 at org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor$KeySlot.intersect(WhereOptimizer.java:955)
 at org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor.intersectSlots(WhereOptimizer.java:506)
 at org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor.andKeySlots(WhereOptimizer.java:551)
 at org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor.visitLeave(WhereOptimizer.java:725)
 at org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor.visitLeave(WhereOptimizer.java:349)
 at org.apache.phoenix.expression.AndExpression.accept(AndExpression.java:100)
 at org.apache.phoenix.compile.WhereOptimizer.pushKeyExpressionsToScan(WhereOptimizer.java:117)
 at org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:105)
 at org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:324)
 at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:132)
 at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:296)
 at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:284)
 at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:208)
 at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:204)
 at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
 at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:204)
 at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:967)
 at sqlline.SqlLine$Commands.execute(SqlLine.java:3673)
 at sqlline.SqlLine$Commands.sql(SqlLine.java:3584)
 at sqlline.SqlLine.dispatch(SqlLine.java:821)
 at sqlline.SqlLine.begin(SqlLine.java:699)
 at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
 at sqlline.SqlLine.main(SqlLine.java:424)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1724) Update MetaDataProtocol patch version for 4.3.1

2015-03-10 Thread James Taylor (JIRA)
James Taylor created PHOENIX-1724:
-

 Summary: Update MetaDataProtocol patch version for 4.3.1
 Key: PHOENIX-1724
 URL: https://issues.apache.org/jira/browse/PHOENIX-1724
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1677) Immutable index deadlocks when number of guideposts are one half of thread pool size

2015-03-10 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1677?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14356078#comment-14356078
 ] 

James Taylor commented on PHOENIX-1677:
---

Test looks good, but I think ImmutableIndexIT (or its own class) may be a 
better home. Does it repro with the test, and does my fix then fix it?

[~samarthjain] - this may be a potential one to get into 4.3.1. What do you 
think, [~lhofhansl]?

 Immutable index deadlocks when number of guideposts are one half of thread 
 pool size
 

 Key: PHOENIX-1677
 URL: https://issues.apache.org/jira/browse/PHOENIX-1677
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0, 4.3
Reporter: Mujtaba Chohan
Assignee: James Taylor
 Attachments: PHOENIX-1677_utest.patch, PHOENIX-1677_v2.patch


 If the total number of parallel iterators + mutation count exceeds the Phoenix thread 
 pool size, then the immutable index remains at 0 rows with no activity on client 
 or server.
 Ex. Only with a guide post count of 64 or lower does the immutable index get built 
 with the default 128 thread pool. By changing the guide post width to get more than 64 
 guideposts, the index fails to build. Also, if the thread pool size is lowered from 
 128, then the index fails to build for 64 guide posts.
 Let me know if you need a utest as I think it would be easy to repro in it 
 too.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: working additions of pherf to phoenix as a p...

2015-03-10 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/41#discussion_r26189248
  
--- Diff: 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/loaddata/DataLoader.java 
---
@@ -0,0 +1,366 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *   or more contributor license agreements.  See the NOTICE file
+ *   distributed with this work for additional information
+ *   regarding copyright ownership.  The ASF licenses this file
+ *   to you under the Apache License, Version 2.0 (the
+ *   "License"); you may not use this file except in compliance
+ *   with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 
implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License.
+ */
+
+package org.apache.phoenix.pherf.loaddata;
+
+import java.math.BigDecimal;
+import java.sql.Connection;
+import java.sql.Date;
+import java.sql.PreparedStatement;
+import java.sql.SQLException;
+import java.sql.Types;
+import java.text.SimpleDateFormat;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Properties;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+
+import org.apache.phoenix.pherf.result.ResultUtil;
+import org.apache.phoenix.pherf.util.ResourceList;
+import org.apache.phoenix.pherf.util.RowCalculator;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.phoenix.pherf.PherfConstants;
+import org.apache.phoenix.pherf.configuration.Column;
+import org.apache.phoenix.pherf.configuration.DataModel;
+import org.apache.phoenix.pherf.configuration.Scenario;
+import org.apache.phoenix.pherf.configuration.XMLConfigParser;
+import org.apache.phoenix.pherf.exception.PherfException;
+import org.apache.phoenix.pherf.result.DataLoadThreadTime;
+import org.apache.phoenix.pherf.result.DataLoadTimeSummary;
+import org.apache.phoenix.pherf.rules.DataValue;
+import org.apache.phoenix.pherf.rules.RulesApplier;
+import org.apache.phoenix.pherf.util.PhoenixUtil;
+
+public class DataLoader {
+private static final Logger logger = 
LoggerFactory.getLogger(DataLoader.class);
+private final PhoenixUtil pUtil = new PhoenixUtil();
+private final XMLConfigParser parser;
+private final RulesApplier rulesApplier;
+private final ResultUtil resultUtil;
+private final ExecutorService pool;
+private final Properties properties;
+
+private final int threadPoolSize;
+private final int batchSize;
+
+public DataLoader(XMLConfigParser parser) throws Exception {
+this(new ResourceList().getProperties(), parser);
+}
+
+/**
+ * Default the writers to use up all available cores for threads.
+ *
+ * @param parser
+ * @throws Exception
+ */
+public DataLoader(Properties properties, XMLConfigParser parser) 
throws Exception {
+this.parser = parser;
+this.properties = properties;
+this.rulesApplier = new RulesApplier(this.parser);
+this.resultUtil = new ResultUtil();
+int size = Integer.parseInt(properties.getProperty("pherf.default.dataloader.threadpool"));
+this.threadPoolSize = (size == 0) ? Runtime.getRuntime().availableProcessors() : size;
+this.pool = Executors.newFixedThreadPool(this.threadPoolSize);
+String bSize = properties.getProperty("pherf.default.dataloader.batchsize");
+this.batchSize = (bSize == null) ? PherfConstants.DEFAULT_BATCH_SIZE : Integer.parseInt(bSize);
+}
+
+public void execute() throws Exception {
+try {
+DataModel model = getParser().getDataModels().get(0);
+DataLoadTimeSummary dataLoadTimeSummary = new 
DataLoadTimeSummary();
+DataLoadThreadTime dataLoadThreadTime = new 
DataLoadThreadTime();
+
+for (Scenario scenario : getParser().getScenarios()) {
+List<Future> writeBatches = new ArrayList<Future>();
+logger.info("\nLoading " + scenario.getRowCount() + " rows for " + scenario.getTableName());
+long start = System.currentTimeMillis();
+
+RowCalculator rowCalculator = new 

[GitHub] phoenix pull request: working additions of pherf to phoenix as a p...

2015-03-10 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/41#discussion_r26189190
  
--- Diff: 
phoenix-pherf/src/main/java/org/apache/phoenix/pherf/PherfConstants.java ---
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ *   or more contributor license agreements.  See the NOTICE file
+ *   distributed with this work for additional information
+ *   regarding copyright ownership.  The ASF licenses this file
+ *   to you under the Apache License, Version 2.0 (the
+ *   "License"); you may not use this file except in compliance
+ *   with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ *   Unless required by applicable law or agreed to in writing, software
+ *   distributed under the License is distributed on an "AS IS" BASIS,
+ *   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 
implied.
+ *   See the License for the specific language governing permissions and
+ *   limitations under the License.
+ */
+
+package org.apache.phoenix.pherf;
+
+public class PherfConstants {
+public static final int DEFAULT_THREAD_POOL_SIZE = 10;
+public static final int DEFAULT_BATCH_SIZE = 1000;
+public static final String DEFAULT_DATE_PATTERN = "yyyy-MM-dd HH:mm:ss.SSS";
+public static final String DEFAULT_FILE_PATTERN = ".*scenario.xml";
+public static final String RESOURCE_SCENARIO = "/scenario";
+public static final String SCENARIO_ROOT_PATTERN = ".*" + PherfConstants.RESOURCE_SCENARIO.substring(1) + ".*";
+public static final String SCHEMA_ROOT_PATTERN = ".*";
+public static final String PHERF_PROPERTIES = "pherf.properties";
+   public static final String RESULT_DIR = "RESULTS";
--- End diff --

Minor: indenting


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] phoenix pull request: working additions of pherf to phoenix as a p...

2015-03-10 Thread JamesRTaylor
Github user JamesRTaylor commented on the pull request:

https://github.com/apache/phoenix/pull/41#issuecomment-78206018
  
Fantastic work, Cody and Mujtaba. Would be good if you reviewed this too, 
@mujtabachohan. As a follow-on check-in:
* Change your tests that rely on a cluster to use our BaseTest class instead 
(so they use the mini cluster instead of relying on a real cluster).
* For these tests, put them under src/it (for integration tests), as these 
are the longer running tests that run on maven verify.
* Add a markdown page to our website so folks know how to use it.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-1718) Unable to find cached index metadata during the stablity test with phoenix

2015-03-10 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14356350#comment-14356350
 ] 

James Taylor commented on PHOENIX-1718:
---

[~rajeshbabu] - any ideas/advice?

 Unable to find cached index metadata during the stablity test with phoenix
 --

 Key: PHOENIX-1718
 URL: https://issues.apache.org/jira/browse/PHOENIX-1718
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.2
 Environment: linux os ( 128G ram,48T disk,24 cores) * 8
 Hadoop 2.5.1
 HBase 0.98.7
 Phoenix 4.2.1
Reporter: wuchengzhi
Priority: Critical

 I am running a stability test with Phoenix 4.2.1, but the regionservers became 
 very slow after 4 hours, and I found some error logs in the regionserver log 
 file.
 In this scenario, the cluster has 8 machines (128G RAM, 24 cores, 48T disk). I 
 set up 2 regionservers on each machine (16 regionservers in total). 
 1. Create 8 tables, TEST_USER0 to TEST_USER7, each with an index:
 create table TEST_USER0 (id varchar primary key , attr1 varchar, attr2 
 varchar,attr3 varchar,attr4 varchar,attr5 varchar,attr6 integer,attr7 
 integer,attr8 integer,attr9 integer,attr10 integer )  
 DATA_BLOCK_ENCODING='FAST_DIFF',VERSIONS=1,BLOOMFILTER='ROW',COMPRESSION='LZ4',BLOCKSIZE
  = '65536',SALT_BUCKETS=32;
 create local index TEST_USER_INDEX0 on 
 TEST5.TEST_USER0(attr1,attr2,attr3,attr4,attr5,attr6,attr7,attr8,attr9,attr10);
 
 2. Deploy a Phoenix client on each machine to upsert data into the tables (client1 
 upserts into TEST_USER0, client2 upserts into TEST_USER1, and so on).
 Each Phoenix client starts 6 threads, each thread upserts 10,000 rows in 
 a batch, and each thread will upsert 500,000,000 rows in total.
 The 8 clients ran at the same time.
 The log is as below. Running 4 hours later, there were about 1,000,000,000 rows 
 in HBase, the error occurred frequently at about 4 hours and 50 minutes in, 
 and the rps became very slow, less than 10,000 (7, in normal).
 2015-03-09 19:15:13,337 ERROR 
 [B.DefaultRpcServer.handler=2,queue=2,port=60022] parallel.BaseTaskRunner: 
 Found a failed task because: org.apache.hadoop.hbase.DoNotRetryIOException: 
 ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find cached index metadata. 
  key=-1715879467965695792 
 region=TEST5.TEST_USER6,\x08,1425881401238.aacbf69ea1156d403a4a54810cba15d6. 
 Index update failed
 java.util.concurrent.ExecutionException: 
 org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 2008 (INT10): ERROR 2008 
 (INT10): Unable to find cached index metadata.  key=-1715879467965695792 
 region=TEST5.TEST_USER6,\x08,1425881401238.aacbf69ea1156d403a4a54810cba15d6. 
 Index update failed
 at 
 com.google.common.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:289)
 at 
 com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:276)
 at 
 com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:111)
 at 
 org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submit(BaseTaskRunner.java:66)
 at 
 org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submitUninterruptible(BaseTaskRunner.java:99)
 at 
 org.apache.phoenix.hbase.index.builder.IndexBuildManager.getIndexUpdate(IndexBuildManager.java:140)
 at 
 org.apache.phoenix.hbase.index.Indexer.preBatchMutateWithExceptions(Indexer.java:274)
 at 
 org.apache.phoenix.hbase.index.Indexer.preBatchMutate(Indexer.java:203)
 at 
 org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$35.call(RegionCoprocessorHost.java:881)
 at 
 org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1522)
 at 
 org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1597)
 at 
 org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1554)
 at 
 org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preBatchMutate(RegionCoprocessorHost.java:877)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:2476)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2263)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2215)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2219)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.doBatchOp(HRegionServer.java:4376)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.doNonAtomicRegionMutation(HRegionServer.java:3580)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3469)
 at 
 

[GitHub] phoenix pull request: PHOENIX-1715 Implement Build-in Math functio...

2015-03-10 Thread shuxiong
Github user shuxiong commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/40#discussion_r26134544
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/SignFunction.java
 ---
@@ -0,0 +1,77 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.expression.function;
+
+import java.math.BigDecimal;
+import java.sql.SQLException;
+import java.util.List;
+
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.phoenix.expression.Expression;
+import org.apache.phoenix.parse.FunctionParseNode.Argument;
+import org.apache.phoenix.parse.FunctionParseNode.BuiltInFunction;
+import org.apache.phoenix.schema.tuple.Tuple;
+import org.apache.phoenix.schema.types.PDataType;
+import org.apache.phoenix.schema.types.PDecimal;
+import org.apache.phoenix.schema.types.PInteger;
+
+/**
+ *
+ * Base class for built-in SIGN function.
+ *
+ * @since 4.3.0
+ *
+ */
+@BuiltInFunction(name = SignFunction.NAME,
+ args = {
+@Argument(allowedTypes={PDecimal.class})
+}
+)
+public class SignFunction extends ScalarFunction {
--- End diff --

Sorry, I mistakenly closed this request. 


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (PHOENIX-1705) implement ARRAY_APPEND built in function

2015-03-10 Thread Dumindu Buddhika (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dumindu Buddhika updated PHOENIX-1705:
--
Attachment: PHOENIX-1705_implement_ARRAY_APPEND_built_in_function1.patch

New patch for implementation of ARRAY_APPEND

[~jamestaylor]

For the problem I mentioned earlier, after some digging I found the PDataType.pad 
method, which solved the problem. I hope that is the correct way to go about it. 
The new patch is attached here; this fix passes the test case for CHAR arrays.
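
As a side note, here is a tiny standalone illustration of the padding issue (plain Java, not the actual PDataType.pad API; padding with trailing spaces is an assumption of the sketch):

{code}
import java.util.Arrays;

// Fixed-width CHAR values stored in an array must all be padded to the
// declared max length; this sketch pads with trailing spaces.
public class CharPaddingExample {
    static byte[] padWithSpaces(byte[] value, int maxLength) {
        byte[] padded = Arrays.copyOf(value, maxLength);
        Arrays.fill(padded, value.length, maxLength, (byte) ' ');
        return padded;
    }
    public static void main(String[] args) {
        byte[] padded = padWithSpaces("ab".getBytes(), 5);
        System.out.println("[" + new String(padded) + "]"); // [ab   ]
    }
}
{code}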


 implement ARRAY_APPEND built in function
 

 Key: PHOENIX-1705
 URL: https://issues.apache.org/jira/browse/PHOENIX-1705
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Dumindu Buddhika
Assignee: Dumindu Buddhika
 Attachments: 
 PHOENIX-1705_implement_ARRAY_APPEND_built_in_function.patch, 
 PHOENIX-1705_implement_ARRAY_APPEND_built_in_function.patch, 
 PHOENIX-1705_implement_ARRAY_APPEND_built_in_function1.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-1715 Implement Build-in Math functio...

2015-03-10 Thread shuxiong
Github user shuxiong closed the pull request at:

https://github.com/apache/phoenix/pull/40


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] phoenix pull request: PHOENIX-1723 fix bugs in PTinyint.isCoercibl...

2015-03-10 Thread shuxiong
GitHub user shuxiong opened a pull request:

https://github.com/apache/phoenix/pull/44

PHOENIX-1723 fix bugs in PTinyint.isCoercibleTo

The bug is described in [1].

[1] https://issues.apache.org/jira/browse/PHOENIX-1723

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shuxiong/phoenix 4.3-quickfix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/44.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #44


commit 16518469bd4778700983a007b20942d31311a34b
Author: yesx yeshuxi...@gmail.com
Date:   2015-03-10T15:51:12Z

PHOENIX-1723 fix bugs in PTinyint.isCoercibleTo




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-1723) PTinyint.isCoercibleTo fails to test -1 with type PTinyint

2015-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355115#comment-14355115
 ] 

ASF GitHub Bot commented on PHOENIX-1723:
-

GitHub user shuxiong opened a pull request:

https://github.com/apache/phoenix/pull/44

PHOENIX-1723 fix bugs in PTinyint.isCoercibleTo

The bug is described in [1].

[1] https://issues.apache.org/jira/browse/PHOENIX-1723

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shuxiong/phoenix 4.3-quickfix

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/44.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #44


commit 16518469bd4778700983a007b20942d31311a34b
Author: yesx yeshuxi...@gmail.com
Date:   2015-03-10T15:51:12Z

PHOENIX-1723 fix bugs in PTinyint.isCoercibleTo




 PTinyint.isCoercibleTo fails to test -1 with type PTinyint
 --

 Key: PHOENIX-1723
 URL: https://issues.apache.org/jira/browse/PHOENIX-1723
 Project: Phoenix
  Issue Type: Bug
Reporter: Shuxiong Ye
Assignee: Shuxiong Ye
 Fix For: 4.3


 When trying the following code,
 PTinyint.INSTANCE.isCoercibleTo(PTinyint.INSTANCE, (byte)-1)
 In 4.3, it returns false.
 I think it should return true.
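
A regression test for this is essentially a one-line assertion; a sketch (class and method names are made up):

{code}
import static org.junit.Assert.assertTrue;

import org.apache.phoenix.schema.types.PTinyint;
import org.junit.Test;

// Sketch of a regression test for the reported behavior: a TINYINT value of -1
// should be coercible to TINYINT itself.
public class PTinyintCoercionSketchTest {
    @Test
    public void negativeOneIsCoercibleToItself() {
        assertTrue(PTinyint.INSTANCE.isCoercibleTo(PTinyint.INSTANCE, (byte) -1));
    }
}
{code}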



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1705) implement ARRAY_APPEND built in function

2015-03-10 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355126#comment-14355126
 ] 

James Taylor commented on PHOENIX-1705:
---

That's the right function, [~Dumindux]. Good work.

Anyone out there have time to review this patch?

 implement ARRAY_APPEND built in function
 

 Key: PHOENIX-1705
 URL: https://issues.apache.org/jira/browse/PHOENIX-1705
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Dumindu Buddhika
Assignee: Dumindu Buddhika
 Attachments: 
 PHOENIX-1705_implement_ARRAY_APPEND_built_in_function.patch, 
 PHOENIX-1705_implement_ARRAY_APPEND_built_in_function.patch, 
 PHOENIX-1705_implement_ARRAY_APPEND_built_in_function1.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1709) And expression of primary key RVCs can not compile

2015-03-10 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355132#comment-14355132
 ] 

James Taylor commented on PHOENIX-1709:
---

[~samarthjain] - I'd like to get this bug fix into the 4.3 branch. Are you ok 
with that?

 And expression of primary key RVCs can not compile
 --

 Key: PHOENIX-1709
 URL: https://issues.apache.org/jira/browse/PHOENIX-1709
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: James Taylor
 Attachments: PHOENIX-1709.patch, PHOENIX-1709_v2.patch, 
 PHOENIX-1709_v3.patch, PHOENIX-1709_v4.patch, PHOENIX-1709_v5.patch


   1 . create table t (a integer not null, b integer not null, c integer
 constraint pk primary key (a,b));
   2. select c from t where a in (1,2) and b = 3 and (a,b) in ( (1,2) , (1,3));
   I got exception on compile :
   java.lang.IllegalArgumentException
at
 com.google.common.base.Preconditions.checkArgument(Preconditions.java:76)
at
 org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor$KeySlot.inter
 sect(WhereOptimizer.java:955)
at
 org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor.intersectSlot
 s(WhereOptimizer.java:506)
at
 org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor.andKeySlots(W
 hereOptimizer.java:551)
at
 org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor.visitLeave(Wh
 ereOptimizer.java:725)
at
 org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor.visitLeave(Wh
 ereOptimizer.java:349)
at
 org.apache.phoenix.expression.AndExpression.accept(AndExpression.java:100)
at
 org.apache.phoenix.compile.WhereOptimizer.pushKeyExpressionsToScan(WhereOpti
 mizer.java:117)
at
 org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:105)
at
 org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.ja
 va:324)
at
 org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:132)
at
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePl
 an(PhoenixStatement.java:296)
at
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePl
 an(PhoenixStatement.java:284)
at
 org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:208)
at
 org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:204)
at
 org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.j
 ava:54)
at
 org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:
 204)
at
 org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:967)
at sqlline.SqlLine$Commands.execute(SqlLine.java:3673)
at sqlline.SqlLine$Commands.sql(SqlLine.java:3584)
at sqlline.SqlLine.dispatch(SqlLine.java:821)
at sqlline.SqlLine.begin(SqlLine.java:699)
at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
at sqlline.SqlLine.main(SqlLine.java:424)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1711) Improve performance of CSV loader

2015-03-10 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355710#comment-14355710
 ] 

James Taylor commented on PHOENIX-1711:
---

You need to get the latest from the 4.0 branch in our git repo: 
https://git-wip-us.apache.org/repos/asf/phoenix.git
{code}
git clone https://git-wip-us.apache.org/repos/asf/phoenix.git
git checkout 4.0
git apply PHOENIX-1711_4.0.patch
mvn clean
mvn package -DskipTests
{code}

 Improve performance of CSV loader
 -

 Key: PHOENIX-1711
 URL: https://issues.apache.org/jira/browse/PHOENIX-1711
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
 Attachments: PHOENIX-1711.patch, PHOENIX-1711_4.0.patch


 Here is a break-up of percentage execution time for some of the steps in the mapper:
 csvParser: 18%
 csvUpsertExecutor.execute(ImmutableList.of(csvRecord)): 39%
 PhoenixRuntime.getUncommittedDataIterator(conn, true): 9%
 while (uncommittedDataIterator.hasNext()): 15%
 Read IO & custom processing: 19%
 See details here: http://s.apache.org/6rl



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-39) Add sustained load tester that measures throughput

2015-03-10 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-39?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355576#comment-14355576
 ] 

Jesse Yates commented on PHOENIX-39:


+1 overall; this is massive though, so I'd want another committer who knows the 
added code more closely to also +1, i.e. [~mujtabachohan].

There are probably some nits we can clean up, but that will likely be an 
as-we-go kind of thing.

 Add sustained load tester that measures throughput
 --

 Key: PHOENIX-39
 URL: https://issues.apache.org/jira/browse/PHOENIX-39
 Project: Phoenix
  Issue Type: Improvement
Reporter: James Taylor
Assignee: Cody Marcel

 We should add a YCSB-like [1] sustained load tester that measures throughput 
 over an extended time period for a fully loaded cluster using Phoenix. 
 Ideally, we'd want to be able to dial up/down the read/write percentages, and 
 control the types of queries being run (scan, aggregate, joins, array usage, 
 etc). Another interesting dimension is simultaneous users and on top of that 
 multi-tenant views.
 This would be a big effort, but we can stage it and increase the knobs and 
 dials as we go.
 [1] http://hbase.apache.org/book/apd.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1715) Implement Build-in math function Sign

2015-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355382#comment-14355382
 ] 

ASF GitHub Bot commented on PHOENIX-1715:
-

GitHub user shuxiong opened a pull request:

https://github.com/apache/phoenix/pull/45

PHOENIX-1715 Implement Build-in Math function sign

1. Add the built-in math function SIGN.
2. Add a new type, PNumericType, which is the superclass of all numeric types 
such as PInteger, PFloat, etc.
PNumericType has a getSign method.

   2.1 All integer types (PInteger, PLong, etc.) compute the sign result directly 
from the serialized content bytes.
   2.2 All float types (PFloat, PDouble, etc.) construct the value objects and 
compare them with zero, because their content bytes are more complex.
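
A rough, self-contained sketch of that idea (all class names below are made up, and plain big-endian int / IEEE 754 float encodings are used instead of Phoenix's sort-order-preserving encoding):

{code}
import java.nio.ByteBuffer;

// Not the real Phoenix hierarchy: a common numeric supertype exposes getSign(),
// and each concrete type decides whether it can answer from the bytes alone
// or has to materialize the value first.
abstract class NumericTypeSketch {
    abstract int getSign(byte[] bytes, int offset);
}

class IntTypeSketch extends NumericTypeSketch {
    @Override
    int getSign(byte[] bytes, int offset) {
        // Integer-like types can answer from the bytes: the leading bit gives
        // the sign, and the remaining bytes distinguish zero from positive.
        if ((bytes[offset] & 0x80) != 0) {
            return -1;
        }
        for (int i = 0; i < 4; i++) {
            if (bytes[offset + i] != 0) {
                return 1;
            }
        }
        return 0;
    }
}

class FloatTypeSketch extends NumericTypeSketch {
    @Override
    int getSign(byte[] bytes, int offset) {
        // Float-like types materialize the value and compare it with zero,
        // since their bit layout has more corner cases (-0.0f, NaN, ...).
        float v = ByteBuffer.wrap(bytes, offset, 4).getFloat();
        return (int) Math.signum(v);
    }
}

class SignSketchDemo {
    public static void main(String[] args) {
        byte[] minusFive = ByteBuffer.allocate(4).putInt(-5).array();
        byte[] pi = ByteBuffer.allocate(4).putFloat(3.14f).array();
        System.out.println(new IntTypeSketch().getSign(minusFive, 0));  // -1
        System.out.println(new FloatTypeSketch().getSign(pi, 0));       // 1
    }
}
{code}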

I mistakenly closed the last pull request, sorry for that.

Thanks.
   

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shuxiong/phoenix 4.3-shuxiong-gsoc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/45.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #45


commit 32089c5db6745aa710f579dcdae32b5fd2f03d51
Author: yesx yeshuxi...@gmail.com
Date:   2015-03-09T18:13:39Z

PHOENIX-1715 Implement Build-in Math function sign

commit 21f7aafd10fc3b6052fe14859918f3649b6b6de2
Author: yesx yeshuxi...@gmail.com
Date:   2015-03-10T15:28:59Z

PHOENIX-1715 add PNumericType, being all Integer DataType superclass

commit 4abe9c225754f1993eccb45094042c9e79016db6
Author: yesx yeshuxi...@gmail.com
Date:   2015-03-10T18:05:10Z

PHOENIX-1715 add PNumericType, being all Float DataType superclass

commit 92c7a3a869e22baf24a5a615bde69fca80698e6d
Author: yesx yeshuxi...@gmail.com
Date:   2015-03-10T18:18:59Z

code refinement




 Implement Build-in math function Sign
 -

 Key: PHOENIX-1715
 URL: https://issues.apache.org/jira/browse/PHOENIX-1715
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Shuxiong Ye
Assignee: Shuxiong Ye

 Take a look at the typical math functions that are implemented in relational 
 database systems 
 (http://www.postgresql.org/docs/current/static/functions-math.html) and 
 implement the same for Phoenix in Java following this guide: 
 http://phoenix-hbase.blogspot.com/2013/04/how-to-add-your-own-built-in-function.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1715) Implement Build-in math function Sign

2015-03-10 Thread Shuxiong Ye (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355410#comment-14355410
 ] 

Shuxiong Ye commented on PHOENIX-1715:
--


The byte-based sign function for the integer types is finished, while the float 
types are a little more complex.

A solution for the float types would be:

1. If the sign bit indicates a negative value, just return -1.
2. Compare the content bytes with precomputed zero bytes; if they are the same, return 0.
3. Otherwise, return 1.

I will try it tomorrow.
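
A minimal sketch of that byte-oriented idea, assuming plain IEEE 754 big-endian bytes rather than Phoenix's own encoding (note that -0.0f would need special handling):

{code}
import java.nio.ByteBuffer;

// Sketch only: assumes 4 plain IEEE 754 big-endian bytes.
public class FloatSignFromBytes {
    static int sign(byte[] b, int offset) {
        if ((b[offset] & 0x80) != 0) {
            return -1;                 // sign bit set (this also catches -0.0f)
        }
        for (int i = 0; i < 4; i++) {
            if (b[offset + i] != 0) {  // compare with the precomputed bytes of +0.0f
                return 1;
            }
        }
        return 0;
    }
    public static void main(String[] args) {
        byte[] minusTwo = ByteBuffer.allocate(4).putFloat(-2.0f).array();
        System.out.println(sign(minusTwo, 0)); // -1
    }
}
{code}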

 Implement Build-in math function Sign
 -

 Key: PHOENIX-1715
 URL: https://issues.apache.org/jira/browse/PHOENIX-1715
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Shuxiong Ye
Assignee: Shuxiong Ye

 Take a look at the typical math functions that are implemented in relational 
 database systems 
 (http://www.postgresql.org/docs/current/static/functions-math.html) and 
 implement the same for Phoenix in Java following this guide: 
 http://phoenix-hbase.blogspot.com/2013/04/how-to-add-your-own-built-in-function.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1609) MR job to populate index tables

2015-03-10 Thread maghamravikiran (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355523#comment-14355523
 ] 

maghamravikiran commented on PHOENIX-1609:
--

[~jamestaylor] 
   Thanks for the update. The tests in the phoenix-pig module are failing primarily 
due to the escaping of column names that we have done. From the stack trace 
attached in the earlier thread, we notice Pig is trying to look for a column 
name {code}SAL{code}, but since we are internally holding it as {code}"SAL"{code}, 
 it is failing to find the field. 
   
I am under the impression that you had earlier recommended escaping each 
column name internally to avoid cases where we couldn't parse the string 
representation of ColumnInfo correctly when the column name contained a ':'.
Correct me if I am wrong here. 
 To address the issue, the toString() method has been changed as below, and 
the splitting of the column name has been addressed by using 
split(STR_SEPARATOR, 2).

{code}
// prior to the change
@Override
public String toString() {
    return columnName + STR_SEPARATOR + getPDataType().getSqlTypeName();
}

// after the change
@Override
public String toString() {
    // we now return the sql type first and then the column name
    return getPDataType().getSqlTypeName() + STR_SEPARATOR + columnName;
}
{code}


 The code in fromString() splits the string representation of ColumnInfo 
correctly even when the column name contains a ':', as I have used 
   {code}
   List<String> components =
       Lists.newArrayList(stringRepresentation.split(":", 2)); // splits on the first occurrence of ':' and no further
{code}
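
A small standalone illustration of why the limit of 2 matters when the column name itself contains ':' (the values are made up; the layout follows the new toString() shown above):

{code}
import java.util.List;
import com.google.common.collect.Lists;

// Splitting with a limit of 2 keeps any further ':' characters inside the
// column name (e.g. a "CF:CQ" style name) intact.
public class ColumnInfoSplitExample {
    public static void main(String[] args) {
        String stringRepresentation = "VARCHAR:CF:SAL";
        List<String> components =
                Lists.newArrayList(stringRepresentation.split(":", 2));
        System.out.println(components); // [VARCHAR, CF:SAL]
    }
}
{code}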


If the goal is to have all column names escaped with quotes, I will work on 
fixing the issues on the phoenix-pig side by un-escaping each column name 
before the column names are handed off to Pig in 
PhoenixPigSchemaUtil.java [1] 
[1] 
https://github.com/apache/phoenix/blob/master/phoenix-pig/src/main/java/org/apache/phoenix/pig/util/PhoenixPigSchemaUtil.java#L71
 


 MR job to populate index tables 
 

 Key: PHOENIX-1609
 URL: https://issues.apache.org/jira/browse/PHOENIX-1609
 Project: Phoenix
  Issue Type: New Feature
Reporter: maghamravikiran
Assignee: maghamravikiran
 Attachments: 0001-PHOENIX-1609-4.0.patch, 
 0001-PHOENIX-1609-4.0.patch, 0001-PHOENIX-1609-wip.patch, 
 0001-PHOENIX_1609.patch


 Often, we need to create new indexes on master tables way after the data 
 exists on the master tables.  It would be good to have a simple MR job given 
 by the phoenix code that users can call to have indexes in sync with the 
 master table. 
 Users can invoke the MR job using the following command 
 hadoop jar org.apache.phoenix.mapreduce.Index -st MASTER_TABLE -tt 
 INDEX_TABLE -columns a,b,c
 Is this ideal? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-1715 Implement Build-in Math functio...

2015-03-10 Thread shuxiong
GitHub user shuxiong opened a pull request:

https://github.com/apache/phoenix/pull/45

PHOENIX-1715 Implement Build-in Math function sign

1. Add the built-in math function SIGN.
2. Add a new type, PNumericType, which is the superclass of all numeric types 
such as PInteger, PFloat, etc.
PNumericType has a getSign method.

   2.1 All integer types (PInteger, PLong, etc.) compute the sign result directly 
from the serialized content bytes.
   2.2 All float types (PFloat, PDouble, etc.) construct the value objects and 
compare them with zero, because their content bytes are more complex.

I mistakenly closed the last pull request, sorry for that.

Thanks.
   

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/shuxiong/phoenix 4.3-shuxiong-gsoc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/45.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #45


commit 32089c5db6745aa710f579dcdae32b5fd2f03d51
Author: yesx yeshuxi...@gmail.com
Date:   2015-03-09T18:13:39Z

PHOENIX-1715 Implement Build-in Math function sign

commit 21f7aafd10fc3b6052fe14859918f3649b6b6de2
Author: yesx yeshuxi...@gmail.com
Date:   2015-03-10T15:28:59Z

PHOENIX-1715 add PNumericType, being all Integer DataType superclass

commit 4abe9c225754f1993eccb45094042c9e79016db6
Author: yesx yeshuxi...@gmail.com
Date:   2015-03-10T18:05:10Z

PHOENIX-1715 add PNumericType, being all Float DataType superclass

commit 92c7a3a869e22baf24a5a615bde69fca80698e6d
Author: yesx yeshuxi...@gmail.com
Date:   2015-03-10T18:18:59Z

code refinement




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-1691) Allow settting sampling rate while enabling tracing.

2015-03-10 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355415#comment-14355415
 ] 

Samarth Jain commented on PHOENIX-1691:
---

+1 to what James said. Also, it would be good to add or modify an existing test 
to make sure the trace sampler is set to NEVER and ALWAYS when calling TRACE OFF and 
TRACE ON, respectively.
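
A rough shape of such a test (the connection URL is illustrative, and the sampler assertions are left as comments since the accessor to observe the sampler is not shown here):

{code}
import java.sql.Connection;
import java.sql.DriverManager;

// Sketch only: toggle tracing via the TRACE statements discussed above and
// verify the resulting sampler. How the test reads the configured sampler
// is intentionally left open.
public class TraceSamplerToggleSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            conn.createStatement().execute("TRACE ON");
            // assert: the sampler for this connection is ALWAYS
            conn.createStatement().execute("TRACE OFF");
            // assert: the sampler for this connection is NEVER
        }
    }
}
{code}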

 Allow settting sampling rate while enabling tracing.
 

 Key: PHOENIX-1691
 URL: https://issues.apache.org/jira/browse/PHOENIX-1691
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 5.0.0, 4.4

 Attachments: PHOENIX-1691.patch


 Now we can dynamically enable/disable tracing from query. We should also be 
 able to set sampling rate while enabling tracing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1711) Improve performance of CSV loader

2015-03-10 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1711:
--
Attachment: PHOENIX-1711_4.0.patch

Try applying this patch to the latest on 4.0 branch like this:
{code}
git apply PHOENIX-1711_4.0.patch
{code}
Then you'll need to run mvn package and replace your client and server 
jars with the ones built by the package command in phoenix-assembly/target/

 Improve performance of CSV loader
 -

 Key: PHOENIX-1711
 URL: https://issues.apache.org/jira/browse/PHOENIX-1711
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
 Attachments: PHOENIX-1711.patch, PHOENIX-1711_4.0.patch


 Here is a break-up of percentage execution time for some of the steps in the mapper:
 csvParser: 18%
 csvUpsertExecutor.execute(ImmutableList.of(csvRecord)): 39%
 PhoenixRuntime.getUncommittedDataIterator(conn, true): 9%
 while (uncommittedDataIterator.hasNext()): 15%
 Read IO & custom processing: 19%
 See details here: http://s.apache.org/6rl



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1609) MR job to populate index tables

2015-03-10 Thread maghamravikiran (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355568#comment-14355568
 ] 

maghamravikiran commented on PHOENIX-1609:
--

Perfect, James!! Sounds like a plan. I will address this and push the code.

 MR job to populate index tables 
 

 Key: PHOENIX-1609
 URL: https://issues.apache.org/jira/browse/PHOENIX-1609
 Project: Phoenix
  Issue Type: New Feature
Reporter: maghamravikiran
Assignee: maghamravikiran
 Attachments: 0001-PHOENIX-1609-4.0.patch, 
 0001-PHOENIX-1609-4.0.patch, 0001-PHOENIX-1609-wip.patch, 
 0001-PHOENIX_1609.patch


 Often, we need to create new indexes on master tables way after the data 
 exists on the master tables.  It would be good to have a simple MR job given 
 by the phoenix code that users can call to have indexes in sync with the 
 master table. 
 Users can invoke the MR job using the following command 
 hadoop jar org.apache.phoenix.mapreduce.Index -st MASTER_TABLE -tt 
 INDEX_TABLE -columns a,b,c
 Is this ideal? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1609) MR job to populate index tables

2015-03-10 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355542#comment-14355542
 ] 

James Taylor commented on PHOENIX-1609:
---

Sorry if I misled you. I was just suggesting surrounding the name with double quotes so 
that you'd have a character you could search for that wouldn't appear in the 
column name (b/c ':' can occur, as could potentially '^'). I wasn't suggesting 
leaving the double quotes in there, but stripping them back out as you're 
extracting the column name and type.
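
A tiny standalone sketch of that quoting strategy (method names are made up, and it assumes the column name itself contains no double quote):

{code}
// Wrap the column name in double quotes when serializing so the closing quote
// marks where the name ends, then strip the quotes back out when reading.
public class ColumnNameQuoting {
    static String encode(String columnName, String sqlType) {
        return "\"" + columnName + "\"" + ":" + sqlType;
    }
    static String[] decode(String encoded) {
        int closingQuote = encoded.indexOf('"', 1);
        String name = encoded.substring(1, closingQuote);
        String type = encoded.substring(closingQuote + 2); // skip quote and ':'
        return new String[] { name, type };
    }
    public static void main(String[] args) {
        String[] parts = decode(encode("CF:SAL", "VARCHAR"));
        System.out.println(parts[0] + " / " + parts[1]); // CF:SAL / VARCHAR
    }
}
{code}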

 MR job to populate index tables 
 

 Key: PHOENIX-1609
 URL: https://issues.apache.org/jira/browse/PHOENIX-1609
 Project: Phoenix
  Issue Type: New Feature
Reporter: maghamravikiran
Assignee: maghamravikiran
 Attachments: 0001-PHOENIX-1609-4.0.patch, 
 0001-PHOENIX-1609-4.0.patch, 0001-PHOENIX-1609-wip.patch, 
 0001-PHOENIX_1609.patch


 Often, we need to create new indexes on master tables way after the data 
 exists on the master tables.  It would be good to have a simple MR job given 
 by the phoenix code that users can call to have indexes in sync with the 
 master table. 
 Users can invoke the MR job using the following command 
 hadoop jar org.apache.phoenix.mapreduce.Index -st MASTER_TABLE -tt 
 INDEX_TABLE -columns a,b,c
 Is this ideal? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1577) java.lang.IllegalArgumentException: nanos > 999999999 or < 0 while use Calendar

2015-03-10 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355567#comment-14355567
 ] 

James Taylor commented on PHOENIX-1577:
---

[~samarthjain] - do we (or can we) cover the above cases as well?

 java.lang.IllegalArgumentException: nanos > 999999999 or < 0 while use Calendar
 ---

 Key: PHOENIX-1577
 URL: https://issues.apache.org/jira/browse/PHOENIX-1577
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.2
Reporter: Kylin Soong
Assignee: Samarth Jain
 Fix For: 5.0.0, 4.3.1, 4.4

 Attachments: PHOENIX-1577.patch


 I use the code at link [1]; it always throws a nanos > 999999999 or < 0 error.
 If I execute an insert, the error looks like:
 ~~~
 Exception in thread "main" java.lang.IllegalArgumentException: nanos > 999999999 or < 0
   at java.sql.Timestamp.setNanos(Timestamp.java:386)
   at org.apache.phoenix.util.DateUtil.getTimestamp(DateUtil.java:142)
   at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.setTimestamp(PhoenixPreparedStatement.java:489)
 ~~~
 and for a select the error looks like:
 ~~~
 Exception in thread "main" java.lang.IllegalArgumentException: nanos > 999999999 or < 0
   at java.sql.Timestamp.setNanos(Timestamp.java:386)
   at org.apache.phoenix.util.DateUtil.getTimestamp(DateUtil.java:142)
   at 
 org.apache.phoenix.jdbc.PhoenixResultSet.getTimestamp(PhoenixResultSet.java:638)
 ~~~
 Could this be a bug?
 [1] 
 https://github.com/kylinsoong/data/blob/master/phoenix-quickstart/src/test/java/org/apache/phoenix/examples/BugReproduce.java



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1719) CREATE VIEW ... AS SELECT DDL should allow aliases for the column(s) definition.

2015-03-10 Thread Serhiy Bilousov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Serhiy Bilousov updated PHOENIX-1719:
-
Summary: CREATE VIEW ... AS SELECT DDL should allow aliases for the 
column(s) definition.  (was: CREATE VIEW ... AS SELECT DDL should allow to 
specify aliases for the columns.)

 CREATE VIEW ... AS SELECT DDL should allow aliases for the column(s) 
 definition.
 

 Key: PHOENIX-1719
 URL: https://issues.apache.org/jira/browse/PHOENIX-1719
 Project: Phoenix
  Issue Type: Improvement
Reporter: Serhiy Bilousov
Priority: Critical

 It would be very helpful to be able to specify aliases for the columns when 
 creating a VIEW. It would also be beneficial to have a consistent grammar for 
 the select statement in SELECT, CREATE VIEW (including AS SELECT), derived 
 tables, sub-queries, etc.
 This would not only bring Phoenix SQL one little step closer to the ANSI SQL 
 standard but would also allow well-named columns to be exposed to BI tools 
 while still staying with HBase best practices regarding minimal length of the 
 CQ/CD names.
 It should also allow quoted CQ/CC to be hidden behind the VIEW so the user 
 would not need to think about which CC should be quoted and which should not.
 Here is how it looks in different RDBMSs
 {code:title=MS SQL|borderStyle=solid}
 // Some comments here
 CREATE VIEW [ schema_name . ] view_name [ (column [ ,...n ] ) ] 
 [ WITH view_attribute [ ,...n ] ] 
 AS select_statement 
 [ WITH CHECK OPTION ] 
 [ ; ]
 view_attribute ::= 
 {
 [ ENCRYPTION ]
 [ SCHEMABINDING ]
 [ VIEW_METADATA ] 
 } 
 {code}
 {code:title=PostgreSQL|borderStyle=solid}
 [ WITH [ RECURSIVE ] with_query [, ...] ]
 SELECT [ ALL | DISTINCT [ ON ( expression [, ...] ) ] ]
 [ * | expression [ [ AS ] output_name ] [, ...] ]
 [ FROM from_item [, ...] ]
 [ WHERE condition ]
 [ GROUP BY expression [, ...] ]
 [ HAVING condition [, ...] ]
 [ WINDOW window_name AS ( window_definition ) [, ...] ]
 [ { UNION | INTERSECT | EXCEPT } [ ALL | DISTINCT ] select ]
 [ ORDER BY expression [ ASC | DESC | USING operator ] [ NULLS { FIRST | 
 LAST } ] [, ...] ]
 [ LIMIT { count | ALL } ]
 [ OFFSET start [ ROW | ROWS ] ]
 [ FETCH { FIRST | NEXT } [ count ] { ROW | ROWS } ONLY ]
 [ FOR { UPDATE | NO KEY UPDATE | SHARE | KEY SHARE } [ OF table_name [, 
 ...] ] [ NOWAIT ] [...] ]
 where from_item can be one of:
 [ ONLY ] table_name [ * ] [ [ AS ] alias [ ( column_alias [, ...] ) ] ]
 [ LATERAL ] ( select ) [ AS ] alias [ ( column_alias [, ...] ) ]
 with_query_name [ [ AS ] alias [ ( column_alias [, ...] ) ] ]
 [ LATERAL ] function_name ( [ argument [, ...] ] )
 [ WITH ORDINALITY ] [ [ AS ] alias [ ( column_alias [, ...] ) 
 ] ]
 [ LATERAL ] function_name ( [ argument [, ...] ] ) [ AS ] alias ( 
 column_definition [, ...] )
 [ LATERAL ] function_name ( [ argument [, ...] ] ) AS ( column_definition 
 [, ...] )
 [ LATERAL ] ROWS FROM( function_name ( [ argument [, ...] ] ) [ AS ( 
 column_definition [, ...] ) ] [, ...] )
 [ WITH ORDINALITY ] [ [ AS ] alias [ ( column_alias [, ...] ) 
 ] ]
 from_item [ NATURAL ] join_type from_item [ ON join_condition | USING ( 
 join_column [, ...] ) ]
 and with_query is:
 with_query_name [ ( column_name [, ...] ) ] AS ( select | values | insert 
 | update | delete )
 TABLE [ ONLY ] table_name [ * ]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1721) Add DUAL table or alike functionality

2015-03-10 Thread Serhiy Bilousov (JIRA)
Serhiy Bilousov created PHOENIX-1721:


 Summary: Add DUAL table or alike functionality
 Key: PHOENIX-1721
 URL: https://issues.apache.org/jira/browse/PHOENIX-1721
 Project: Phoenix
  Issue Type: Improvement
Reporter: Serhiy Bilousov
Priority: Critical


The DUAL table is a special one-row, one-column table present by default in 
Oracle and other database installations. In Oracle, the table has a single 
VARCHAR2(1) column called DUMMY that has a value of 'X'. It is suitable for use 
in selecting a pseudo column such as SYSDATE or USER.

In MS SQL you can do just do 
SELEC getdate() or SELECT 'x' AS DUMMY  without FROM 

Something like that would be very helpful to really bring back SQL to noSQL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1721) Add DUAL table or alike functionality

2015-03-10 Thread Serhiy Bilousov (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Serhiy Bilousov updated PHOENIX-1721:
-
Description: 
The DUAL table is a special one-row, one-column table present by default in 
Oracle and other database installations. In Oracle, the table has a single 
VARCHAR2(1) column called DUMMY that has a value of 'X'. It is suitable for use 
in selecting a pseudo column such as SYSDATE or USER.

In MS SQL you can do just do 
SELECT getdate() or SELECT 'x' AS DUMMY  without FROM 

Something like that would be very helpful to really bring back SQL to noSQL.

  was:
The DUAL table is a special one-row, one-column table present by default in 
Oracle and other database installations. In Oracle, the table has a single 
VARCHAR2(1) column called DUMMY that has a value of 'X'. It is suitable for use 
in selecting a pseudo column such as SYSDATE or USER.

In MS SQL you can do just do 
SELEC getdate() or SELECT 'x' AS DUMMY  without FROM 

Something like that would be very helpful to really bring back SQL to noSQL.


 Add DUAL table or alike functionality
 -

 Key: PHOENIX-1721
 URL: https://issues.apache.org/jira/browse/PHOENIX-1721
 Project: Phoenix
  Issue Type: Improvement
Reporter: Serhiy Bilousov
Priority: Critical

 The DUAL table is a special one-row, one-column table present by default in 
 Oracle and other database installations. In Oracle, the table has a single 
 VARCHAR2(1) column called DUMMY that has a value of 'X'. It is suitable for 
 use in selecting a pseudo column such as SYSDATE or USER.
 In MS SQL you can do just do 
 SELECT getdate() or SELECT 'x' AS DUMMY  without FROM 
 Something like that would be very helpful to really bring back SQL to noSQL.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1709) And expression of primary key RVCs can not compile

2015-03-10 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1709:
--
Attachment: PHOENIX-1709_v5.patch

Thanks for the help on this one, [~daniel.M]. Here's a new patch that addresses 
part (2). Please let me know how it goes with this one. I'm running all the 
unit tests now.

 And expression of primary key RVCs can not compile
 --

 Key: PHOENIX-1709
 URL: https://issues.apache.org/jira/browse/PHOENIX-1709
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: James Taylor
 Attachments: PHOENIX-1709.patch, PHOENIX-1709_v2.patch, 
 PHOENIX-1709_v3.patch, PHOENIX-1709_v4.patch, PHOENIX-1709_v5.patch


   1 . create table t (a integer not null, b integer not null, c integer
 constraint pk primary key (a,b));
   2. select c from t where a in (1,2) and b = 3 and (a,b) in ( (1,2) , (1,3));
   I got exception on compile :
   java.lang.IllegalArgumentException
at
 com.google.common.base.Preconditions.checkArgument(Preconditions.java:76)
at
 org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor$KeySlot.inter
 sect(WhereOptimizer.java:955)
at
 org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor.intersectSlot
 s(WhereOptimizer.java:506)
at
 org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor.andKeySlots(W
 hereOptimizer.java:551)
at
 org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor.visitLeave(Wh
 ereOptimizer.java:725)
at
 org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor.visitLeave(Wh
 ereOptimizer.java:349)
at
 org.apache.phoenix.expression.AndExpression.accept(AndExpression.java:100)
at
 org.apache.phoenix.compile.WhereOptimizer.pushKeyExpressionsToScan(WhereOpti
 mizer.java:117)
at
 org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:105)
at
 org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.ja
 va:324)
at
 org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:132)
at
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePl
 an(PhoenixStatement.java:296)
at
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePl
 an(PhoenixStatement.java:284)
at
 org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:208)
at
 org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:204)
at
 org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.j
 ava:54)
at
 org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:
 204)
at
 org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:967)
at sqlline.SqlLine$Commands.execute(SqlLine.java:3673)
at sqlline.SqlLine$Commands.sql(SqlLine.java:3584)
at sqlline.SqlLine.dispatch(SqlLine.java:821)
at sqlline.SqlLine.begin(SqlLine.java:699)
at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
at sqlline.SqlLine.main(SqlLine.java:424)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1712) Implementing INSTR function

2015-03-10 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1712:
--
Assignee: Naveen Madhire

 Implementing INSTR function
 ---

 Key: PHOENIX-1712
 URL: https://issues.apache.org/jira/browse/PHOENIX-1712
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Naveen Madhire
Assignee: Naveen Madhire
Priority: Minor
   Original Estimate: 40h
  Remaining Estimate: 40h

 This is sub-task to implement a custom INSTR function just like in Oracle.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1715) Implement Build-in math function Sign

2015-03-10 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1715:
--
Assignee: Shuxiong Ye

 Implement Build-in math function Sign
 -

 Key: PHOENIX-1715
 URL: https://issues.apache.org/jira/browse/PHOENIX-1715
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Shuxiong Ye
Assignee: Shuxiong Ye

 Take a look at the typical math functions that are implemented in relational 
 database systems 
 (http://www.postgresql.org/docs/current/static/functions-math.html) and 
 implement the same for Phoenix in Java following this guide: 
 http://phoenix-hbase.blogspot.com/2013/04/how-to-add-your-own-built-in-function.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1719) CREATE VIEW ... AS SELECT DDL should allow to specify aliases for the columns.

2015-03-10 Thread Serhiy Bilousov (JIRA)
Serhiy Bilousov created PHOENIX-1719:


 Summary: CREATE VIEW ... AS SELECT DDL should allow to specify 
aliases for the columns.
 Key: PHOENIX-1719
 URL: https://issues.apache.org/jira/browse/PHOENIX-1719
 Project: Phoenix
  Issue Type: Improvement
Reporter: Serhiy Bilousov
Priority: Critical


It would be very helpful to be able to specify aliases for the columns when 
creating a VIEW. It would also be beneficial to have a consistent grammar for 
the select statement in SELECT, CREATE VIEW (including AS SELECT), derived 
tables, sub-queries, etc.

This would not only bring Phoenix SQL one little step closer to the ANSI SQL 
standard but would also allow well-named columns to be exposed to BI tools while 
still staying with HBase best practices regarding minimal length of the CQ/CD 
names.

It should also allow quoted CQ/CC to be hidden behind the VIEW so the user would 
not need to think about which CC should be quoted and which should not.

Here is how it looks in different RDBMSs
{code:title=MS SQL|borderStyle=solid}
// Some comments here
CREATE VIEW [ schema_name . ] view_name [ (column [ ,...n ] ) ] 
[ WITH view_attribute [ ,...n ] ] 
AS select_statement 
[ WITH CHECK OPTION ] 
[ ; ]

view_attribute ::= 
{
[ ENCRYPTION ]
[ SCHEMABINDING ]
[ VIEW_METADATA ] 
} 
{code}

{code:title=PostgreSQL|borderStyle=solid}
[ WITH [ RECURSIVE ] with_query [, ...] ]
SELECT [ ALL | DISTINCT [ ON ( expression [, ...] ) ] ]
[ * | expression [ [ AS ] output_name ] [, ...] ]
[ FROM from_item [, ...] ]
[ WHERE condition ]
[ GROUP BY expression [, ...] ]
[ HAVING condition [, ...] ]
[ WINDOW window_name AS ( window_definition ) [, ...] ]
[ { UNION | INTERSECT | EXCEPT } [ ALL | DISTINCT ] select ]
[ ORDER BY expression [ ASC | DESC | USING operator ] [ NULLS { FIRST | 
LAST } ] [, ...] ]
[ LIMIT { count | ALL } ]
[ OFFSET start [ ROW | ROWS ] ]
[ FETCH { FIRST | NEXT } [ count ] { ROW | ROWS } ONLY ]
[ FOR { UPDATE | NO KEY UPDATE | SHARE | KEY SHARE } [ OF table_name [, 
...] ] [ NOWAIT ] [...] ]

where from_item can be one of:

[ ONLY ] table_name [ * ] [ [ AS ] alias [ ( column_alias [, ...] ) ] ]
[ LATERAL ] ( select ) [ AS ] alias [ ( column_alias [, ...] ) ]
with_query_name [ [ AS ] alias [ ( column_alias [, ...] ) ] ]
[ LATERAL ] function_name ( [ argument [, ...] ] )
[ WITH ORDINALITY ] [ [ AS ] alias [ ( column_alias [, ...] ) ] 
]
[ LATERAL ] function_name ( [ argument [, ...] ] ) [ AS ] alias ( 
column_definition [, ...] )
[ LATERAL ] function_name ( [ argument [, ...] ] ) AS ( column_definition 
[, ...] )
[ LATERAL ] ROWS FROM( function_name ( [ argument [, ...] ] ) [ AS ( 
column_definition [, ...] ) ] [, ...] )
[ WITH ORDINALITY ] [ [ AS ] alias [ ( column_alias [, ...] ) ] 
]
from_item [ NATURAL ] join_type from_item [ ON join_condition | USING ( 
join_column [, ...] ) ]

and with_query is:

with_query_name [ ( column_name [, ...] ) ] AS ( select | values | insert | 
update | delete )

TABLE [ ONLY ] table_name [ * ]
{code}





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1720) Add 'OR REPLACE to the CREATE ... DDL statement.

2015-03-10 Thread Serhiy Bilousov (JIRA)
Serhiy Bilousov created PHOENIX-1720:


 Summary: Add 'OR REPLACE to the CREATE ... DDL statement.
 Key: PHOENIX-1720
 URL: https://issues.apache.org/jira/browse/PHOENIX-1720
 Project: Phoenix
  Issue Type: Improvement
Reporter: Serhiy Bilousov
Priority: Critical


Some good RDBMSs have CREATE OR REPLACE DDL, which is very useful and eliminates 
some extra coding to check for object existence.

Phoenix should not be among those that do not have such a nice feature (like MS 
SQL, for example :)) 








--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1577) java.lang.IllegalArgumentException: nanos > 999999999 or < 0 while use Calendar

2015-03-10 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-1577:
--
Attachment: PHOENIX-1577_v2.patch

Thanks for checking [~jleech]. Attached is the updated patch that fixes the 
getTimestamp() part along with the test. 

 java.lang.IllegalArgumentException: nanos > 999999999 or < 0 while use Calendar
 ---

 Key: PHOENIX-1577
 URL: https://issues.apache.org/jira/browse/PHOENIX-1577
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.2
Reporter: Kylin Soong
Assignee: Samarth Jain
 Fix For: 5.0.0, 4.3.1, 4.4

 Attachments: PHOENIX-1577.patch, PHOENIX-1577_v2.patch


 I use the code at link [1]; it always throws a nanos > 999999999 or < 0 error.
 If I execute an insert, the error looks like:
 ~~~
 Exception in thread "main" java.lang.IllegalArgumentException: nanos > 999999999 or < 0
   at java.sql.Timestamp.setNanos(Timestamp.java:386)
   at org.apache.phoenix.util.DateUtil.getTimestamp(DateUtil.java:142)
   at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.setTimestamp(PhoenixPreparedStatement.java:489)
 ~~~
 and for a select the error looks like:
 ~~~
 Exception in thread "main" java.lang.IllegalArgumentException: nanos > 999999999 or < 0
   at java.sql.Timestamp.setNanos(Timestamp.java:386)
   at org.apache.phoenix.util.DateUtil.getTimestamp(DateUtil.java:142)
   at 
 org.apache.phoenix.jdbc.PhoenixResultSet.getTimestamp(PhoenixResultSet.java:638)
 ~~~
 Could this be a bug?
 [1] 
 https://github.com/kylinsoong/data/blob/master/phoenix-quickstart/src/test/java/org/apache/phoenix/examples/BugReproduce.java



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (PHOENIX-1577) java.lang.IllegalArgumentException: nanos > 999999999 or < 0 while use Calendar

2015-03-10 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1577?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain reopened PHOENIX-1577:
---

 java.lang.IllegalArgumentException: nanos > 999999999 or < 0 while use Calendar
 ---

 Key: PHOENIX-1577
 URL: https://issues.apache.org/jira/browse/PHOENIX-1577
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.2
Reporter: Kylin Soong
Assignee: Samarth Jain
 Fix For: 5.0.0, 4.3.1, 4.4

 Attachments: PHOENIX-1577.patch, PHOENIX-1577_v2.patch


 I use the code at link [1]; it always throws a nanos > 999999999 or < 0 error.
 If I execute an insert, the error looks like:
 ~~~
 Exception in thread "main" java.lang.IllegalArgumentException: nanos > 999999999 or < 0
   at java.sql.Timestamp.setNanos(Timestamp.java:386)
   at org.apache.phoenix.util.DateUtil.getTimestamp(DateUtil.java:142)
   at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.setTimestamp(PhoenixPreparedStatement.java:489)
 ~~~
 and for a select the error looks like:
 ~~~
 Exception in thread "main" java.lang.IllegalArgumentException: nanos > 999999999 or < 0
   at java.sql.Timestamp.setNanos(Timestamp.java:386)
   at org.apache.phoenix.util.DateUtil.getTimestamp(DateUtil.java:142)
   at 
 org.apache.phoenix.jdbc.PhoenixResultSet.getTimestamp(PhoenixResultSet.java:638)
 ~~~
 Could this be a bug?
 [1] 
 https://github.com/kylinsoong/data/blob/master/phoenix-quickstart/src/test/java/org/apache/phoenix/examples/BugReproduce.java



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1577) java.lang.IllegalArgumentException: nanos > 999999999 or < 0 while use Calendar

2015-03-10 Thread Jonathan Leech (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14355869#comment-14355869
 ] 

Jonathan Leech commented on PHOENIX-1577:
-

Looks good, thanks!

 java.lang.IllegalArgumentException: nanos > 999999999 or < 0 while use Calendar
 ---

 Key: PHOENIX-1577
 URL: https://issues.apache.org/jira/browse/PHOENIX-1577
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.2
Reporter: Kylin Soong
Assignee: Samarth Jain
 Fix For: 5.0.0, 4.3.1, 4.4

 Attachments: PHOENIX-1577.patch, PHOENIX-1577_v2.patch


 I use the code at link [1]; it always throws a nanos > 999999999 or < 0 error.
 If I execute an insert, the error looks like:
 ~~~
 Exception in thread "main" java.lang.IllegalArgumentException: nanos > 999999999 or < 0
   at java.sql.Timestamp.setNanos(Timestamp.java:386)
   at org.apache.phoenix.util.DateUtil.getTimestamp(DateUtil.java:142)
   at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.setTimestamp(PhoenixPreparedStatement.java:489)
 ~~~
 and for a select the error looks like:
 ~~~
 Exception in thread "main" java.lang.IllegalArgumentException: nanos > 999999999 or < 0
   at java.sql.Timestamp.setNanos(Timestamp.java:386)
   at org.apache.phoenix.util.DateUtil.getTimestamp(DateUtil.java:142)
   at 
 org.apache.phoenix.jdbc.PhoenixResultSet.getTimestamp(PhoenixResultSet.java:638)
 ~~~
 Could this be a bug?
 [1] 
 https://github.com/kylinsoong/data/blob/master/phoenix-quickstart/src/test/java/org/apache/phoenix/examples/BugReproduce.java



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1715) Implement Build-in math function Sign

2015-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14354490#comment-14354490
 ] 

ASF GitHub Bot commented on PHOENIX-1715:
-

Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/40#discussion_r26103663
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/SignFunction.java
 ---
@@ -0,0 +1,77 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.expression.function;
+
+import java.math.BigDecimal;
+import java.sql.SQLException;
+import java.util.List;
+
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.phoenix.expression.Expression;
+import org.apache.phoenix.parse.FunctionParseNode.Argument;
+import org.apache.phoenix.parse.FunctionParseNode.BuiltInFunction;
+import org.apache.phoenix.schema.tuple.Tuple;
+import org.apache.phoenix.schema.types.PDataType;
+import org.apache.phoenix.schema.types.PDecimal;
+import org.apache.phoenix.schema.types.PInteger;
+
+/**
+ *
+ * Base class for built-in SIGN function.
+ *
+ * @since 4.3.0
+ *
+ */
+@BuiltInFunction(name = SignFunction.NAME,
+ args = {
+@Argument(allowedTypes={PDecimal.class})
+}
+)
+public class SignFunction extends ScalarFunction {
--- End diff --

Nice work! I think the performance of this can be improved by not having to 
create a BigDecimal in evaluate. One way would be to introduce a 
PDataType.getSign() method that works directly from the numeric type of 
the child (i.e. long, int, short, byte). You'd maybe need to throw in cases 
where getSign doesn't make sense, like with VARCHAR or CHAR. Another way would 
be to introduce a PNumericType, add the getSign method there, and then 
reparent all the numeric PDataTypes to this new class. You'd know you could cast 
the result of childExpr.getDataType() to PNumericType because we guarantee that at 
compile time.
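
To make the performance point concrete, here is a small self-contained comparison of computing the sign via BigDecimal versus a primitive (illustrative only, not Phoenix code):

{code}
import java.math.BigDecimal;

// Computing the sign through Long.signum avoids allocating a BigDecimal per row.
public class SignMicroExample {
    static int signViaBigDecimal(long value) {
        return BigDecimal.valueOf(value).signum();   // allocates on every call
    }
    static int signViaPrimitive(long value) {
        return Long.signum(value);                   // no allocation
    }
    public static void main(String[] args) {
        System.out.println(signViaBigDecimal(-42L)); // -1
        System.out.println(signViaPrimitive(-42L));  // -1
    }
}
{code}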


 Implement Build-in math function Sign
 -

 Key: PHOENIX-1715
 URL: https://issues.apache.org/jira/browse/PHOENIX-1715
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Shuxiong Ye
Assignee: Shuxiong Ye

 Take a look at the typical math functions that are implemented in relational 
 database systems 
 (http://www.postgresql.org/docs/current/static/functions-math.html) and 
 implement the same for Phoenix in Java following this guide: 
 http://phoenix-hbase.blogspot.com/2013/04/how-to-add-your-own-built-in-function.html



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-1715 Implement Build-in Math functio...

2015-03-10 Thread JamesRTaylor
Github user JamesRTaylor commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/40#discussion_r26103663
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/expression/function/SignFunction.java
 ---
@@ -0,0 +1,77 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.expression.function;
+
+import java.math.BigDecimal;
+import java.sql.SQLException;
+import java.util.List;
+
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.phoenix.expression.Expression;
+import org.apache.phoenix.parse.FunctionParseNode.Argument;
+import org.apache.phoenix.parse.FunctionParseNode.BuiltInFunction;
+import org.apache.phoenix.schema.tuple.Tuple;
+import org.apache.phoenix.schema.types.PDataType;
+import org.apache.phoenix.schema.types.PDecimal;
+import org.apache.phoenix.schema.types.PInteger;
+
+/**
+ *
+ * Base class for built-in SIGN function.
+ *
+ * @since 4.3.0
+ *
+ */
+@BuiltInFunction(name = SignFunction.NAME,
+ args = {
+@Argument(allowedTypes={PDecimal.class})
+}
+)
+public class SignFunction extends ScalarFunction {
--- End diff --

Nice work! I think the performance of this can be improved by not having to 
create a BigDecimal in evaluate. One way would be to introduce a 
PDataType.getSign() method that works directly from the numeric type of 
the child (i.e. long, int, short, byte). You'd maybe need to throw in cases 
where getSign doesn't make sense, like with VARCHAR or CHAR. Another way would 
be to introduce a PNumericType, add the getSign method there, and then 
reparent all the numeric PDataTypes to this new class. You'd know you could cast 
the result of childExpr.getDataType() to PNumericType because we guarantee that at 
compile time.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (PHOENIX-1722) Speedup CONVERT_TZ function

2015-03-10 Thread Vaclav Loffelmann (JIRA)
Vaclav Loffelmann created PHOENIX-1722:
--

 Summary: Speedup CONVERT_TZ function
 Key: PHOENIX-1722
 URL: https://issues.apache.org/jira/browse/PHOENIX-1722
 Project: Phoenix
  Issue Type: Improvement
Reporter: Vaclav Loffelmann
Assignee: Vaclav Loffelmann
Priority: Minor


We have a use case that is sensitive to the performance of this function, and I'd 
like to benefit from using the Joda-Time library.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1709) And expression of primary key RVCs can not compile

2015-03-10 Thread daniel meng (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14354526#comment-14354526
 ] 

daniel meng commented on PHOENIX-1709:
--

it works fine for me, thanks~ [~jamestaylor]

 And expression of primary key RVCs can not compile
 --

 Key: PHOENIX-1709
 URL: https://issues.apache.org/jira/browse/PHOENIX-1709
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: James Taylor
 Attachments: PHOENIX-1709.patch, PHOENIX-1709_v2.patch, 
 PHOENIX-1709_v3.patch, PHOENIX-1709_v4.patch, PHOENIX-1709_v5.patch


   1. create table t (a integer not null, b integer not null, c integer constraint pk primary key (a,b));
   2. select c from t where a in (1,2) and b = 3 and (a,b) in ( (1,2) , (1,3));
   I got exception on compile:
   java.lang.IllegalArgumentException
       at com.google.common.base.Preconditions.checkArgument(Preconditions.java:76)
       at org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor$KeySlot.intersect(WhereOptimizer.java:955)
       at org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor.intersectSlots(WhereOptimizer.java:506)
       at org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor.andKeySlots(WhereOptimizer.java:551)
       at org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor.visitLeave(WhereOptimizer.java:725)
       at org.apache.phoenix.compile.WhereOptimizer$KeyExpressionVisitor.visitLeave(WhereOptimizer.java:349)
       at org.apache.phoenix.expression.AndExpression.accept(AndExpression.java:100)
       at org.apache.phoenix.compile.WhereOptimizer.pushKeyExpressionsToScan(WhereOptimizer.java:117)
       at org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:105)
       at org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:324)
       at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:132)
       at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:296)
       at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:284)
       at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:208)
       at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:204)
       at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:54)
       at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:204)
       at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:967)
       at sqlline.SqlLine$Commands.execute(SqlLine.java:3673)
       at sqlline.SqlLine$Commands.sql(SqlLine.java:3584)
       at sqlline.SqlLine.dispatch(SqlLine.java:821)
       at sqlline.SqlLine.begin(SqlLine.java:699)
       at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
       at sqlline.SqlLine.main(SqlLine.java:424)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] phoenix pull request: PHOENIX-1722 Speedup CONVERT_TZ function

2015-03-10 Thread tzolkincz
GitHub user tzolkincz opened a pull request:

https://github.com/apache/phoenix/pull/43

PHOENIX-1722 Speedup CONVERT_TZ function

Using the Joda-Time lib instead of java.util.TimeZone. This would speed up the 
function by more than 3x. I've also updated the Joda-Time version to 2.7 because 
of some timezone bugfixes and speedups.
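
As a rough illustration of the approach, here is a minimal, self-contained sketch of a zone-offset conversion with Joda-Time's DateTimeZone; the class name, method name, and sample instant are assumptions for the sketch, not the actual CONVERT_TZ code:

    import org.joda.time.DateTimeZone;

    public class ConvertTzSketch {
        // Shifts an epoch-millis instant by the difference between the two zones'
        // offsets at that instant, which is the core of a CONVERT_TZ-style conversion.
        static long convertTz(long epochMillis, String fromTz, String toTz) {
            DateTimeZone from = DateTimeZone.forID(fromTz);
            DateTimeZone to = DateTimeZone.forID(toTz);
            return epochMillis - from.getOffset(epochMillis) + to.getOffset(epochMillis);
        }

        public static void main(String[] args) {
            long instant = 1425912622000L; // 2015-03-09T14:50:22Z, used only as sample input
            System.out.println(convertTz(instant, "UTC", "Europe/Prague"));
        }
    }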

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tzolkincz/phoenix convert_tz_speedup_4.3

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/43.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #43


commit 275b34706960c3052374a671bfe2a496470df648
Author: Vaclav Loffelmann vaclav.loffelm...@socialbakers.com
Date:   2015-03-09T14:50:22Z

PHOENIX-1722 Speedup CONVERT_TZ function




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] phoenix pull request: PHOENIX-1722 Speedup CONVERT_TZ function

2015-03-10 Thread tzolkincz
GitHub user tzolkincz opened a pull request:

https://github.com/apache/phoenix/pull/42

PHOENIX-1722 Speedup CONVERT_TZ function

Using the Joda-Time lib instead of java.util.TimeZone. This would speed up the 
function by more than 3x. I've also updated the Joda-Time version to 2.7 because 
of some timezone bugfixes and speedups.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tzolkincz/phoenix convert_tz_speedup

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/42.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #42


commit 8d8ec2e727af327196a2af8803c6676f48c84a63
Author: Vaclav Loffelmann vaclav.loffelm...@socialbakers.com
Date:   2015-03-09T14:50:22Z

PHOENIX-1722 Speedup CONVERT_TZ function




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-1722) Speedup CONVERT_TZ function

2015-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14354570#comment-14354570
 ] 

ASF GitHub Bot commented on PHOENIX-1722:
-

GitHub user tzolkincz opened a pull request:

https://github.com/apache/phoenix/pull/42

PHOENIX-1722 Speedup CONVERT_TZ function

Using the Joda-Time lib instead of java.util.TimeZone. This would speed up the 
function by more than 3x. I've also updated the Joda-Time version to 2.7 because 
of some timezone bugfixes and speedups.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tzolkincz/phoenix convert_tz_speedup

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/42.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #42


commit 8d8ec2e727af327196a2af8803c6676f48c84a63
Author: Vaclav Loffelmann vaclav.loffelm...@socialbakers.com
Date:   2015-03-09T14:50:22Z

PHOENIX-1722 Speedup CONVERT_TZ function




 Speedup CONVERT_TZ function
 ---

 Key: PHOENIX-1722
 URL: https://issues.apache.org/jira/browse/PHOENIX-1722
 Project: Phoenix
  Issue Type: Improvement
Reporter: Vaclav Loffelmann
Assignee: Vaclav Loffelmann
Priority: Minor

 We have a use case that is sensitive to the performance of this function, and 
 I'd like to benefit from using the Joda-Time lib.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1722) Speedup CONVERT_TZ function

2015-03-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14354571#comment-14354571
 ] 

ASF GitHub Bot commented on PHOENIX-1722:
-

GitHub user tzolkincz opened a pull request:

https://github.com/apache/phoenix/pull/43

PHOENIX-1722 Speedup CONVERT_TZ function

Using the Joda-Time lib instead of java.util.TimeZone. This would speed up the 
function by more than 3x. I've also updated the Joda-Time version to 2.7 because 
of some timezone bugfixes and speedups.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tzolkincz/phoenix convert_tz_speedup_4.3

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/43.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #43


commit 275b34706960c3052374a671bfe2a496470df648
Author: Vaclav Loffelmann vaclav.loffelm...@socialbakers.com
Date:   2015-03-09T14:50:22Z

PHOENIX-1722 Speedup CONVERT_TZ function




 Speedup CONVERT_TZ function
 ---

 Key: PHOENIX-1722
 URL: https://issues.apache.org/jira/browse/PHOENIX-1722
 Project: Phoenix
  Issue Type: Improvement
Reporter: Vaclav Loffelmann
Assignee: Vaclav Loffelmann
Priority: Minor

 We have a use case that is sensitive to the performance of this function, and 
 I'd like to benefit from using the Joda-Time lib.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1609) MR job to populate index tables

2015-03-10 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14354582#comment-14354582
 ] 

James Taylor commented on PHOENIX-1609:
---

You need to change your fromString logic to find the next double quote char 
instead of splitting on ':', as the column name may contain a ':'.
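
For illustration, a hedged sketch of that parsing strategy, scanning for closing double quotes instead of splitting on ':'; the "family":"qualifier" serialized form and the helper names are assumptions, not the actual fromString implementation (and escaped quotes are ignored for brevity):

    public class QuotedNameParserSketch {
        /** Parses input of the form "family":"qualifier" into {family, qualifier}. */
        static String[] parse(String input) {
            int familyEnd = input.indexOf('"', 1);                  // closing quote of the family
            String family = input.substring(1, familyEnd);
            int qualifierStart = input.indexOf('"', familyEnd + 1) + 1;
            int qualifierEnd = input.indexOf('"', qualifierStart);  // closing quote of the qualifier
            String qualifier = input.substring(qualifierStart, qualifierEnd);
            return new String[] { family, qualifier };
        }

        public static void main(String[] args) {
            // A ':' inside the column name no longer breaks the parse.
            String[] parts = parse("\"CF\":\"a:b\"");
            System.out.println(parts[0] + " / " + parts[1]); // prints: CF / a:b
        }
    }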

 MR job to populate index tables 
 

 Key: PHOENIX-1609
 URL: https://issues.apache.org/jira/browse/PHOENIX-1609
 Project: Phoenix
  Issue Type: New Feature
Reporter: maghamravikiran
Assignee: maghamravikiran
 Attachments: 0001-PHOENIX-1609-4.0.patch, 
 0001-PHOENIX-1609-4.0.patch, 0001-PHOENIX-1609-wip.patch, 
 0001-PHOENIX_1609.patch


 Often we need to create new indexes on master tables long after data already 
 exists in them. It would be good for Phoenix to provide a simple MR job that 
 users can run to bring indexes in sync with the master table. 
 Users can invoke the MR job using the following command: 
 hadoop jar org.apache.phoenix.mapreduce.Index -st MASTER_TABLE -tt 
 INDEX_TABLE -columns a,b,c
 Is this ideal? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1287) Use the joni byte[] regex engine in place of j.u.regex

2015-03-10 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-1287:
--
Labels: gsoc2015  (was: )

 Use the joni byte[] regex engine in place of j.u.regex
 --

 Key: PHOENIX-1287
 URL: https://issues.apache.org/jira/browse/PHOENIX-1287
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
  Labels: gsoc2015

 See HBASE-11907. We'd get a 2x perf benefit, plus it's driven off of byte[] 
 instead of Strings. Thanks for the pointer, [~apurtell].
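
As a rough sketch of byte[]-based matching with joni (the engine referenced in HBASE-11907): the constructor and search() signatures below follow the joni API as used elsewhere (e.g. HBase), but treat the exact signatures as assumptions rather than a verified reference:

    import java.nio.charset.StandardCharsets;

    import org.jcodings.specific.UTF8Encoding;
    import org.joni.Matcher;
    import org.joni.Option;
    import org.joni.Regex;

    public class JoniSketch {
        public static void main(String[] args) {
            byte[] pattern = "ab+c".getBytes(StandardCharsets.UTF_8);
            Regex regex = new Regex(pattern, 0, pattern.length, Option.DEFAULT, UTF8Encoding.INSTANCE);

            byte[] value = "xxabbbcxx".getBytes(StandardCharsets.UTF_8);
            Matcher matcher = regex.matcher(value);
            // search() returns the byte offset of the first match, or -1 if none;
            // no String is ever created from the value bytes.
            int start = matcher.search(0, value.length, Option.DEFAULT);
            System.out.println(start >= 0 ? "match at byte " + start : "no match");
        }
    }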



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1677) Immutable index deadlocks when number of guideposts are one half of thread pool size

2015-03-10 Thread Mujtaba Chohan (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1677?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mujtaba Chohan updated PHOENIX-1677:

Attachment: PHOENIX-1677_utest.patch

[~jamestaylor] Attached PHOENIX-1677_utest.patch with the 
testIndexCreationDeadlockWithStats unit test.

 Immutable index deadlocks when number of guideposts are one half of thread 
 pool size
 

 Key: PHOENIX-1677
 URL: https://issues.apache.org/jira/browse/PHOENIX-1677
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0, 4.3
Reporter: Mujtaba Chohan
Assignee: James Taylor
 Attachments: PHOENIX-1677_utest.patch, PHOENIX-1677_v2.patch


 If the total number of parallel iterators plus the mutation count exceeds the 
 Phoenix thread pool size, the immutable index remains at 0 rows with no 
 activity on either the client or the server.
 For example, with the default thread pool of 128, the immutable index only 
 gets built when the guidepost count is 64 or lower. Changing the guidepost 
 width to produce more than 64 guideposts makes the index build fail, and 
 lowering the thread pool size below 128 makes it fail even with 64 guideposts.
 Let me know if you need a unit test, as I think it would be easy to reproduce 
 in one.
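
Restating the reported condition as a tiny check (the 64 and 128 figures come from the report itself; everything else, including the equal split between iterators and mutation tasks, is an illustrative assumption):

    public class DeadlockConditionSketch {
        // Per the report: the build hangs once parallel iterators plus mutation tasks
        // exceed the shared thread pool, so neither side can make progress.
        static boolean canDeadlock(int parallelIterators, int mutationTasks, int threadPoolSize) {
            return parallelIterators + mutationTasks > threadPoolSize;
        }

        public static void main(String[] args) {
            int threadPoolSize = 128; // default pool size cited in the report
            System.out.println(canDeadlock(64, 64, threadPoolSize)); // false: 64 guideposts still build
            System.out.println(canDeadlock(65, 64, threadPoolSize)); // true: one more guidepost hangs
        }
    }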



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)