[ANNOUNCE] CFP open for ApacheCon North America 2016

2015-11-25 Thread Rich Bowen
Community growth starts by talking with those interested in your
project. ApacheCon North America is coming; are you?

We are delighted to announce that the Call For Presentations (CFP) is
now open for ApacheCon North America. You can submit your proposed
sessions at
http://events.linuxfoundation.org/events/apache-big-data-north-america/program/cfp
for big data talks and
http://events.linuxfoundation.org/events/apachecon-north-america/program/cfp
for all other topics.

ApacheCon North America will be held in Vancouver, Canada, May 9-13,
2016. ApacheCon has been running every year since 2000, and it is the place
to build your project communities.

While we will consider individual talks, we prefer to see related
sessions that are likely to draw users and community members. When
submitting your talk, work with your project community and with related
communities to come up with a full program that will walk attendees
through the basics and on into mastery of your project in example use
cases. Content that introduces what's new in your latest release is also
of particular interest, especially when it builds upon existing well-known
application models. The goal should be to showcase your project in
ways that will attract participants and encourage engagement in your
community. Please remember to involve your whole project community (user
and dev lists) when building content. This is your chance to create a
project-specific event within the broader ApacheCon conference.

Content at ApacheCon North America will be cross-promoted as
mini-conferences, such as ApacheCon Big Data, and ApacheCon Mobile, so
be sure to indicate which larger category your proposed sessions fit into.

Finally, please plan to attend ApacheCon, even if you're not proposing a
talk. The biggest value of the event is community building, and we count
on you to make it a place where your project community is likely to
congregate, not just for the technical content in sessions, but for
hackathons, project summits, and good old-fashioned face-to-face networking.

-- 
rbo...@apache.org
http://apache.org/


[GitHub] phoenix pull request: PHOENIX-1734 Local index improvements

2015-11-25 Thread chrajeshbabu
GitHub user chrajeshbabu opened a pull request:

https://github.com/apache/phoenix/pull/135

PHOENIX-1734 Local index improvements

This patch supports storing local index data in the same data table.
1) Removed code that used HBase internals in the balancer, split, and merge.
2) CREATE INDEX now creates column families prefixed with L# for the data
column families.
3) Changed the read and write paths to use the L#-prefixed column families
for local indexes.
4) Updated the tests.
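
To make the naming convention concrete, here is a minimal sketch of the
mapping between data column families and their L#-prefixed local index
column families; the class and method names are illustrative assumptions,
not code from the patch.

{code}
// Sketch of the L# naming convention described above; all names here are
// illustrative, not the patch's actual code.
public final class LocalIndexNamingSketch {
    private static final String LOCAL_INDEX_CF_PREFIX = "L#";

    // Data column family -> local index column family, e.g. "0" -> "L#0".
    public static String toLocalIndexFamily(String dataFamily) {
        return LOCAL_INDEX_CF_PREFIX + dataFamily;
    }

    // Local index column family -> data column family, e.g. "L#0" -> "0".
    public static String toDataFamily(String localIndexFamily) {
        if (!localIndexFamily.startsWith(LOCAL_INDEX_CF_PREFIX)) {
            throw new IllegalArgumentException(
                localIndexFamily + " is not a local index column family");
        }
        return localIndexFamily.substring(LOCAL_INDEX_CF_PREFIX.length());
    }
}
{code}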

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/chrajeshbabu/phoenix master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/135.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #135


commit 4e663a2479adbf3e41826f40c1b2ed6bb69d7634
Author: Rajeshbabu Chintaguntla 
Date:   2015-11-25T16:33:33Z

PHOENIX-1734 Local index improvements(Rajeshbabu)




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-1734) Local index improvements

2015-11-25 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15027077#comment-15027077
 ] 

ASF GitHub Bot commented on PHOENIX-1734:
-

GitHub user chrajeshbabu opened a pull request:

https://github.com/apache/phoenix/pull/135

PHOENIX-1734 Local index improvements

This patch supports storing local index data in the same data table.
1) Removed code that used HBase internals in the balancer, split, and merge.
2) CREATE INDEX now creates column families prefixed with L# for the data
column families.
3) Changed the read and write paths to use the L#-prefixed column families
for local indexes.
4) Updated the tests.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/chrajeshbabu/phoenix master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/phoenix/pull/135.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #135


commit 4e663a2479adbf3e41826f40c1b2ed6bb69d7634
Author: Rajeshbabu Chintaguntla 
Date:   2015-11-25T16:33:33Z

PHOENIX-1734 Local index improvements(Rajeshbabu)




> Local index improvements
> 
>
> Key: PHOENIX-1734
> URL: https://issues.apache.org/jira/browse/PHOENIX-1734
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Attachments: PHOENI-1734-WIP.patch, PHOENIX-1734_v1.patch, 
> PHOENIX-1734_v4.patch, TestAtomicLocalIndex.java
>
>
> Local index design considerations: 
>  1. Colocation: We need to co-locate local index regions and data 
> regions. The co-location can be a hard guarantee or a soft (best-effort) 
> guarantee. The co-location is a performance requirement, and may also be 
> needed for consistency (2). Hard co-location means that either both the data 
> region and the index region are opened atomically, or neither of them opens 
> for serving. 
>  2. Index consistency: Ideally we want the index region and data region to 
> have atomic updates. This means that they should either (a) use transactions, 
> or (b) share the same WALEdit and also MVCC for visibility. (b) is 
> only applicable if there is a hard co-location guarantee. 
>  3. Local index clients: How the local index will be accessed from clients. 
> If the local index is managed in a table, the HBase client can be 
> used for doing scans, etc. If the local index is hidden inside the data 
> regions, there has to be a different mechanism to access the data through the 
> data region. 
> With the above considerations, we imagine three possible implementations for 
> the local index solution, each detailed below. 
> APPROACH 1:  Current approach
> (1) The current approach uses the balancer as a soft guarantee. Because of 
> this, in some rare cases, colocation might not happen. 
> (2) The index and data regions do not share the same WALEdits, meaning 
> consistency cannot be achieved. Also, there are two WAL writes per write from 
> the client. 
> (3) The regular HBase client can be used to access index data since the index 
> is just another table. 
> APPROACH 2: Shadow regions + shared WAL & MVCC 
> (1) Introduce a shadow regions concept in HBase. Shadow regions are not 
> assigned by the AM. Phoenix implements atomic opening (and split/merge) of 
> data regions and index regions so that hard co-location is 
> guaranteed. 
> (2) For consistency requirements, the index regions and data regions will 
> share the same WALEdit (and thus recovery), and they will also share the same 
> MVCC mechanics so that the index update and data update are visible atomically. 
> (3) The regular HBase client can be used to access index data since the index 
> is just another table.  
> APPROACH 3: Storing index data in separate column families in the table.
>  (1) Regions will have store files for column families, sorted using the 
> primary sort order. Regions may also maintain stores sorted in secondary sort 
> orders. This approach is similar in vein to how an RDBMS keeps data (a B-TREE 
> in primary sort order and multiple B-TREEs in secondary sort orders with 
> pointers to the primary key). That means storing the index data in separate 
> column families in the data region. This way a region is extended to be more 
> similar to an RDBMS (but LSM instead of B-TREE). This is sometimes called 
> shadow cf's as well. This approach guarantees hard co-location.
>  (2) Since everything is in a single region, they automatically share the 
> same WALEdit and MVCC numbers. Atomicity is easily achieved. 
>  (3) The current Phoenix implementation needs to change in such a way that 
> column family selection in the read/write path is based 

[jira] [Commented] (PHOENIX-2458) Fix transactional tests in MutableIndexFailureIT

2015-11-25 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15027823#comment-15027823
 ] 

James Taylor commented on PHOENIX-2458:
---

+1. FWIW, feel free to check in test fixes like this without waiting for a +1 
so we can get our builds passing again.

> Fix transactional tests in MutableIndexFailureIT
> 
>
> Key: PHOENIX-2458
> URL: https://issues.apache.org/jira/browse/PHOENIX-2458
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
> Attachments: PHOENIX-2458.patch
>
>
> Figure out why the transactional tests in MutableIndexFailureIT cause issues 
> with ZooKeeper which cause other tests to fail.
> Caused by: org.apache.hadoop.hbase.MasterNotRunningException: 
> org.apache.hadoop.hbase.MasterNotRunningException: Can't get connection to 
> ZooKeeper: KeeperErrorCode = ConnectionLoss for /hbase
> at 
> org.apache.phoenix.end2end.CountDistinctCompressionIT.doSetup(CountDistinctCompressionIT.java:48)
> Caused by: org.apache.hadoop.hbase.MasterNotRunningException: Can't get 
> connection to ZooKeeper: KeeperErrorCode = ConnectionLoss for /hbase
> at 
> org.apache.phoenix.end2end.CountDistinctCompressionIT.doSetup(CountDistinctCompressionIT.java:48)
> Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: 
> KeeperErrorCode = ConnectionLoss for /hbase
> at 
> org.apache.phoenix.end2end.CountDistinctCompressionIT.doSetup(CountDistinctCompressionIT.java:48)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-2444) Expansion of derived table query works incorrectly with aggregate over order by

2015-11-25 Thread Maryann Xue (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maryann Xue resolved PHOENIX-2444.
--
   Resolution: Fixed
Fix Version/s: 4.7.0

> Expansion of derived table query works incorrectly with aggregate over order 
> by
> ---
>
> Key: PHOENIX-2444
> URL: https://issues.apache.org/jira/browse/PHOENIX-2444
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Maryann Xue
>Assignee: Maryann Xue
>Priority: Minor
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2444.patch
>
>
> Example:
> "select max(a) from (select * from t order by b) as t0"
> was rewritten wrongly as
> "select max(a) from t order by b"
> while it should be equivalent to
> "select max(a) from (select * from t) as t0"
> and should be rewritten as
> "select max(a) from t".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2454) Upsert with Double.NaN returns NumberFormatException

2015-11-25 Thread alex kamil (JIRA)
alex kamil created PHOENIX-2454:
---

 Summary: Upsert with Double.NaN returns NumberFormatException
 Key: PHOENIX-2454
 URL: https://issues.apache.org/jira/browse/PHOENIX-2454
 Project: Phoenix
  Issue Type: Bug
Reporter: alex kamil
Priority: Minor


When saving Double.NaN via a prepared statement into a column of type DOUBLE, 
a NumberFormatException is thrown (while the expected behavior is saving null)
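
As a hedged aside, a client can get the expected null today by binding SQL 
NULL explicitly instead of NaN. A minimal sketch, assuming the table from the 
test case attached later (the class and helper names are illustrative):

{code}
import java.sql.*;

// Workaround sketch: bind SQL NULL instead of Double.NaN so a null is
// stored, which is the behavior the report expects. Illustrative only.
public class NaNWorkaroundSketch {
    static void upsertRow(Connection conn, long id, double col1, Double col2)
            throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(
                "UPSERT INTO test25 (id, col1, col2) VALUES (?,?,?)")) {
            ps.setLong(1, id);
            ps.setDouble(2, col1);
            if (col2 == null || col2.isNaN()) {
                ps.setNull(3, Types.DOUBLE); // avoids the NumberFormatException
            } else {
                ps.setDouble(3, col2);
            }
            ps.executeUpdate();
        }
        conn.commit();
    }
}
{code}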



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2428) Disable writing to WAL on initial index population

2015-11-25 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15027336#comment-15027336
 ] 

Enis Soztutar commented on PHOENIX-2428:


bq. To provide reliable indication of global success requires ProcedureV2 (1.2 
and up)
Even with a reliable execution of flush, we do not have any guarantees, no? I 
mean with {{Durability.SKIP_WAL}}.

The problem is that the RS failover is transparent to the client. For example, 
if a region server receives {{a}} and {{b}} and then fails, there will not be 
any recovery and the region will open again without any issues. The client will 
just retry the requests and the region will start accepting {{c}} and {{d}}. A 
flush coming in after a while will write {{c}} and {{d}} to disk, but {{a}} and 
{{b}} are gone and there is no way to tell from the client side. 

I think we can design a mechanism in HBase itself that exposes an API along 
these lines: 
 (1) Start writes with SKIP_WAL
 (2) Either flush all the data previously written or fail. 

Roughly, the region can track durably whether a failure happened in between (1) 
and (2). Spitballing, I can see how that kind of API would be useful for a lot 
of other cases as well (for example kafka -> storm -> HBase) where the client 
can replay the data from the checkpoint. 
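
To ground those two steps, here is a minimal sketch against the stock HBase 
1.x client API. The table name, column family, and values are illustrative; 
note that today's Admin.flush() offers no flush-or-fail guarantee, which is 
exactly the missing piece described above.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch: bulk-write with the WAL skipped, then flush to persist. With the
// current API, rows written before an RS failure can silently vanish; the
// proposal above is to make step (2) fail if a failure happened in between.
public class SkipWalLoadSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        TableName name = TableName.valueOf("INDEX_TABLE"); // illustrative
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(name);
             Admin admin = conn.getAdmin()) {
            Put put = new Put(Bytes.toBytes("row1"));
            put.addColumn(Bytes.toBytes("L#0"), Bytes.toBytes("q"),
                    Bytes.toBytes("v"));
            put.setDurability(Durability.SKIP_WAL); // (1) skip the WAL
            table.put(put);
            admin.flush(name);                      // (2) force data to HFiles
        }
    }
}
{code}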

> Disable writing to WAL on initial index population
> --
>
> Key: PHOENIX-2428
> URL: https://issues.apache.org/jira/browse/PHOENIX-2428
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Ravi Kishore Valeti
>
> We should not be writing to the WAL when we initially populate an index. Not 
> only is this obviously more efficient, but it also prevents the writes 
> from being replicated, both of which are good.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1734) Local index improvements

2015-11-25 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15027269#comment-15027269
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-1734:
--

[~jamestaylor] [~enis] I created a pull 
request (https://github.com/apache/phoenix/pull/135) addressing the feedback. 
Please review.

bq. Would you mind adding some code comments here? What’s the reason to not 
write to the WAL here? Also, what’s the reason for the check on the table name? 
And what’s the reason we need the allowLocalUpdates? Is that because local 
index writes will always be local?
Added the comments.

bq. Instead of having a bunch of if statements to prefix local index column 
families, let’s just create them like this in the beginning. The other reason 
to do this is that we don’t want to be creating new byte arrays per row if 
don’t have to. We already have a simple algorithm to generate column names when 
indexes are used. This would be similar for column family names of local 
indexes. See IndexStatementRewriter.visit(ColumnParseNode node) which is used 
to translate a query against the data table to a query against an index table 
(and that method translates column references and adds a CAST to ensure the 
type stays the same), MetaDataClient.createIndex() (which you hopefully won’t 
need to change), and in particular IndexUtil.getIndexColumnName(String cf, 
String cn) which takes the data column family and column name and returns the 
index column name.
Removed the bunch of code that prefixed local index column families and 
instead changed MetaDataClient#createIndex to create the local index with 
prefixed column families, so the write path and scans pick the proper column 
families without many changes. Added some static methods to get the local 
index column family from the data column family and vice versa. Also added a 
map from actual covered columns to local index covered columns in the index 
maintainer, used wherever required.
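
For readers following along, a hedged sketch of the kind of translation 
IndexUtil.getIndexColumnName(String cf, String cn) performs, per the quoted 
feedback; the ":" separator and the null handling are assumptions, not a copy 
of the Phoenix implementation.

{code}
// Hedged sketch of a data-column -> index-column name translation; the ":"
// separator and null handling are assumptions, not Phoenix's actual code.
public final class IndexNameSketch {
    public static String getIndexColumnName(String dataColumnFamily,
                                            String dataColumnName) {
        String family = (dataColumnFamily == null) ? "" : dataColumnFamily;
        return family + ":" + dataColumnName;
    }
}
{code}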

bq. The code in IndexMaintainer is not getting the correct array from the cell. 
It’s already been fixed in the 4.x and master branches, so please be careful 
when re-basing.
I have not changed that part.

bq. Remove commented out code
Done.

bq. can you use a different, more unusual local index column family prefix, 
like "L#" ?
Changed it to L#.

To make both data and local index updates transactional, I was not able to use 
[~enis]'s suggestions because we cannot add deletes (unlike puts) in preDelete. 
Will look at this later and work on it.

> Local index improvements
> 
>
> Key: PHOENIX-1734
> URL: https://issues.apache.org/jira/browse/PHOENIX-1734
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
> Attachments: PHOENI-1734-WIP.patch, PHOENIX-1734_v1.patch, 
> PHOENIX-1734_v4.patch, TestAtomicLocalIndex.java
>
>
> Local index design considerations: 
>  1. Colocation: We need to co-locate local index regions and data 
> regions. The co-location can be a hard guarantee or a soft (best-effort) 
> guarantee. The co-location is a performance requirement, and may also be 
> needed for consistency (2). Hard co-location means that either both the data 
> region and the index region are opened atomically, or neither of them opens 
> for serving. 
>  2. Index consistency: Ideally we want the index region and data region to 
> have atomic updates. This means that they should either (a) use transactions, 
> or (b) share the same WALEdit and also MVCC for visibility. (b) is 
> only applicable if there is a hard co-location guarantee. 
>  3. Local index clients: How the local index will be accessed from clients. 
> If the local index is managed in a table, the HBase client can be 
> used for doing scans, etc. If the local index is hidden inside the data 
> regions, there has to be a different mechanism to access the data through the 
> data region. 
> With the above considerations, we imagine three possible implementations for 
> the local index solution, each detailed below. 
> APPROACH 1:  Current approach
> (1) The current approach uses the balancer as a soft guarantee. Because of 
> this, in some rare cases, colocation might not happen. 
> (2) The index and data regions do not share the same WALEdits, meaning 
> consistency cannot be achieved. Also, there are two WAL writes per write from 
> the client. 
> (3) The regular HBase client can be used to access index data since the index 
> is just another table. 
> APPROACH 2: Shadow regions + shared WAL & MVCC 
> (1) Introduce a shadow regions concept in HBase. Shadow regions are not 
> assigned by the AM. Phoenix implements atomic opening (and split/merge) of 
> data regions and index regions so that hard co-location is 
> guaranteed. 
> (2) For consistency requirements, the index 

[jira] [Updated] (PHOENIX-2454) Upsert with Double.NaN returns NumberFormatException

2015-11-25 Thread alex kamil (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

alex kamil updated PHOENIX-2454:

Description: 
When saving Double.NaN via a prepared statement into a column of type DOUBLE, 
a NumberFormatException is thrown (while the expected behavior is saving null)

test case:
{quote}
import java.sql.*;

public class UpsertNaNTestCase {
    public static void main(String[] args) {
        try {
            Connection phoenixConnection =
                DriverManager.getConnection("jdbc:phoenix:localhost");
            String sql = "CREATE TABLE test25 (id BIGINT NOT NULL PRIMARY KEY,"
                + " col1 DOUBLE, col2 DOUBLE)";
            Statement stmt = phoenixConnection.createStatement();
            stmt.executeUpdate(sql);
            phoenixConnection.commit();

            sql = "UPSERT INTO test25 (id, col1, col2) VALUES (?,?,?)";
            PreparedStatement ps = phoenixConnection.prepareStatement(sql);
            ps.setInt(1, 12);
            ps.setDouble(2, 2.5);
            ps.setDouble(3, Double.NaN);
            ps.executeUpdate();
            phoenixConnection.commit();
            phoenixConnection.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
{quote}

  was:When saving Double.NaN via a prepared statement into a column of type 
DOUBLE, a NumberFormatException is thrown (while the expected behavior is 
saving null)


> Upsert with Double.NaN returns NumberFormatException
> 
>
> Key: PHOENIX-2454
> URL: https://issues.apache.org/jira/browse/PHOENIX-2454
> Project: Phoenix
>  Issue Type: Bug
>Reporter: alex kamil
>Priority: Minor
>
> When saving Double.NaN via a prepared statement into a column of type DOUBLE, 
> a NumberFormatException is thrown (while the expected behavior is saving null)
> test case:
> {quote}
> import java.sql.*;
>
> public class UpsertNaNTestCase {
>     public static void main(String[] args) {
>         try {
>             Connection phoenixConnection =
>                 DriverManager.getConnection("jdbc:phoenix:localhost");
>             String sql = "CREATE TABLE test25 (id BIGINT NOT NULL PRIMARY KEY,"
>                 + " col1 DOUBLE, col2 DOUBLE)";
>             Statement stmt = phoenixConnection.createStatement();
>             stmt.executeUpdate(sql);
>             phoenixConnection.commit();
>
>             sql = "UPSERT INTO test25 (id, col1, col2) VALUES (?,?,?)";
>             PreparedStatement ps = phoenixConnection.prepareStatement(sql);
>             ps.setInt(1, 12);
>             ps.setDouble(2, 2.5);
>             ps.setDouble(3, Double.NaN);
>             ps.executeUpdate();
>             phoenixConnection.commit();
>             phoenixConnection.close();
>         } catch (Exception e) {
>             e.printStackTrace();
>         }
>     }
> }
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2454) Upsert with Double.NaN returns NumberFormatException

2015-11-25 Thread alex kamil (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

alex kamil updated PHOENIX-2454:

Description: 
When saving Double.NaN via a prepared statement into a column of type DOUBLE, 
a NumberFormatException is thrown (while the expected behavior is saving null)

test case:

import java.sql.*;

public class UpsertNaNTestCase {
    public static void main(String[] args) {
        try {
            Connection phoenixConnection =
                DriverManager.getConnection("jdbc:phoenix:localhost");
            String sql = "CREATE TABLE test25 (id BIGINT NOT NULL PRIMARY KEY,"
                + " col1 DOUBLE, col2 DOUBLE)";
            Statement stmt = phoenixConnection.createStatement();
            stmt.executeUpdate(sql);
            phoenixConnection.commit();

            sql = "UPSERT INTO test25 (id, col1, col2) VALUES (?,?,?)";
            PreparedStatement ps = phoenixConnection.prepareStatement(sql);
            ps.setInt(1, 12);
            ps.setDouble(2, 2.5);
            ps.setDouble(3, Double.NaN);
            ps.executeUpdate();
            phoenixConnection.commit();
            phoenixConnection.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}


  was:
When saving Double.NaN via a prepared statement into a column of type DOUBLE, 
a NumberFormatException is thrown (while the expected behavior is saving null)

test case:
{quote}
import java.sql.*;

public class UpsertNaNTestCase {
    public static void main(String[] args) {
        try {
            Connection phoenixConnection =
                DriverManager.getConnection("jdbc:phoenix:localhost");
            String sql = "CREATE TABLE test25 (id BIGINT NOT NULL PRIMARY KEY,"
                + " col1 DOUBLE, col2 DOUBLE)";
            Statement stmt = phoenixConnection.createStatement();
            stmt.executeUpdate(sql);
            phoenixConnection.commit();

            sql = "UPSERT INTO test25 (id, col1, col2) VALUES (?,?,?)";
            PreparedStatement ps = phoenixConnection.prepareStatement(sql);
            ps.setInt(1, 12);
            ps.setDouble(2, 2.5);
            ps.setDouble(3, Double.NaN);
            ps.executeUpdate();
            phoenixConnection.commit();
            phoenixConnection.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
{quote}


> Upsert with Double.NaN returns NumberFormatException
> 
>
> Key: PHOENIX-2454
> URL: https://issues.apache.org/jira/browse/PHOENIX-2454
> Project: Phoenix
>  Issue Type: Bug
>Reporter: alex kamil
>Priority: Minor
>
> When saving Double.NaN via a prepared statement into a column of type DOUBLE, 
> a NumberFormatException is thrown (while the expected behavior is saving null)
> test case:
> import java.sql.*;
>
> public class UpsertNaNTestCase {
>     public static void main(String[] args) {
>         try {
>             Connection phoenixConnection =
>                 DriverManager.getConnection("jdbc:phoenix:localhost");
>             String sql = "CREATE TABLE test25 (id BIGINT NOT NULL PRIMARY KEY,"
>                 + " col1 DOUBLE, col2 DOUBLE)";
>             Statement stmt = phoenixConnection.createStatement();
>             stmt.executeUpdate(sql);
>             phoenixConnection.commit();
>
>             sql = "UPSERT INTO test25 (id, col1, col2) VALUES (?,?,?)";
>             PreparedStatement ps = phoenixConnection.prepareStatement(sql);
>             ps.setInt(1, 12);
>             ps.setDouble(2, 2.5);
>             ps.setDouble(3, Double.NaN);
>             ps.executeUpdate();
>             phoenixConnection.commit();
>             phoenixConnection.close();
>         } catch (Exception e) {
>             e.printStackTrace();
>         }
>     }
> }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2456) StaleRegionBoundaryCacheException on query with stats

2015-11-25 Thread Mujtaba Chohan (JIRA)
Mujtaba Chohan created PHOENIX-2456:
---

 Summary: StaleRegionBoundaryCacheException on query with stats
 Key: PHOENIX-2456
 URL: https://issues.apache.org/jira/browse/PHOENIX-2456
 Project: Phoenix
  Issue Type: Bug
Reporter: Mujtaba Chohan
Priority: Minor


{code}org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108 
(XCL08): Cache of region boundaries are out of date.{code}

Got this exception after a data load; it persists even after a client restart 
and with no split activity on the server. However, the query works fine after 
the stats table is truncated.
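
A hedged sketch of the truncation workaround mentioned above, done through 
JDBC; SYSTEM.STATS is Phoenix's statistics table, and the physical table name 
"T" is an illustrative placeholder. Truncating the table from the HBase shell 
works as well.

{code}
import java.sql.*;

// Hedged workaround sketch: clear cached stats for the affected table via
// JDBC. The physical table name "T" is an illustrative placeholder.
public class ClearStatsSketch {
    public static void main(String[] args) throws SQLException {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            stmt.executeUpdate(
                "DELETE FROM SYSTEM.STATS WHERE PHYSICAL_NAME = 'T'");
            conn.commit();
        }
    }
}
{code}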




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2458) Enable transactional tests MutableIndexFailureIT

2015-11-25 Thread Thomas D'Silva (JIRA)
Thomas D'Silva created PHOENIX-2458:
---

 Summary: Enable transactional tests MutableIndexFailureIT
 Key: PHOENIX-2458
 URL: https://issues.apache.org/jira/browse/PHOENIX-2458
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Thomas D'Silva
Assignee: Thomas D'Silva


Figure out why the transactional tests in MutableIndexFailureIT cause issues 
with ZooKeeper which cause other tests to fail.

Caused by: org.apache.hadoop.hbase.MasterNotRunningException: 
org.apache.hadoop.hbase.MasterNotRunningException: Can't get connection to 
ZooKeeper: KeeperErrorCode = ConnectionLoss for /hbase
at 
org.apache.phoenix.end2end.CountDistinctCompressionIT.doSetup(CountDistinctCompressionIT.java:48)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: Can't get 
connection to ZooKeeper: KeeperErrorCode = ConnectionLoss for /hbase
at 
org.apache.phoenix.end2end.CountDistinctCompressionIT.doSetup(CountDistinctCompressionIT.java:48)
Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: 
KeeperErrorCode = ConnectionLoss for /hbase
at 
org.apache.phoenix.end2end.CountDistinctCompressionIT.doSetup(CountDistinctCompressionIT.java:48)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2458) Fix transactional tests in MutableIndexFailureIT

2015-11-25 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-2458:

Summary: Fix transactional tests in MutableIndexFailureIT  (was: Enable 
transactional tests MutableIndexFailureIT)

> Fix transactional tests in MutableIndexFailureIT
> 
>
> Key: PHOENIX-2458
> URL: https://issues.apache.org/jira/browse/PHOENIX-2458
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
>
> Figure out why the transactional tests in MutableIndexFailureIT cause issues 
> with ZooKeeper which cause other tests to fail.
> Caused by: org.apache.hadoop.hbase.MasterNotRunningException: 
> org.apache.hadoop.hbase.MasterNotRunningException: Can't get connection to 
> ZooKeeper: KeeperErrorCode = ConnectionLoss for /hbase
> at 
> org.apache.phoenix.end2end.CountDistinctCompressionIT.doSetup(CountDistinctCompressionIT.java:48)
> Caused by: org.apache.hadoop.hbase.MasterNotRunningException: Can't get 
> connection to ZooKeeper: KeeperErrorCode = ConnectionLoss for /hbase
> at 
> org.apache.phoenix.end2end.CountDistinctCompressionIT.doSetup(CountDistinctCompressionIT.java:48)
> Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: 
> KeeperErrorCode = ConnectionLoss for /hbase
> at 
> org.apache.phoenix.end2end.CountDistinctCompressionIT.doSetup(CountDistinctCompressionIT.java:48)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2457) Difference in base vs immutable index row count when client is killed

2015-11-25 Thread Mujtaba Chohan (JIRA)
Mujtaba Chohan created PHOENIX-2457:
---

 Summary: Difference in base vs immutable index row count when 
client is killed
 Key: PHOENIX-2457
 URL: https://issues.apache.org/jira/browse/PHOENIX-2457
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.6.0
Reporter: Mujtaba Chohan
Priority: Minor


Corner case: with an immutable index created before data load, if the client 
is killed during an upsert then the row counts for the index table and the 
base table differ. This is a good case to test with transactions, which 
should prevent this from happening.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2455) Partial results for a query when PHOENIX-2194 is applied

2015-11-25 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-2455:
--

 Summary: Partial results for a query when PHOENIX-2194 is applied
 Key: PHOENIX-2455
 URL: https://issues.apache.org/jira/browse/PHOENIX-2455
 Project: Phoenix
  Issue Type: Bug
Reporter: Ankit Singhal
Assignee: James Taylor


Hi [~giacomotaylor],

can you please look into the test case below and see why it is failing after 
applying PHOENIX-2194?

{code}
drop table test;
CREATE TABLE IF NOT EXISTS Test
 (col1 VARCHAR,  
 col2 VARCHAR,  
 col3 VARCHAR,  
 col4 UNSIGNED_LONG NOT NULL,   
 CONSTRAINT pk  
 PRIMARY KEY (col1,col2,col3,col4));

upsert into test values('a','b','',1);
upsert into test values('e.f','b','',1);
upsert into test values('f.g','b','',1);
upsert into test values('g.h','b','',1);
upsert into test values('f','b','',1);
upsert into test values('h.e','b','',1);


SELECT  col1, col2, col3, col4 FROM test WHERE (col1 IN ('a','e','f','g','h')) 
AND col2 = 'b' AND col4 >= 0 AND col4 < 2 ;

{code}

Expected (and getting without PHOENIX-2194):

|   COL1   |   COL2   |   COL3   |   COL4   |
| a        | b        |          | 1        |
| f        | b        |          | 1        |

Getting with PHOENIX-2194 (which is partial):

|   COL1   |   COL2   |   COL3   |   COL4   |
| a        | b        |          | 1        |





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-2441) Do not require transaction manager to be run

2015-11-25 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva resolved PHOENIX-2441.
-
Resolution: Fixed

> Do not require transaction manager to be run
> 
>
> Key: PHOENIX-2441
> URL: https://issues.apache.org/jira/browse/PHOENIX-2441
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Thomas D'Silva
> Attachments: PHOENIX-2441-v2.patch, PHOENIX-2441-v3.patch, 
> PHOENIX-2441.patch, PHOENIX-2441_addendum1.patch, PHOENIX-2441_addendum2.patch
>
>
> For users not using transactional tables, we should not require that a 
> transaction manager be running. The TransactionServiceClient is currently 
> always initialized when an HConnection is created. To make this conditional, 
> we can create a new {{phoenix.transactions.enable}} config property that if 
> false will prevent the TransactionServiceClient from being started (as well 
> as prevent transactional tables from being created).
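
As a sketch of what opting out would look like from the client side under the 
proposed property (passing it via connection Properties is an assumption for 
illustration):

{code}
import java.sql.*;
import java.util.Properties;

// Sketch: opt out of the transaction manager with the proposed property.
// Whether it is read from connection Properties or hbase-site.xml is an
// assumption made for illustration.
public class NoTxnClientSketch {
    public static void main(String[] args) throws SQLException {
        Properties props = new Properties();
        props.setProperty("phoenix.transactions.enable", "false");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:phoenix:localhost", props)) {
            // No transaction manager needs to be running for
            // non-transactional tables on this connection.
        }
    }
}
{code}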



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-2450) Cleanup API for determining if non transactional mutable secondary index configured properly

2015-11-25 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva resolved PHOENIX-2450.
-
Resolution: Fixed

> Cleanup API for determining if non transactional mutable secondary index 
> configured properly
> 
>
> Key: PHOENIX-2450
> URL: https://issues.apache.org/jira/browse/PHOENIX-2450
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-2450_v1.patch, PHOENIX-2450_v2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2454) Upsert with Double.NaN returns NumberFormatException

2015-11-25 Thread alex kamil (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

alex kamil updated PHOENIX-2454:

Description: 
When saving Double.NaN via a prepared statement into a column of type DOUBLE, 
a NumberFormatException is thrown (while the expected behavior is saving null)

test case:

{code}
import java.sql.*;

public class UpsertNaNTestCase {
    public static void main(String[] args) {
        try {
            Connection phoenixConnection =
                DriverManager.getConnection("jdbc:phoenix:localhost");
            String sql = "CREATE TABLE test25 (id BIGINT NOT NULL PRIMARY KEY,"
                + " col1 DOUBLE, col2 DOUBLE)";
            Statement stmt = phoenixConnection.createStatement();
            stmt.executeUpdate(sql);
            phoenixConnection.commit();

            sql = "UPSERT INTO test25 (id, col1, col2) VALUES (?,?,?)";
            PreparedStatement ps = phoenixConnection.prepareStatement(sql);
            ps.setInt(1, 12);
            ps.setDouble(2, 2.5);
            ps.setDouble(3, Double.NaN);
            ps.executeUpdate();
            phoenixConnection.commit();
            phoenixConnection.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
{code}


  was:
When saving Double.NaN via a prepared statement into a column of type DOUBLE, 
a NumberFormatException is thrown (while the expected behavior is saving null)

test case:

import java.sql.*;

public class UpsertNaNTestCase {
    public static void main(String[] args) {
        try {
            Connection phoenixConnection =
                DriverManager.getConnection("jdbc:phoenix:localhost");
            String sql = "CREATE TABLE test25 (id BIGINT NOT NULL PRIMARY KEY,"
                + " col1 DOUBLE, col2 DOUBLE)";
            Statement stmt = phoenixConnection.createStatement();
            stmt.executeUpdate(sql);
            phoenixConnection.commit();

            sql = "UPSERT INTO test25 (id, col1, col2) VALUES (?,?,?)";
            PreparedStatement ps = phoenixConnection.prepareStatement(sql);
            ps.setInt(1, 12);
            ps.setDouble(2, 2.5);
            ps.setDouble(3, Double.NaN);
            ps.executeUpdate();
            phoenixConnection.commit();
            phoenixConnection.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}



> Upsert with Double.NaN returns NumberFormatException
> 
>
> Key: PHOENIX-2454
> URL: https://issues.apache.org/jira/browse/PHOENIX-2454
> Project: Phoenix
>  Issue Type: Bug
>Reporter: alex kamil
>Priority: Minor
>
> When saving Double.NaN via a prepared statement into a column of type DOUBLE, 
> a NumberFormatException is thrown (while the expected behavior is saving null)
> test case:
> {code}
> import java.sql.*;
>
> public class UpsertNaNTestCase {
>     public static void main(String[] args) {
>         try {
>             Connection phoenixConnection =
>                 DriverManager.getConnection("jdbc:phoenix:localhost");
>             String sql = "CREATE TABLE test25 (id BIGINT NOT NULL PRIMARY KEY,"
>                 + " col1 DOUBLE, col2 DOUBLE)";
>             Statement stmt = phoenixConnection.createStatement();
>             stmt.executeUpdate(sql);
>             phoenixConnection.commit();
>
>             sql = "UPSERT INTO test25 (id, col1, col2) VALUES (?,?,?)";
>             PreparedStatement ps = phoenixConnection.prepareStatement(sql);
>             ps.setInt(1, 12);
>             ps.setDouble(2, 2.5);
>             ps.setDouble(3, Double.NaN);
>             ps.executeUpdate();
>             phoenixConnection.commit();
>             phoenixConnection.close();
>         } catch (Exception e) {
>             e.printStackTrace();
>         }
>     }
> }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2458) Fix transactional tests in MutableIndexFailureIT

2015-11-25 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-2458:

Attachment: PHOENIX-2458.patch

> Fix transactional tests in MutableIndexFailureIT
> 
>
> Key: PHOENIX-2458
> URL: https://issues.apache.org/jira/browse/PHOENIX-2458
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
> Attachments: PHOENIX-2458.patch
>
>
> Figure out why the transactional tests in MutableIndexFailureIT cause issues 
> with ZooKeeper which cause other tests to fail.
> Caused by: org.apache.hadoop.hbase.MasterNotRunningException: 
> org.apache.hadoop.hbase.MasterNotRunningException: Can't get connection to 
> ZooKeeper: KeeperErrorCode = ConnectionLoss for /hbase
> at 
> org.apache.phoenix.end2end.CountDistinctCompressionIT.doSetup(CountDistinctCompressionIT.java:48)
> Caused by: org.apache.hadoop.hbase.MasterNotRunningException: Can't get 
> connection to ZooKeeper: KeeperErrorCode = ConnectionLoss for /hbase
> at 
> org.apache.phoenix.end2end.CountDistinctCompressionIT.doSetup(CountDistinctCompressionIT.java:48)
> Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: 
> KeeperErrorCode = ConnectionLoss for /hbase
> at 
> org.apache.phoenix.end2end.CountDistinctCompressionIT.doSetup(CountDistinctCompressionIT.java:48)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2449) QueryServer needs Hadoop configuration on classpath with Kerberos

2015-11-25 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15027325#comment-15027325
 ] 

Enis Soztutar commented on PHOENIX-2449:


+1. Thanks Josh. 

> QueryServer needs Hadoop configuration on classpath with Kerberos
> -
>
> Key: PHOENIX-2449
> URL: https://issues.apache.org/jira/browse/PHOENIX-2449
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2449.2.patch, PHOENIX-2449.patch
>
>
> [~cartershanklin] pointed out to me that PQS fails to perform a Kerberos login 
> using queryserver.py out of the box. It looks like this is ultimately because 
> the login is dependent on the value of {{hadoop.security.authentication}}. 
> Thus, without those configs, PQS never logs in and just spews errors because 
> the HBase RPC fails without the ticket.
> Should be simple enough to pull HADOOP_CONF_DIR out and include it on the 
> classpath.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2441) Do not require transaction manager to be run

2015-11-25 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15026421#comment-15026421
 ] 

Hudson commented on PHOENIX-2441:
-

FAILURE: Integrated in Phoenix-master #981 (See 
[https://builds.apache.org/job/Phoenix-master/981/])
PHOENIX-2441 Addendum to not require transaction manager to be run (jtaylor: 
rev 1b5e7fd71f3709f760b008f7e458c124c2868594)
* phoenix-core/src/it/java/org/apache/phoenix/rpc/UpdateCacheIT.java


> Do not require transaction manager to be run
> 
>
> Key: PHOENIX-2441
> URL: https://issues.apache.org/jira/browse/PHOENIX-2441
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Thomas D'Silva
> Attachments: PHOENIX-2441-v2.patch, PHOENIX-2441-v3.patch, 
> PHOENIX-2441.patch, PHOENIX-2441_addendum1.patch, PHOENIX-2441_addendum2.patch
>
>
> For users not using transactional tables, we should not require that a 
> transaction manager be running. The TransactionServiceClient is currently 
> always initialized when an HConnection is created. To make this conditional, 
> we can create a new {{phoenix.transactions.enable}} config property that if 
> false will prevent the TransactionServiceClient from being started (as well 
> as prevent transactional tables from being created).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2452) Error: Does not support non-standard or non-equi correlated-subquery conditions.

2015-11-25 Thread Suhas Nalapure (JIRA)
Suhas Nalapure created PHOENIX-2452:
---

 Summary: Error: Does not support non-standard or non-equi 
correlated-subquery conditions.
 Key: PHOENIX-2452
 URL: https://issues.apache.org/jira/browse/PHOENIX-2452
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.6.0
Reporter: Suhas Nalapure


java.sql.SQLFeatureNotSupportedException: Does not support non-standard or 
non-equi correlated-subquery conditions.

Steps to reproduce:
--
0: jdbc:phoenix:localhost:2181> create table temp (id bigint not null primary 
key, cumsum bigint );
No rows affected (0.74 seconds)
0: jdbc:phoenix:localhost:2181> upsert into temp values(1,5);
1 row affected (0.102 seconds)
0: jdbc:phoenix:localhost:2181> upsert into temp values(3, 10);
1 row affected (0.016 seconds)
0: jdbc:phoenix:localhost:2181> upsert into temp values(6, 12);
1 row affected (0.003 seconds)
0: jdbc:phoenix:localhost:2181> upsert into temp values(7, 17);
1 row affected (0.008 seconds)
0: jdbc:phoenix:localhost:2181> upsert into temp values(10, 19);
1 row affected (0.011 seconds)
0: jdbc:phoenix:localhost:2181> select t1.id, t2.id, t1.cumsum, t2.cumsum, 
t1.cumsum - t2.cumsum from temp t1, temp t2 where t2.id = (select max(id) from 
temp where id < t1.id) and t1.cumsum > t2.cumsum ;
Error: Does not support non-standard or non-equi correlated-subquery 
conditions. (state=,code=0)
java.sql.SQLFeatureNotSupportedException: Does not support non-standard or 
non-equi correlated-subquery conditions.
at 
org.apache.phoenix.compile.SubqueryRewriter$JoinConditionExtractor.leaveBooleanNode(SubqueryRewriter.java:479)
at 
org.apache.phoenix.compile.SubqueryRewriter$JoinConditionExtractor.visitLeave(SubqueryRewriter.java:499)
at 
org.apache.phoenix.compile.SubqueryRewriter$JoinConditionExtractor.visitLeave(SubqueryRewriter.java:405)
at 
org.apache.phoenix.parse.ComparisonParseNode.accept(ComparisonParseNode.java:47)
at 
org.apache.phoenix.compile.SubqueryRewriter.visitLeave(SubqueryRewriter.java:207)
at 
org.apache.phoenix.compile.SubqueryRewriter.visitLeave(SubqueryRewriter.java:70)
at 
org.apache.phoenix.parse.ComparisonParseNode.accept(ComparisonParseNode.java:47)
at 
org.apache.phoenix.parse.CompoundParseNode.acceptChildren(CompoundParseNode.java:64)
at org.apache.phoenix.parse.AndParseNode.accept(AndParseNode.java:47)
at 
org.apache.phoenix.parse.ParseNodeRewriter.rewrite(ParseNodeRewriter.java:48)
at 
org.apache.phoenix.compile.SubqueryRewriter.transform(SubqueryRewriter.java:84)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:375)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:354)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:260)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:255)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:254)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1382)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2299) Support CURRENT_DATE() in Pherf data upserts

2015-11-25 Thread Karan Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karan Singhal updated PHOENIX-2299:
---
Attachment: 0001-PHOENIX-2299-Support-CURRENT_DATE-in-Pherf-data-upse.patch

Added a useCurrentDate tag to support the current-date feature.
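
As a rough sketch of the generator-side substitution this implies (the method 
name and wiring are illustrative assumptions; the date format matches the one 
visible in the patch's QA output):

{code}
import java.text.SimpleDateFormat;
import java.util.Calendar;

// Sketch: substitute the current date when the XML carries the literal
// "NOW". The method name and wiring are illustrative assumptions.
public class CurrentDateSketch {
    static String resolveDateValue(String xmlValue) {
        if ("NOW".equals(xmlValue)) {
            return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS z")
                    .format(Calendar.getInstance().getTime());
        }
        return xmlValue;
    }
}
{code}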

> Support CURRENT_DATE() in Pherf data upserts
> 
>
> Key: PHOENIX-2299
> URL: https://issues.apache.org/jira/browse/PHOENIX-2299
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.6.0
>Reporter: James Taylor
>Assignee: Karan Singhal
> Fix For: 4.6.0, 4.7.0
>
> Attachments: 
> 0001-PHOENIX-2299-Support-CURRENT_DATE-in-Pherf-data-upse.patch
>
>
> Just replace the actual date with "NOW" in the xml. Then check the string for 
> that value in the generator. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2453) Make prepared statement creation delegate to org.apache.phoenix.jdbc.PhoenixConnection#prepareStatement(java.lang.String)

2015-11-25 Thread Clement Escoffier (JIRA)
Clement Escoffier created PHOENIX-2453:
--

 Summary: Make prepared statement creation delegate to 
org.apache.phoenix.jdbc.PhoenixConnection#prepareStatement(java.lang.String)
 Key: PHOENIX-2453
 URL: https://issues.apache.org/jira/browse/PHOENIX-2453
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.5.2
Reporter: Clement Escoffier


This issue is about making the prepared statement creation methods delegate to 
org.apache.phoenix.jdbc.PhoenixConnection#prepareStatement(java.lang.String) in 
order to avoid exceptions in applications and clients using these methods.

For context, the issue was raised by a user of vert.x (http://vertx.io). 
This user is trying to connect the vertx-jdbc-client to Phoenix. A vert.x 
application cannot use the JDBC driver directly because it promotes an 
asynchronous and non-blocking development model (while JDBC interactions are 
blocking). The vertx-jdbc-client uses `prepareStatement(String sql, int 
autoGeneratedKeys)`, which throws an exception. 

A solution would be to delegate the method to the "simplest" version (just 
"sql") and ignore the effect of the passed parameter.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2453) Make prepared statement creation delegate to org.apache.phoenix.jdbc.PhoenixConnection#prepareStatement(java.lang.String)

2015-11-25 Thread Clement Escoffier (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Clement Escoffier updated PHOENIX-2453:
---
Attachment: PHOENIX-2453.patch

> Make prepared statement creation delegate to 
> org.apache.phoenix.jdbc.PhoenixConnection#prepareStatement(java.lang.String)
> -
>
> Key: PHOENIX-2453
> URL: https://issues.apache.org/jira/browse/PHOENIX-2453
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.5.2
>Reporter: Clement Escoffier
> Attachments: PHOENIX-2453.patch
>
>
> This issue is about making the prepared statement creation methods delegate to 
> org.apache.phoenix.jdbc.PhoenixConnection#prepareStatement(java.lang.String) 
> in order to avoid exceptions in applications and clients using these methods.
> For context, the issue was raised by a user of vert.x (http://vertx.io). 
> This user is trying to connect the vertx-jdbc-client to Phoenix. A vert.x 
> application cannot use the JDBC driver directly because it promotes an 
> asynchronous and non-blocking development model (while JDBC interactions are 
> blocking). The vertx-jdbc-client uses `prepareStatement(String sql, int 
> autoGeneratedKeys)`, which throws an exception. 
> A solution would be to delegate the method to the "simplest" version (just 
> "sql") and ignore the effect of the passed parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2299) Support CURRENT_DATE() in Pherf data upserts

2015-11-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15026717#comment-15026717
 ] 

Hadoop QA commented on PHOENIX-2299:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12774326/0001-PHOENIX-2299-Support-CURRENT_DATE-in-Pherf-data-upse.patch
  against master branch at commit 1b5e7fd71f3709f760b008f7e458c124c2868594.
  ATTACHMENT ID: 12774326

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 10 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
32 warning messages.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+   String timeStamp = new SimpleDateFormat("yyyy-MM-dd 
HH:mm:ss.SSS z").format(Calendar.getInstance().getTime());
+   String timeStamp1 = new SimpleDateFormat("yyyy-MM-dd 
HH:mm:ss.SSS z").format(Calendar.getInstance().getTime());
+String timeStamp2 = new SimpleDateFormat("yyyy-MM-dd 
HH:mm:ss.SSS z").format(Calendar.getInstance().getTime());
+if ((dataMapping.getType() == DataTypeMapping.DATE) && 
(dataMapping.getUseCurrentDate() == true)) {
+   String timeStamp1 = new SimpleDateFormat("yyyy-MM-dd 
HH:mm:ss.SSS z").format(Calendar.getInstance().getTime());
+assertNotNull("Could not retrieve a value in DataValue for 
random DATE.", value.getValue());
+String timeStamp2 = new SimpleDateFormat("yyyy-MM-dd 
HH:mm:ss.SSS z").format(Calendar.getInstance().getTime());

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.TenantSpecificTablesDMLIT

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.activemq.bugs.TransactedStoreUsageSuspendResumeTest.testTransactedStoreUsageSuspendResume(TransactedStoreUsageSuspendResumeTest.java:144)

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/185//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/185//artifact/patchprocess/patchReleaseAuditWarnings.txt
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/185//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/185//console

This message is automatically generated.

> Support CURRENT_DATE() in Pherf data upserts
> 
>
> Key: PHOENIX-2299
> URL: https://issues.apache.org/jira/browse/PHOENIX-2299
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.6.0
>Reporter: James Taylor
>Assignee: Karan Singhal
>  Labels: performance
> Fix For: 4.6.0, 4.7.0
>
> Attachments: 
> 0001-PHOENIX-2299-Support-CURRENT_DATE-in-Pherf-data-upse.patch
>
>
> Just replace the actual date with "NOW" in the xml. Then check the string for 
> that value in the generator. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)