[jira] [Commented] (PHOENIX-1072) Fast fail sqlline.py when passed a wrong quorum string or hbase cluster hasn't started yet

2014-07-11 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14059039#comment-14059039
 ] 

James Taylor commented on PHOENIX-1072:
---

One question on this, though: if the client can't reach the master, does the 
error message still show up (as it appears you're just killing the client)? 
It'd be nice if hbase had the ability to specify retry info when making a 
connection, as a parameter rather than just globally through the Config. If you 
agree, maybe you could file a JIRA?

 Fast fail sqlline.py when passed a wrong quorum string or hbase cluster 
 hasn't started yet 
 ---

 Key: PHOENIX-1072
 URL: https://issues.apache.org/jira/browse/PHOENIX-1072
 Project: Phoenix
  Issue Type: Improvement
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
 Fix For: 5.0.0, 3.1, 4.1

 Attachments: phoenix-1072-v1.patch, phoenix-1072.patch


 Currently sqlline.py will retry 35 times to talk to the HBase master when the 
 passed-in quorum string is wrong or the underlying HBase isn't running. 
 In that situation, sqlline will be stuck there forever. This JIRA aims to 
 fast-fail sqlline.py.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (PHOENIX-1002) Add support for % operator

2014-07-11 Thread Kyle Buzsaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kyle Buzsaki updated PHOENIX-1002:
--

Attachment: PHOENIX-1002_4.patch

Thanks for the detailed explanation, I appreciate the insight :)

Attached is a more-or-less final version of the patch. I've converted the 
ModulusFunction into ModulusExpression, matching the naming scheme of the other 
operators. I've also updated the modulus function unit tests to check the 
modulus operator instead. Finally, I've reordered and squashed several of the 
commits for better readability.

As before, this patch file is relative to my patch fixing operator precedence 
issues at PHOENIX-1075.

 Add support for % operator
 --

 Key: PHOENIX-1002
 URL: https://issues.apache.org/jira/browse/PHOENIX-1002
 Project: Phoenix
  Issue Type: New Feature
Reporter: Thomas D'Silva
 Attachments: PHOENIX-1002.patch, PHOENIX-1002_2.patch, 
 PHOENIX-1002_3.patch, PHOENIX-1002_4.patch


 Supporting the % operator would allow using sequences to generate IDs that 
 are less than a maximum number. 
 CREATE SEQUENCE foo.bar
 SELECT ((NEXT VALUE FOR foo.bar)%1000)
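The capping behavior described can be modeled outside SQL; this is an editor's illustrative Python sketch of the idea, not Phoenix's implementation:

```python
from itertools import count

def capped_ids(max_id, start=1):
    """Yield monotonically generated sequence values reduced modulo max_id,
    mimicking SELECT (NEXT VALUE FOR foo.bar) % max_id."""
    for value in count(start):
        yield value % max_id

# IDs wrap around once the sequence reaches the cap.
gen = capped_ids(1000, start=998)
first_four = [next(gen) for _ in range(4)]  # 998, 999 then wraps to 0, 1
```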





[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-11 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/1#discussion_r14834867
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/compile/CreateIndexCompiler.java ---
@@ -47,6 +51,21 @@ public MutationPlan compile(final CreateIndexStatement create) throws SQLException
     final StatementContext context = new StatementContext(statement, resolver, scan);
     ExpressionCompiler expressionCompiler = new ExpressionCompiler(context);
     List<ParseNode> splitNodes = create.getSplitNodes();
+    if (create.getIndexType() == IndexType.LOCAL) {
+        if (!splitNodes.isEmpty()) {
+            throw new SQLExceptionInfo.Builder(SQLExceptionCode.CANNOT_SPLIT_LOCAL_INDEX)
+                .build().buildException();
+        }
+        if (create.getProps() != null && create.getProps().get() != null) {
+            List<Pair<String, Object>> list = create.getProps().get();
--- End diff --

corrected.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-11 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/1#discussion_r14834896
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/compile/ExpressionCompiler.java ---
@@ -282,7 +286,7 @@ public Expression visitLeave(FunctionParseNode node, List<Expression> children)
     children = node.validate(children, context);
     Expression expression = node.create(children, context);
     ImmutableBytesWritable ptr = context.getTempPtr();
-    if (node.isStateless()) {
+    if (node.isStateless() && expression.isDeterministic()) {
--- End diff --

Yes, James. This change is already in the master branch.




[jira] [Commented] (PHOENIX-933) Local index support to Phoenix

2014-07-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14059119#comment-14059119
 ] 

ASF GitHub Bot commented on PHOENIX-933:


Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/1#discussion_r14834905
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/compile/ExpressionCompiler.java ---
@@ -320,7 +322,21 @@ public Expression visitLeave(FunctionParseNode node, List<Expression> children)
      * @throws SQLException if the column expression node does not refer to a known/unambiguous column
      */
     protected ColumnRef resolveColumn(ColumnParseNode node) throws SQLException {
-        ColumnRef ref = context.getResolver().resolveColumn(node.getSchemaName(), node.getTableName(), node.getName());
+        ColumnRef ref = null;
+        try {
+            ref = context.getResolver().resolveColumn(node.getSchemaName(), node.getTableName(), node.getName());
+        } catch (ColumnNotFoundException e) {
+            // If local index table (need to test join case here)
--- End diff --

Added the comment.


 Local index support to Phoenix
 --

 Key: PHOENIX-933
 URL: https://issues.apache.org/jira/browse/PHOENIX-933
 Project: Phoenix
  Issue Type: New Feature
Reporter: rajeshbabu

 Hindex (https://github.com/Huawei-Hadoop/hindex) provides local indexing 
 support to HBase. It stores a region-level index in a separate table, and 
 co-locates the user and index table regions with a custom load balancer.
 See http://goo.gl/phkhwC and http://goo.gl/EswlxC for more information. 
 This JIRA addresses integrating the local indexing solution into Phoenix.
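A rough sketch of the co-location idea: index rows for a data region carry that region's boundary as a prefix, so they sort next to the data they index and a custom balancer can keep the two together. This is an editor's hypothetical key layout, not the real encoding:

```python
def local_index_key(region_start_key: bytes, indexed_value: bytes,
                    data_row_key: bytes) -> bytes:
    """Hypothetical illustration: prefix each local index row with the start
    key of the data region it belongs to, so all index rows for one data
    region form a contiguous, co-locatable range.

    The separator and layout are made up for the sketch; the actual local
    index encoding differs.
    """
    sep = b"\x00"
    return region_start_key + sep + indexed_value + sep + data_row_key

k = local_index_key(b"regionA", b"col_value", b"rowkey1")
```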





[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-11 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/1#discussion_r14834954
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/compile/IndexStatementRewriter.java ---
@@ -96,6 +96,12 @@ public ParseNode visit(ColumnParseNode node) throws SQLException {

     String indexColName = IndexUtil.getIndexColumnName(dataCol);
     // Same alias as before, but use the index column name instead of the data column name
+    // TODO: add dataColRef as an alternate ColumnParseNode in the case that the index
--- End diff --

removed.




[jira] [Commented] (PHOENIX-933) Local index support to Phoenix

2014-07-11 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14059126#comment-14059126
 ] 

ASF GitHub Bot commented on PHOENIX-933:


Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/1#discussion_r14835161
  
--- Diff: phoenix-core/src/main/java/org/apache/phoenix/compile/TrackOrderPreservingExpressionCompiler.java ---
@@ -69,6 +70,7 @@
     boolean isSharedViewIndex = table.getViewIndexId() != null;
     // TODO: util for this offset, as it's computed in numerous places
     positionOffset = (isSalted ? 1 : 0) + (isMultiTenant ? 1 : 0) + (isSharedViewIndex ? 1 : 0);
+    this.isOrderPreserving = table.getIndexType() != IndexType.LOCAL;
--- End diff --

Thanks for pointing this out, James. Yes, merge sort is fine. I've made the 
changes locally.
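Since a local index orders rows only within each region, results from multiple regions have to be merge-sorted on the client. The idea can be sketched with Python's heapq (editor's illustration; Phoenix's client-side merge is Java):

```python
import heapq

# Each region scan returns rows already sorted within that region.
region_scans = [
    iter([("a", 1), ("d", 4)]),
    iter([("b", 2), ("e", 5)]),
    iter([("c", 3), ("f", 6)]),
]

# heapq.merge lazily merge-sorts the pre-sorted per-region streams
# into one globally ordered result, without buffering everything.
merged = [key for key, _ in heapq.merge(*region_scans)]
```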


 Local index support to Phoenix
 --

 Key: PHOENIX-933
 URL: https://issues.apache.org/jira/browse/PHOENIX-933
 Project: Phoenix
  Issue Type: New Feature
Reporter: rajeshbabu

 Hindex (https://github.com/Huawei-Hadoop/hindex) provides local indexing 
 support to HBase. It stores a region-level index in a separate table, and 
 co-locates the user and index table regions with a custom load balancer.
 See http://goo.gl/phkhwC and http://goo.gl/EswlxC for more information. 
 This JIRA addresses integrating the local indexing solution into Phoenix.





[jira] [Updated] (PHOENIX-1002) Add support for % operator

2014-07-11 Thread Kyle Buzsaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kyle Buzsaki updated PHOENIX-1002:
--

Attachment: PHOENIX-1002_5.patch

Looks like the division expression just lets the underlying ArithmeticException 
bubble up. I've removed my check for a 0 divisor to be consistent with that and 
revised the integration test accordingly. I've also switched the 
ptr.getLength() == 0 checks to return true, as you advised.
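The bubble-up choice can be illustrated in Python terms (Python raises ZeroDivisionError where Java raises ArithmeticException, but the consistency argument is the same: both / and % surface the identical error class, so % needs no special pre-check):

```python
def bubbled_error(op):
    """Return the name of the exception class raised by op(5, 0), mimicking
    letting the underlying arithmetic error bubble up instead of pre-checking
    the divisor."""
    try:
        op(5, 0)
        return None
    except ZeroDivisionError as e:
        return type(e).__name__

division_error = bubbled_error(lambda a, b: a / b)
modulus_error = bubbled_error(lambda a, b: a % b)
# Both operators raise the same class, so no divisor check is needed for %.
```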

For the + and - operator precedence testing, there's already a test that does 
just that in my operator precedence patch at PHOENIX-1075. I've been keeping 
the two separate for now and only uploading the patches for this issue relative 
to that patch rather than relative to master. The operator precedence patch 
will have to be merged in first, or I can include the relevant precedence 
fixing commits in this patch file instead.

 Add support for % operator
 --

 Key: PHOENIX-1002
 URL: https://issues.apache.org/jira/browse/PHOENIX-1002
 Project: Phoenix
  Issue Type: New Feature
Reporter: Thomas D'Silva
 Attachments: PHOENIX-1002.patch, PHOENIX-1002_2.patch, 
 PHOENIX-1002_3.patch, PHOENIX-1002_4.patch, PHOENIX-1002_5.patch


 Supporting the % operator would allow using sequences to generate IDs that 
 are less than a maximum number. 
 CREATE SEQUENCE foo.bar
 SELECT ((NEXT VALUE FOR foo.bar)%1000)





[jira] [Commented] (PHOENIX-1072) Fast fail sqlline.py when passed a wrong quorum string or hbase cluster hasn't started yet

2014-07-11 Thread Jeffrey Zhong (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14059177#comment-14059177
 ] 

Jeffrey Zhong commented on PHOENIX-1072:


{quote}
if the client can't reach the master, does the error message still show up
{quote}
Yes, the error message will still show up as normal. The child process is 
killed only when the parent process (sqlline.py) dies by ctrl-c; in all other 
cases no action is taken. The issue before was that even when sqlline can't 
connect to Phoenix and sqlline.py is killed by ctrl-c, the sqlline java child 
process still runs for quite a while and has to be killed manually.
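The kill-the-child behavior described can be sketched like this (editor's illustrative Python, not the actual sqlline.py patch):

```python
import signal
import subprocess
import sys

def launch_child(cmd):
    """Start the java child process and arrange for it to be terminated when
    the parent receives ctrl-c, so it does not linger after sqlline.py dies."""
    child = subprocess.Popen(cmd)

    def on_sigint(signum, frame):
        child.terminate()  # take the child down with the parent
        child.wait()
        sys.exit(130)      # conventional exit status for SIGINT

    signal.signal(signal.SIGINT, on_sigint)
    return child
```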

{quote}
It'd be nice if hbase had the ability to specify retry info when making a 
connection, as a parameter rather than just globally through the Config. If you 
agree, maybe you could file a JIRA
{quote}
Do you mean an HBase JIRA? Currently we can do something similar by passing a 
temporary conf setting & restoring it afterwards (not elegant though), but 
there are issues:
1) the created connection (with different retries) might be cached by HBase 
and reused later
2) test cases need a long wait until the underlying hbase test cluster is 
ready.

There are some issues inside HBase getMaster, which should do smart retries 
and fail fast when zookeeper isn't available or the parent znode is wrong, 
etc. Let me file a bug on that.
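The "temporary conf setting & restore" workaround, as a generic pattern (editor's sketch over a plain dict; the real HBase Configuration API differs, and the key name is only illustrative):

```python
from contextlib import contextmanager

@contextmanager
def temporary_setting(conf, key, value):
    """Override one config entry, then restore the old value on exit,
    mimicking the inelegant 'set retries low, connect, set it back' trick."""
    missing = object()
    old = conf.get(key, missing)
    conf[key] = value
    try:
        yield conf
    finally:
        if old is missing:
            del conf[key]
        else:
            conf[key] = old

conf = {"hbase.client.retries.number": 35}
with temporary_setting(conf, "hbase.client.retries.number", 1):
    pass  # connect with fast-fail retries here
# conf is back to its original value afterwards
```

The drawback mentioned above still applies: a connection created under the temporary value may be cached and reused with the wrong retry count.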



  





 Fast fail sqlline.py when passed a wrong quorum string or hbase cluster 
 hasn't started yet 
 ---

 Key: PHOENIX-1072
 URL: https://issues.apache.org/jira/browse/PHOENIX-1072
 Project: Phoenix
  Issue Type: Improvement
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
 Fix For: 5.0.0, 3.1, 4.1

 Attachments: phoenix-1072-v1.patch, phoenix-1072.patch


 Currently sqlline.py will retry 35 times to talk to the HBase master when the 
 passed-in quorum string is wrong or the underlying HBase isn't running. 
 In that situation, sqlline will be stuck there forever. This JIRA aims to 
 fast-fail sqlline.py.





[jira] [Commented] (PHOENIX-1002) Add support for % operator

2014-07-11 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14059248#comment-14059248
 ] 

James Taylor commented on PHOENIX-1002:
---

+1. Nice work, [~kbuzsaki]. Would it be possible for someone to commit this 
please? [~anoop.hbase], [~stoens], [~ramkrishna]? This needs to go in *after* 
PHOENIX-1075.

 Add support for % operator
 --

 Key: PHOENIX-1002
 URL: https://issues.apache.org/jira/browse/PHOENIX-1002
 Project: Phoenix
  Issue Type: New Feature
Reporter: Thomas D'Silva
 Attachments: PHOENIX-1002.patch, PHOENIX-1002_2.patch, 
 PHOENIX-1002_3.patch, PHOENIX-1002_4.patch, PHOENIX-1002_5.patch


 Supporting the % operator would allow using sequences to generate IDs that 
 are less than a maximum number. 
 CREATE SEQUENCE foo.bar
 SELECT ((NEXT VALUE FOR foo.bar)%1000)





[jira] [Commented] (PHOENIX-1072) Fast fail sqlline.py when passed a wrong quorum string or hbase cluster hasn't started yet

2014-07-11 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14059251#comment-14059251
 ] 

James Taylor commented on PHOENIX-1072:
---

Yes, an HBase JIRA. You'd know better than I what the right solution is, but it 
just seems to be more difficult than it needs to be to do what we want to do 
(which is pretty basic). 

 Fast fail sqlline.py when passed a wrong quorum string or hbase cluster 
 hasn't started yet 
 ---

 Key: PHOENIX-1072
 URL: https://issues.apache.org/jira/browse/PHOENIX-1072
 Project: Phoenix
  Issue Type: Improvement
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
 Fix For: 5.0.0, 3.1, 4.1

 Attachments: phoenix-1072-v1.patch, phoenix-1072.patch


 Currently sqlline.py will retry 35 times to talk to the HBase master when the 
 passed-in quorum string is wrong or the underlying HBase isn't running. 
 In that situation, sqlline will be stuck there forever. This JIRA aims to 
 fast-fail sqlline.py.





[jira] [Commented] (PHOENIX-1077) IN list of row value constructors doesn't work for tenant specific views

2014-07-11 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14059340#comment-14059340
 ] 

Samarth Jain commented on PHOENIX-1077:
---

{code}
I did some more digging on this and found a few more failure scenarios:

1) Table - Salted. Query - IN list of RVCs. Result - fails with exception. 

Details:

Table DDL - CREATE TABLE t (pk1 varchar(5) NOT NULL, pk2 varchar(5) NOT NULL, 
pk3 INTEGER NOT NULL, c1 INTEGER constraint pk primary key (pk1,pk2,pk3)) 
SALT_BUCKETS=4

Query - select pk1, pk2, pk3 from t WHERE (pk1, pk2, pk3) IN ((?, ?, ?), (?, ?, 
?))

Exception:
java.lang.ArrayIndexOutOfBoundsException: 1
    at org.apache.phoenix.schema.ValueSchema.getField(ValueSchema.java:300)
    at org.apache.phoenix.util.ScanUtil.setKey(ScanUtil.java:260)
    at org.apache.phoenix.compile.ScanRanges.getPointKeys(ScanRanges.java:185)
    at org.apache.phoenix.compile.ScanRanges.create(ScanRanges.java:61)
    at org.apache.phoenix.compile.WhereOptimizer.pushKeyExpressionsToScan(WhereOptimizer.java:224)
    at org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:105)
    at org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:260)
    at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:128)
    at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:264)
    at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:1)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:199)
    at org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:157)
    at org.apache.phoenix.end2end.RowValueConstructorIT.testInListOfRVC5(RowValueConstructorIT.java:1078)

2) Most likely related to 1). Table - multi-tenant and salted. Query - IN list 
of RVCs. Result - fails with exception.

Details:
Base Table DDL - CREATE TABLE t (tenantId varchar(5) NOT NULL, pk2 varchar(5) 
NOT NULL, pk3 INTEGER NOT NULL, c1 INTEGER constraint pk primary key 
(tenantId,pk2,pk3)) SALT_BUCKETS=4, MULTI_TENANT=true

Tenant View DDL - CREATE VIEW t_view (tenant_col VARCHAR) AS SELECT * FROM t

Query using global connection : select pk2, pk3 from t WHERE (tenantId, pk2, 
pk3) IN ((?, ?, ?), (?, ?, ?))

Stacktrace:

java.lang.ArrayIndexOutOfBoundsException: 1
    at org.apache.phoenix.schema.ValueSchema.getField(ValueSchema.java:300)
    at org.apache.phoenix.util.ScanUtil.setKey(ScanUtil.java:260)
    at org.apache.phoenix.compile.ScanRanges.getPointKeys(ScanRanges.java:185)
    at org.apache.phoenix.compile.ScanRanges.create(ScanRanges.java:61)
    at org.apache.phoenix.compile.WhereOptimizer.pushKeyExpressionsToScan(WhereOptimizer.java:224)
    at org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:105)
    at org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:260)
    at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:128)
    at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:264)
    at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:1)
    at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:199)
    at org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:157)
    at org.apache.phoenix.end2end.RowValueConstructorIT.testInListOfRVC4(RowValueConstructorIT.java:1042)

3) Table type - multi-tenant and salted. Query - IN list of RVCs. Result - not 
all rows returned.

Base table DDL - CREATE TABLE t (tenantId varchar(5) NOT NULL, pk2 varchar(5) 
NOT NULL, pk3 INTEGER NOT NULL, c1 INTEGER constraint pk primary key 
(tenantId,pk2,pk3)) MULTI_TENANT=true, SALT_BUCKETS=4

Tenant View DDL - CREATE VIEW t_view (tenant_col VARCHAR) AS SELECT * FROM t

Upserts:
upsert into t_view (pk2, pk3, c1) values ('helo1', 1, 1)
upsert into t_view (pk2, pk3, c1) values ('helo2', 2, 2)
upsert into t_view (pk2, pk3, c1) values ('helo3', 3, 3)
upsert into t_view (pk2, pk3, c1) values ('helo4', 4, 4)
upsert into t_view (pk2, pk3, c1) values ('helo5', 5, 5)

Query using tenant specific connection - select pk2, pk3 from t_view WHERE 
(pk2, pk3) IN ( ('helo3',  3),  ('helo5',  5) ) ORDER BY pk2

Result - Only one row returned - helo3, 3.

This likely has to do with salting, because on removing SALT_BUCKETS=4 from the 
base table DDL all the expected rows are returned.
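For context, salting prepends a bucket byte computed by hashing the row key, so point keys built from an RVC IN list must allow for that extra leading byte or schema field offsets end up off by one. A rough editor's sketch (the hash is a stand-in, not Phoenix's actual one):

```python
def salted_key(row_key: bytes, salt_buckets: int) -> bytes:
    """Prepend a bucket byte derived from hashing the key, as SALT_BUCKETS
    does. The bucket computation here is illustrative only; the point is the
    extra leading byte that key-building code has to account for."""
    bucket = sum(row_key) % salt_buckets  # stand-in for Phoenix's hash
    return bytes([bucket]) + row_key
```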

4) The one Eli pointed above:

CREATE TABLE in_test ( user VARCHAR, tenant_id VARCHAR(5) NOT 
NULL,tenant_type_id VARCHAR(3) NOT NULL,  id INTEGER NOT NULL CONSTRAINT pk 
PRIMARY KEY (tenant_id, tenant_type_id, id))

upsert into in_test 

[jira] [Updated] (PHOENIX-1077) Bugs when executing an IN list of Row Value Constructors.

2014-07-11 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-1077:
--

Assignee: James Taylor  (was: Eli Levine)
 Summary: Bugs when executing an IN list of Row Value Constructors.  (was: 
IN list of row value constructors doesn't work for tenant specific views)

 Bugs when executing an IN list of Row Value Constructors.
 -

 Key: PHOENIX-1077
 URL: https://issues.apache.org/jira/browse/PHOENIX-1077
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0, 4.0.0, 5.0.0
Reporter: Samarth Jain
Assignee: James Taylor

 IN list of row value constructors doesn't work when queried against tenant 
 views for multi-tenant phoenix tables. Consider this test (added in 
 TenantSpecificTablesDMLIT.java)
 {code}
 public void testRVCOnTenantSpecificTable() throws Exception {
     Connection conn = nextConnection(PHOENIX_JDBC_TENANT_SPECIFIC_URL);
     try {
         conn.setAutoCommit(true);
         conn.createStatement().executeUpdate("upsert into " + TENANT_TABLE_NAME + " (id, user) values (1, 'BonA')");
         conn.createStatement().executeUpdate("upsert into " + TENANT_TABLE_NAME + " (id, user) values (2, 'BonB')");
         conn.createStatement().executeUpdate("upsert into " + TENANT_TABLE_NAME + " (id, user) values (3, 'BonC')");
         conn.close();
         conn = nextConnection(PHOENIX_JDBC_TENANT_SPECIFIC_URL);
         PreparedStatement stmt = conn.prepareStatement("select id from " + TENANT_TABLE_NAME + " WHERE (id, user) IN ((?, ?), (?, ?), (?, ?))");
         stmt.setInt(1, 1);
         stmt.setString(2, "BonA");
         stmt.setInt(3, 2);
         stmt.setString(4, "BonB");
         stmt.setInt(5, 3);
         stmt.setString(6, "BonC");
         ResultSet rs = stmt.executeQuery();
         assertTrue(rs.next());
         assertEquals(1, rs.getInt(1));
         assertTrue(rs.next());
         assertEquals(2, rs.getInt(1));
         assertTrue(rs.next());
         assertEquals(3, rs.getInt(1));
         assertFalse(rs.next());
     }
     finally {
         conn.close();
     }
 }
 {code}
 Replacing TENANT_TABLE_NAME with PARENT_TABLE_NAME (that is, the base table), 
 the test works fine.





[jira] [Commented] (PHOENIX-1081) CPU usage 100% With phoenix

2014-07-11 Thread yang ming (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14059493#comment-14059493
 ] 

yang ming commented on PHOENIX-1081:


IPC Server handler 69 on 60020 daemon prio=10 tid=0x4def5000 
nid=0x31ed runnable [0x48c36000]
   java.lang.Thread.State: RUNNABLE
    at org.apache.phoenix.filter.SkipScanFilter.navigate(SkipScanFilter.java:288)
    at org.apache.phoenix.filter.SkipScanFilter.filterKeyValue(SkipScanFilter.java:112)
    at org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:354)
{color:red}
    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:390)
{color}
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:143)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:4047)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:4123)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3990)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3980)
    at org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.scanUnordered(GroupedAggregateRegionObserver.java:384)
    at org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.doPostScannerOpen(GroupedAggregateRegionObserver.java:133)
    at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:66)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1316)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.internalOpenScanner(HRegionServer.java:2588)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2556)
    at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:323)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1434)

 CPU usage 100% With phoenix 
 

 Key: PHOENIX-1081
 URL: https://issues.apache.org/jira/browse/PHOENIX-1081
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: yang ming
Priority: Critical
 Attachments: JMX.jpg, jstat.jpg, the jstack of all threads, the 
 jstack of thread 12725.jpg, the jstack of thread 12748.jpg, the threads of 
 regionserver process.jpg


 The concurrency of the system is not high, but CPU usage often goes up to 100%.
 I have stopped the system, but the regionserver's CPU usage is still high.
 What can cause this problem?
 table row count:6000 million
 table ddl:
 create table if not exists summary
 (
 videoid integer not null,
 date date not null,
 platform varchar not null,
 device varchar not null,
 systemgroup varchar not null,
 system varchar not null,
 vv bigint,
 ts bigint,
 up bigint,
 down bigint,
 comment bigint,
 favori bigint,
 favord bigint,
 quote bigint,
 reply bigint
 constraint pk primary key (videoid, date,platform, device, systemgroup,system)
 )salt_buckets = 30,versions=1,compression='snappy';
 query 1:
 select sum(vv) as sumvv,sum(comment) as sumcomment,sum(up) as sumup,sum(down) 
 as sumdown,sum(reply) as sumreply,count(*) as count from summary(reply 
 bigint) where videoid 
 in(137102991,151113895,171559204,171559439,171573932,171573932,171573932,171574082,171574082,171574164,171677219,171794335,171902734,172364368,172475141,172700554,172700554,172700554,172716705,172784258,172835778,173112067,173165316,173165316,173379601,173448315,173503961,173692664,173911358,174077089,174099017,174349633,174349877,174651474,174651474,174759297,174883566,174883566,174987670,174987670,175131298)
  and date >= to_date('2013-09-01','yyyy-MM-dd') and 
  date <= to_date('2014-07-07','yyyy-MM-dd')





[jira] [Commented] (PHOENIX-1081) CPU usage 100% With phoenix

2014-07-11 Thread yang ming (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14059499#comment-14059499
 ] 

yang ming commented on PHOENIX-1081:


HBase version is 0.94.19. I added some debug logging; maybe the endless loop is 
here:
    LOOP: while((kv = this.heap.peek()) != null) {
      // Check that the heap gives us KVs in an increasing order.
      assert prevKV == null || comparator == null || comparator.compare(prevKV, kv) <= 0 :
        "Key " + prevKV + " followed by a smaller key " + kv + " in cf " + store;
      prevKV = kv;
{color:red}
      ScanQueryMatcher.MatchCode qcode = matcher.match(kv);
{color}
      switch(qcode) {
        case INCLUDE:
        case INCLUDE_AND_SEEK_NEXT_ROW:
        case INCLUDE_AND_SEEK_NEXT_COL:

          Filter f = matcher.getFilter();
          outResult.add(f == null ? kv : f.transform(kv));
          count++;
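One way a loop like this can pin a CPU: if the filter's navigate() keeps producing a hint that does not advance past the current key, the scanner re-evaluates the same KV forever. A toy model of that hazard (editor's illustration only, not the HBase code):

```python
def scan(keys, next_hint):
    """Toy scanner loop: next_hint(i) must return an index greater than i,
    or the loop never advances. A step cap makes the spin observable
    instead of hanging the demo."""
    out, i, steps = [], 0, 0
    while i < len(keys):
        steps += 1
        if steps > 10 * len(keys):  # spin guard for the demo only
            return out, "spinning"
        out.append(keys[i])
        i = next_hint(i)
    return out, "done"

good = scan([1, 2, 3], lambda i: i + 1)  # hint advances: terminates
bad = scan([1, 2, 3], lambda i: i)       # hint never advances: spins
```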

 CPU usage 100% With phoenix 
 

 Key: PHOENIX-1081
 URL: https://issues.apache.org/jira/browse/PHOENIX-1081
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: yang ming
Priority: Critical
 Attachments: JMX.jpg, jstat.jpg, the jstack of all threads, the 
 jstack of thread 12725.jpg, the jstack of thread 12748.jpg, the threads of 
 regionserver process.jpg


 The concurrency of the system is not high, but CPU usage often goes up to 100%.
 I have stopped the system, but the regionserver's CPU usage is still high.
 What can cause this problem?
 table row count:6000 million
 table ddl:
 create table if not exists summary
 (
 videoid integer not null,
 date date not null,
 platform varchar not null,
 device varchar not null,
 systemgroup varchar not null,
 system varchar not null,
 vv bigint,
 ts bigint,
 up bigint,
 down bigint,
 comment bigint,
 favori bigint,
 favord bigint,
 quote bigint,
 reply bigint
 constraint pk primary key (videoid, date,platform, device, systemgroup,system)
 )salt_buckets = 30,versions=1,compression='snappy';
 query 1:
 select sum(vv) as sumvv,sum(comment) as sumcomment,sum(up) as sumup,sum(down) 
 as sumdown,sum(reply) as sumreply,count(*) as count from summary(reply 
 bigint) where videoid 
 in(137102991,151113895,171559204,171559439,171573932,171573932,171573932,171574082,171574082,171574164,171677219,171794335,171902734,172364368,172475141,172700554,172700554,172700554,172716705,172784258,172835778,173112067,173165316,173165316,173379601,173448315,173503961,173692664,173911358,174077089,174099017,174349633,174349877,174651474,174651474,174759297,174883566,174883566,174987670,174987670,175131298)
  and date >= to_date('2013-09-01','yyyy-MM-dd') and 
  date <= to_date('2014-07-07','yyyy-MM-dd')





[jira] [Comment Edited] (PHOENIX-1081) CPU usage 100% With phoenix

2014-07-11 Thread yang ming (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14059493#comment-14059493
 ] 

yang ming edited comment on PHOENIX-1081 at 7/11/14 11:37 PM:
--

IPC Server handler 69 on 60020 daemon prio=10 tid=0x4def5000 
nid=0x31ed runnable [0x48c36000]
   java.lang.Thread.State: RUNNABLE
    at org.apache.phoenix.filter.SkipScanFilter.navigate(SkipScanFilter.java:288)
    at org.apache.phoenix.filter.SkipScanFilter.filterKeyValue(SkipScanFilter.java:112)
{color:blue}    at org.apache.hadoop.hbase.regionserver.ScanQueryMatcher.match(ScanQueryMatcher.java:354){color}
{color:red}    at org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:390){color}
    at org.apache.hadoop.hbase.regionserver.KeyValueHeap.next(KeyValueHeap.java:143)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.populateResult(HRegion.java:4047)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:4123)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3990)
    at org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:3980)
    at org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.scanUnordered(GroupedAggregateRegionObserver.java:384)
    at org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.doPostScannerOpen(GroupedAggregateRegionObserver.java:133)
    at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:66)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1316)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.internalOpenScanner(HRegionServer.java:2588)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2556)
    at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:323)
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1434)



 CPU usage 100% With phoenix 
 

 Key: PHOENIX-1081
 URL: https://issues.apache.org/jira/browse/PHOENIX-1081
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: yang ming

[jira] [Comment Edited] (PHOENIX-1081) CPU usage 100% With phoenix

2014-07-11 Thread yang ming (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14059499#comment-14059499
 ] 

yang ming edited comment on PHOENIX-1081 at 7/11/14 11:37 PM:
--

HBase version is 0.94.19. I added some debug logging; the endless loop may be 
here.
{color:red}at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:390){color}
   LOOP: while((kv = this.heap.peek()) != null) {
// Check that the heap gives us KVs in an increasing order.
assert prevKV == null || comparator == null || 
comparator.compare(prevKV, kv) <= 0 :
  "Key " + prevKV + " followed by a " + "smaller key " + kv + " in cf " 
+ store;
prevKV = kv;
{color:red}
ScanQueryMatcher.MatchCode qcode = matcher.match(kv);
{color}
switch(qcode) {
  case INCLUDE:
  case INCLUDE_AND_SEEK_NEXT_ROW:
  case INCLUDE_AND_SEEK_NEXT_COL:

Filter f = matcher.getFilter();
outResult.add(f == null ? kv : f.transform(kv));
count++;
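The suspicion in this thread is that the scanner loop spins because the skip-scan filter's seek answer never advances the scan position. A hedged, self-contained caricature of that failure mode (the `navigate` method and the step guard below are mine, not HBase's code) shows why a non-advancing hint pins a handler thread at 100% CPU:

```java
public class SeekLoopSketch {
    // Simulates a skip-scan filter's navigate(): given the current scan
    // position, return the position to seek to next. A buggy filter can
    // hand back the current position, so the scanner never advances.
    static int navigate(int current, boolean buggy) {
        return buggy ? current : current + 1;
    }

    public static void main(String[] args) {
        int pos = 0;
        int steps = 0;
        while (pos < 5 && steps < 1000) { // guard so this demo terminates
            pos = navigate(pos, false);   // correct filter: always advances
            steps++;
        }
        System.out.println("correct filter finished in " + steps + " steps");
        // With navigate(pos, true) the position never moves and only the
        // step guard ends the loop; the real StoreScanner has no such
        // guard, which matches the RUNNABLE-at-100%-CPU jstack above.
    }
}
```

The guard exists only so the demo halts; the point is that progress in the real loop depends entirely on the filter moving the key forward.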



 CPU usage 100% With phoenix 
 

 Key: PHOENIX-1081
 URL: https://issues.apache.org/jira/browse/PHOENIX-1081
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: yang ming
Priority: Critical
 Attachments: JMX.jpg, jstat.jpg, the jstack of all threads, the 
 jstack of thread 12725.jpg, the jstack of thread 12748.jpg, the threads of 
 regionserver process.jpg


 The concurrency of the system is not high, but CPU usage often goes up to 100%.
 I stopped the system, but the regionserver's CPU usage is still high.
 What can cause this problem?
 table row count:6000 million
 table ddl:
 create table if not exists summary
 (
 videoid integer not null,
 date date not null,
 platform varchar not null,
 device varchar not null,
 systemgroup varchar not null,
 system varchar not null,
 vv bigint,
 ts bigint,
 up bigint,
 down bigint,
 comment bigint,
 favori bigint,
 favord bigint,
 quote bigint,
 reply bigint
 constraint pk primary key (videoid, date,platform, device, systemgroup,system)
 )salt_buckets = 30,versions=1,compression='snappy';
 query 1:
 select sum(vv) as sumvv,sum(comment) as sumcomment,sum(up) as sumup,sum(down) 
 as sumdown,sum(reply) as sumreply,count(*) as count from summary(reply 
 bigint) where videoid 
 in(137102991,151113895,171559204,171559439,171573932,171573932,171573932,171574082,171574082,171574164,171677219,171794335,171902734,172364368,172475141,172700554,172700554,172700554,172716705,172784258,172835778,173112067,173165316,173165316,173379601,173448315,173503961,173692664,173911358,174077089,174099017,174349633,174349877,174651474,174651474,174759297,174883566,174883566,174987670,174987670,175131298)
  and date >= to_date('2013-09-01','yyyy-MM-dd') and 
 date <= to_date('2014-07-07','yyyy-MM-dd')



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (PHOENIX-1081) CPU usage 100% With phoenix

2014-07-11 Thread yang ming (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14059499#comment-14059499
 ] 

yang ming edited comment on PHOENIX-1081 at 7/11/14 11:39 PM:
--

HBase version is 0.94.19. I added some debug logging; the endless loop may be 
here.
{color:red}at 
org.apache.hadoop.hbase.regionserver.StoreScanner.next(StoreScanner.java:390){color}
And the problem may be in this method.
{color:blue}at org.apache.phoenix.filter.SkipScanFilter.navigate{color}
   LOOP: while((kv = this.heap.peek()) != null) {
// Check that the heap gives us KVs in an increasing order.
assert prevKV == null || comparator == null || 
comparator.compare(prevKV, kv) <= 0 :
  "Key " + prevKV + " followed by a " + "smaller key " + kv + " in cf " 
+ store;
prevKV = kv;
{color:red}
ScanQueryMatcher.MatchCode qcode = matcher.match(kv);
{color}
switch(qcode) {
  case INCLUDE:
  case INCLUDE_AND_SEEK_NEXT_ROW:
  case INCLUDE_AND_SEEK_NEXT_COL:

Filter f = matcher.getFilter();
outResult.add(f == null ? kv : f.transform(kv));
count++;



 CPU usage 100% With phoenix 
 

 Key: PHOENIX-1081
 URL: https://issues.apache.org/jira/browse/PHOENIX-1081
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: yang ming
Priority: Critical
 Attachments: JMX.jpg, jstat.jpg, the jstack of all threads, the 
 jstack of thread 12725.jpg, the jstack of thread 12748.jpg, the threads of 
 regionserver process.jpg


 The concurrency of the system is not high, but CPU usage often goes up to 100%.
 I stopped the system, but the regionserver's CPU usage is still high.
 What can cause this problem?
 table row count:6000 million
 table ddl:
 create table if not exists summary
 (
 videoid integer not null,
 date date not null,
 platform varchar not null,
 device varchar not null,
 systemgroup varchar not null,
 system varchar not null,
 vv bigint,
 ts bigint,
 up bigint,
 down bigint,
 comment bigint,
 favori bigint,
 favord bigint,
 quote bigint,
 reply bigint
 constraint pk primary key (videoid, date,platform, device, systemgroup,system)
 )salt_buckets = 30,versions=1,compression='snappy';
 query 1:
 select sum(vv) as sumvv,sum(comment) as sumcomment,sum(up) as sumup,sum(down) 
 as sumdown,sum(reply) as sumreply,count(*) as count from summary(reply 
 bigint) where videoid 
 in(137102991,151113895,171559204,171559439,171573932,171573932,171573932,171574082,171574082,171574164,171677219,171794335,171902734,172364368,172475141,172700554,172700554,172700554,172716705,172784258,172835778,173112067,173165316,173165316,173379601,173448315,173503961,173692664,173911358,174077089,174099017,174349633,174349877,174651474,174651474,174759297,174883566,174883566,174987670,174987670,175131298)
  and date >= to_date('2013-09-01','yyyy-MM-dd') and 
 date <= to_date('2014-07-07','yyyy-MM-dd')



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (PHOENIX-1077) Exception thrown when executing an IN list of Row Value Constructors against salted tables.

2014-07-11 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-1077:
--

Description: 
1) Table - Salted. Query - IN list of RVCs. Result - fails with exception. 

Details:

Table DDL - CREATE TABLE t (pk1 varchar(5) NOT NULL, pk2 varchar(5) NOT NULL, 
pk3 INTEGER NOT NULL, c1 INTEGER constraint pk primary key (pk1,pk2,pk3)) 
SALT_BUCKETS=4

Query - select pk1, pk2, pk3 from t WHERE (pk1, pk2, pk3) IN ((?, ?, ?), (?, ?, 
?))

Exception:
java.lang.ArrayIndexOutOfBoundsException: 1
at org.apache.phoenix.schema.ValueSchema.getField(ValueSchema.java:300)
at org.apache.phoenix.util.ScanUtil.setKey(ScanUtil.java:260)
at 
org.apache.phoenix.compile.ScanRanges.getPointKeys(ScanRanges.java:185)
at org.apache.phoenix.compile.ScanRanges.create(ScanRanges.java:61)
at 
org.apache.phoenix.compile.WhereOptimizer.pushKeyExpressionsToScan(WhereOptimizer.java:224)
at 
org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:105)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:260)
at 
org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:128)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:264)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:1)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:199)
at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:157)
at 
org.apache.phoenix.end2end.RowValueConstructorIT.testInListOfRVC5(RowValueConstructorIT.java:1078)

2) Most likely related to 1)  Table - multi-tenant and salted. Query - IN list 
of RVCs. Result - Fails with exception.  

Details:
Base Table DDL - CREATE TABLE t (tenantId varchar(5) NOT NULL, pk2 varchar(5) 
NOT NULL, pk3 INTEGER NOT NULL, c1 INTEGER constraint pk primary key 
(tenantId,pk2,pk3)) SALT_BUCKETS=4, MULTI_TENANT=true

Tenant View DDL - CREATE VIEW t_view (tenant_col VARCHAR) AS SELECT * FROM t

Query using global connection : select pk2, pk3 from t WHERE (tenantId, pk2, 
pk3) IN ((?, ?, ?), (?, ?, ?))

Stacktrace:

java.lang.ArrayIndexOutOfBoundsException: 1
at org.apache.phoenix.schema.ValueSchema.getField(ValueSchema.java:300)
at org.apache.phoenix.util.ScanUtil.setKey(ScanUtil.java:260)
at 
org.apache.phoenix.compile.ScanRanges.getPointKeys(ScanRanges.java:185)
at org.apache.phoenix.compile.ScanRanges.create(ScanRanges.java:61)
at 
org.apache.phoenix.compile.WhereOptimizer.pushKeyExpressionsToScan(WhereOptimizer.java:224)
at 
org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:105)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:260)
at 
org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:128)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:264)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:1)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:199)
at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:157)
at 
org.apache.phoenix.end2end.RowValueConstructorIT.testInListOfRVC4(RowValueConstructorIT.java:1042)

  was:
IN list of row value constructors doesn't work when queried against tenant 
views for multi-tenant phoenix tables. Consider this test (added in 
TenantSpecificTablesDMLIT.java)

{code}
public void testRVCOnTenantSpecificTable() throws Exception {
    Connection conn = nextConnection(PHOENIX_JDBC_TENANT_SPECIFIC_URL);
    try {
        conn.setAutoCommit(true);
        conn.createStatement().executeUpdate("upsert into " + 
TENANT_TABLE_NAME + " (id, user) values (1, 'BonA')");
        conn.createStatement().executeUpdate("upsert into " + 
TENANT_TABLE_NAME + " (id, user) values (2, 'BonB')");
        conn.createStatement().executeUpdate("upsert into " + 
TENANT_TABLE_NAME + " (id, user) values (3, 'BonC')");

        conn.close();

        conn = nextConnection(PHOENIX_JDBC_TENANT_SPECIFIC_URL);
        PreparedStatement stmt = conn.prepareStatement("select id from " + 
TENANT_TABLE_NAME + " WHERE (id, user) IN ((?, ?), (?, ?), (?, ?))");
        stmt.setInt(1, 1);
        stmt.setString(2, "BonA");
        stmt.setInt(3, 2);
        stmt.setString(4, "BonB");
        stmt.setInt(5, 3);
        stmt.setString(6, "BonC");
        ResultSet rs = stmt.executeQuery();
        assertTrue(rs.next());
        assertEquals(1, rs.getInt(1));
        assertTrue(rs.next());
 

[jira] [Updated] (PHOENIX-1077) Exception thrown when executing an IN list of Row Value Constructors against salted tables.

2014-07-11 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-1077:
--

Description: 
{code}
1) Table - Salted. Query - IN list of RVCs. Result - fails with exception. 

Details:

Table DDL - CREATE TABLE t (pk1 varchar(5) NOT NULL, pk2 varchar(5) NOT NULL, 
pk3 INTEGER NOT NULL, c1 INTEGER constraint pk primary key (pk1,pk2,pk3)) 
SALT_BUCKETS=4

Query - select pk1, pk2, pk3 from t WHERE (pk1, pk2, pk3) IN ((?, ?, ?), (?, ?, 
?))

Exception:
java.lang.ArrayIndexOutOfBoundsException: 1
at org.apache.phoenix.schema.ValueSchema.getField(ValueSchema.java:300)
at org.apache.phoenix.util.ScanUtil.setKey(ScanUtil.java:260)
at 
org.apache.phoenix.compile.ScanRanges.getPointKeys(ScanRanges.java:185)
at org.apache.phoenix.compile.ScanRanges.create(ScanRanges.java:61)
at 
org.apache.phoenix.compile.WhereOptimizer.pushKeyExpressionsToScan(WhereOptimizer.java:224)
at 
org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:105)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:260)
at 
org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:128)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:264)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:1)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:199)
at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:157)
at 
org.apache.phoenix.end2end.RowValueConstructorIT.testInListOfRVC5(RowValueConstructorIT.java:1078)


2) Most likely related to 1)  Table - multi-tenant and salted. Query - IN list 
of RVCs. Result - Fails with exception.  

Details:
Base Table DDL - CREATE TABLE t (tenantId varchar(5) NOT NULL, pk2 varchar(5) 
NOT NULL, pk3 INTEGER NOT NULL, c1 INTEGER constraint pk primary key 
(tenantId,pk2,pk3)) SALT_BUCKETS=4, MULTI_TENANT=true

Tenant View DDL - CREATE VIEW t_view (tenant_col VARCHAR) AS SELECT * FROM t

Query using global connection : select pk2, pk3 from t WHERE (tenantId, pk2, 
pk3) IN ((?, ?, ?), (?, ?, ?))

Stacktrace:

java.lang.ArrayIndexOutOfBoundsException: 1
at org.apache.phoenix.schema.ValueSchema.getField(ValueSchema.java:300)
at org.apache.phoenix.util.ScanUtil.setKey(ScanUtil.java:260)
at 
org.apache.phoenix.compile.ScanRanges.getPointKeys(ScanRanges.java:185)
at org.apache.phoenix.compile.ScanRanges.create(ScanRanges.java:61)
at 
org.apache.phoenix.compile.WhereOptimizer.pushKeyExpressionsToScan(WhereOptimizer.java:224)
at 
org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:105)
at 
org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:260)
at 
org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:128)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:264)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:1)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:199)
at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:157)
at 
org.apache.phoenix.end2end.RowValueConstructorIT.testInListOfRVC4(RowValueConstructorIT.java:1042)

{code}

[jira] [Updated] (PHOENIX-1077) Exception thrown when executing an IN list of Row Value Constructors against salted tables.

2014-07-11 Thread Kyle Buzsaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kyle Buzsaki updated PHOENIX-1077:
--

Attachment: PHOENIX-1077.patch

Attaching fix. Please commit, [~elilevine] or [~jamestaylor].

 Exception thrown when executing an IN list of Row Value Constructors against 
 salted tables.
 ---

 Key: PHOENIX-1077
 URL: https://issues.apache.org/jira/browse/PHOENIX-1077
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0, 4.0.0, 5.0.0
Reporter: Samarth Jain
Assignee: Kyle Buzsaki
 Attachments: PHOENIX-1077.patch


 {code}
 1) Table - Salted. Query - IN list of RVCs. Result - fails with exception. 
 Details:
 Table DDL - CREATE TABLE t (pk1 varchar(5) NOT NULL, pk2 varchar(5) NOT NULL, 
 pk3 INTEGER NOT NULL, c1 INTEGER constraint pk primary key (pk1,pk2,pk3)) 
 SALT_BUCKETS=4
 Query - select pk1, pk2, pk3 from t WHERE (pk1, pk2, pk3) IN ((?, ?, ?), (?, 
 ?, ?))
 Exception:
 java.lang.ArrayIndexOutOfBoundsException: 1
   at org.apache.phoenix.schema.ValueSchema.getField(ValueSchema.java:300)
   at org.apache.phoenix.util.ScanUtil.setKey(ScanUtil.java:260)
   at 
 org.apache.phoenix.compile.ScanRanges.getPointKeys(ScanRanges.java:185)
   at org.apache.phoenix.compile.ScanRanges.create(ScanRanges.java:61)
   at 
 org.apache.phoenix.compile.WhereOptimizer.pushKeyExpressionsToScan(WhereOptimizer.java:224)
   at 
 org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:105)
   at 
 org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:260)
   at 
 org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:128)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:264)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:1)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:199)
   at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:157)
   at 
 org.apache.phoenix.end2end.RowValueConstructorIT.testInListOfRVC5(RowValueConstructorIT.java:1078)
 2) Most likely related to 1)  Table - multi-tenant and salted. Query - IN 
 list of RVCs. Result - Fails with exception.  
 Details:
 Base Table DDL - CREATE TABLE t (tenantId varchar(5) NOT NULL, pk2 varchar(5) 
 NOT NULL, pk3 INTEGER NOT NULL, c1 INTEGER constraint pk primary key 
 (tenantId,pk2,pk3)) SALT_BUCKETS=4, MULTI_TENANT=true
 Tenant View DDL - CREATE VIEW t_view (tenant_col VARCHAR) AS SELECT * FROM t
 Query using global connection : select pk2, pk3 from t WHERE (tenantId, pk2, 
 pk3) IN ((?, ?, ?), (?, ?, ?))
 Stacktrace:
 java.lang.ArrayIndexOutOfBoundsException: 1
   at org.apache.phoenix.schema.ValueSchema.getField(ValueSchema.java:300)
   at org.apache.phoenix.util.ScanUtil.setKey(ScanUtil.java:260)
   at 
 org.apache.phoenix.compile.ScanRanges.getPointKeys(ScanRanges.java:185)
   at org.apache.phoenix.compile.ScanRanges.create(ScanRanges.java:61)
   at 
 org.apache.phoenix.compile.WhereOptimizer.pushKeyExpressionsToScan(WhereOptimizer.java:224)
   at 
 org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:105)
   at 
 org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:260)
   at 
 org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:128)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:264)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:1)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:199)
   at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:157)
   at 
 org.apache.phoenix.end2end.RowValueConstructorIT.testInListOfRVC4(RowValueConstructorIT.java:1042)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (PHOENIX-1082) IN List of RVCs doesn't return all the rows when executed against a tenant-specific view for a multi-tenant table that is salted.

2014-07-11 Thread Samarth Jain (JIRA)
Samarth Jain created PHOENIX-1082:
-

 Summary: IN List of RVCs doesn't return all the rows when executed 
against a tenant-specific view for a multi-tenant table that is salted.
 Key: PHOENIX-1082
 URL: https://issues.apache.org/jira/browse/PHOENIX-1082
 Project: Phoenix
  Issue Type: Bug
Reporter: Samarth Jain
Assignee: Samarth Jain


{code}

Table type - Multitenant and salted. Query - IN list of RVCs. Result - All rows 
not returned.

Base table DDL - CREATE TABLE t (tenantId varchar(5) NOT NULL, pk2 varchar(5) 
NOT NULL, pk3 INTEGER NOT NULL, c1 INTEGER constraint pk primary key 
(tenantId,pk2,pk3)) MULTI_TENANT=true, SALT_BUCKETS=4

Tenant View DDL - CREATE VIEW t_view (tenant_col VARCHAR) AS SELECT * FROM t

Upserts:
upsert into t_view (pk2, pk3, c1) values ('helo1', 1, 1)
upsert into t_view (pk2, pk3, c1) values ('helo2', 2, 2)
upsert into t_view (pk2, pk3, c1) values ('helo3', 3, 3)
upsert into t_view (pk2, pk3, c1) values ('helo4', 4, 4)
upsert into t_view (pk2, pk3, c1) values ('helo5', 5, 5)

Query using tenant specific connection - select pk2, pk3 from t_view WHERE 
(pk2, pk3) IN ( ('helo3',  3),  ('helo5',  5) ) ORDER BY pk2

Result - Only one row returned - helo3, 3 

This likely has to do with salting, because after removing SALT_BUCKETS=4 from 
the base table DDL all the expected rows are returned.

{code}
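The salting behavior implicated above can be illustrated with a simplified sketch. This is not Phoenix's actual SaltingUtil (the hash function and names here are my own), but the idea matches: the first byte of every stored row key is derived from a hash of the logical key modulo SALT_BUCKETS, so the rows for one tenant are scattered across buckets and a correct skip scan must fan out point keys over every bucket:

```java
public class SaltSketch {
    // Simplified Phoenix-style salting: prepend one hash-derived byte so
    // writes spread evenly across SALT_BUCKETS region ranges.
    static int saltByte(byte[] rowKey, int saltBuckets) {
        int hash = 0;
        for (byte b : rowKey) {
            hash = 31 * hash + b;          // simple rolling hash over the key
        }
        return Math.abs(hash % saltBuckets);
    }

    static byte[] salt(byte[] rowKey, int saltBuckets) {
        byte[] salted = new byte[rowKey.length + 1];
        salted[0] = (byte) saltByte(rowKey, saltBuckets);
        System.arraycopy(rowKey, 0, salted, 1, rowKey.length);
        return salted;
    }

    public static void main(String[] args) {
        byte[] key = "tenant1helo3".getBytes();
        byte[] salted = salt(key, 4);
        System.out.println("bucket=" + salted[0]);
        // Point keys built for an IN list must cover all 4 buckets;
        // missing some buckets would drop rows, consistent with the
        // missing-rows symptom reported in this issue.
    }
}
```

The design trade-off is that salting removes write hotspots at the cost of turning single-range scans into per-bucket parallel scans, which is why query-side key generation must account for the extra leading byte.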



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (PHOENIX-1077) Exception thrown when executing an IN list of Row Value Constructors against salted tables.

2014-07-11 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14059558#comment-14059558
 ] 

Samarth Jain commented on PHOENIX-1077:
---

Looks great [~kbuzsaki]. Fantastic job!

 Exception thrown when executing an IN list of Row Value Constructors against 
 salted tables.
 ---

 Key: PHOENIX-1077
 URL: https://issues.apache.org/jira/browse/PHOENIX-1077
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0, 4.0.0, 5.0.0
Reporter: Samarth Jain
Assignee: Kyle Buzsaki
 Attachments: PHOENIX-1077.patch


 {code}
 1) Table - Salted. Query - IN list of RVCs. Result - fails with exception. 
 Details:
 Table DDL - CREATE TABLE t (pk1 varchar(5) NOT NULL, pk2 varchar(5) NOT NULL, 
 pk3 INTEGER NOT NULL, c1 INTEGER constraint pk primary key (pk1,pk2,pk3)) 
 SALT_BUCKETS=4
 Query - select pk1, pk2, pk3 from t WHERE (pk1, pk2, pk3) IN ((?, ?, ?), (?, 
 ?, ?))
 Exception:
 java.lang.ArrayIndexOutOfBoundsException: 1
   at org.apache.phoenix.schema.ValueSchema.getField(ValueSchema.java:300)
   at org.apache.phoenix.util.ScanUtil.setKey(ScanUtil.java:260)
   at 
 org.apache.phoenix.compile.ScanRanges.getPointKeys(ScanRanges.java:185)
   at org.apache.phoenix.compile.ScanRanges.create(ScanRanges.java:61)
   at 
 org.apache.phoenix.compile.WhereOptimizer.pushKeyExpressionsToScan(WhereOptimizer.java:224)
   at 
 org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:105)
   at 
 org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:260)
   at 
 org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:128)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:264)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:1)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:199)
   at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:157)
   at 
 org.apache.phoenix.end2end.RowValueConstructorIT.testInListOfRVC5(RowValueConstructorIT.java:1078)
 2) Most likely related to 1)  Table - multi-tenant and salted. Query - IN 
 list of RVCs. Result - Fails with exception.  
 Details:
 Base Table DDL - CREATE TABLE t (tenantId varchar(5) NOT NULL, pk2 varchar(5) 
 NOT NULL, pk3 INTEGER NOT NULL, c1 INTEGER constraint pk primary key 
 (tenantId,pk2,pk3)) SALT_BUCKETS=4, MULTI_TENANT=true
 Tenant View DDL - CREATE VIEW t_view (tenant_col VARCHAR) AS SELECT * FROM t
 Query using global connection : select pk2, pk3 from t WHERE (tenantId, pk2, 
 pk3) IN ((?, ?, ?), (?, ?, ?))
 Stacktrace:
 java.lang.ArrayIndexOutOfBoundsException: 1
   at org.apache.phoenix.schema.ValueSchema.getField(ValueSchema.java:300)
   at org.apache.phoenix.util.ScanUtil.setKey(ScanUtil.java:260)
   at 
 org.apache.phoenix.compile.ScanRanges.getPointKeys(ScanRanges.java:185)
   at org.apache.phoenix.compile.ScanRanges.create(ScanRanges.java:61)
   at 
 org.apache.phoenix.compile.WhereOptimizer.pushKeyExpressionsToScan(WhereOptimizer.java:224)
   at 
 org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:105)
   at 
 org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:260)
   at 
 org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:128)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:264)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:1)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:199)
   at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:157)
   at 
 org.apache.phoenix.end2end.RowValueConstructorIT.testInListOfRVC4(RowValueConstructorIT.java:1042)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (PHOENIX-1083) IN list of RVC combined with AND doesn't return expected rows

2014-07-11 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-1083:
--

Summary: IN list of RVC combined with AND doesn't return expected rows  
(was: IN liar of RVC combined with AND doesn't return expected rows)

 IN list of RVC combined with AND doesn't return expected rows
 -

 Key: PHOENIX-1083
 URL: https://issues.apache.org/jira/browse/PHOENIX-1083
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0, 4.0.0, 5.0.0
Reporter: Samarth Jain
Assignee: James Taylor

 {code}
 CREATE TABLE in_test ( user VARCHAR, tenant_id VARCHAR(5) NOT 
 NULL,tenant_type_id VARCHAR(3) NOT NULL,  id INTEGER NOT NULL CONSTRAINT pk 
 PRIMARY KEY (tenant_id, tenant_type_id, id))
 upsert into in_test (tenant_id, tenant_type_id, id, user) values ('a', 'a', 
 1, 'BonA')
 upsert into in_test (tenant_id, tenant_type_id, id, user) values ('a', 'a', 
 2, 'BonB')
 select id from in_test WHERE tenant_id = 'a' and tenant_type_id = 'a' and 
 ((id, user) IN ((1, 'BonA'),(1, 'BonA')))
 Rows returned - none. Should have returned one row. 
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[GitHub] phoenix pull request: PHOENIX-933 Local index support to Phoenix

2014-07-11 Thread chrajeshbabu
Github user chrajeshbabu commented on a diff in the pull request:

https://github.com/apache/phoenix/pull/1#discussion_r14850777
  
--- Diff: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java
 ---
@@ -366,6 +384,21 @@ private RegionScanner 
scanUnordered(ObserverContextRegionCoprocessorEnvironment
 env, ScanUtil.getTenantId(scan), 
 aggregators, estDistVals);
 
+byte[] localIndexBytes = scan.getAttribute(LOCAL_INDEX_BUILD);
--- End diff --

Moved the changes outside of scanOrdered/scanUnordered and passed the 
necessary info through the method calls.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (PHOENIX-1075) Mathematical order of operations are improperly evaluated.

2014-07-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14059652#comment-14059652
 ] 

Hudson commented on PHOENIX-1075:
-

SUCCESS: Integrated in Phoenix | 3.0 | Hadoop1 #132 (See 
[https://builds.apache.org/job/Phoenix-3.0-hadoop1/132/])
PHOENIX-1075 Mathematical order of operations are improperly evaluated. (Kyle 
Buzsaki) (anoopsamjohn: rev 92aceb3674a7b1511d7660583e51407f08c60117)
* phoenix-core/src/it/java/org/apache/phoenix/end2end/ArithmeticQueryIT.java
* phoenix-core/src/main/antlr3/PhoenixSQL.g


 Mathematical order of operations are improperly evaluated.
 --

 Key: PHOENIX-1075
 URL: https://issues.apache.org/jira/browse/PHOENIX-1075
 Project: Phoenix
  Issue Type: Bug
Reporter: Kyle Buzsaki
Assignee: Kyle Buzsaki
 Fix For: 5.0.0, 3.1, 4.1

 Attachments: PHOENIX-1075.patch, PHOENIX-1075_2.patch


 The root of the issue is that, as things are now, multiplication and division 
 don't actually have the same precedence in the grammar. Division is always 
 grouped more tightly than multiplication and is evaluated first. Most of the 
 time, this doesn't matter, but combined with the truncating integer division 
 used by LongDivideExpression it produces some unexpected and probably wrong 
 behavior. Below is an example:
 Expression: 6 * 4 / 3
 Evaluating left to right, this should reduce as follows:
 6 * 4 / 3 
 24 / 3
 8
 As Phoenix is now, division has a higher precedence than multiplication. 
 Therefore, the resulting expression tree looks like this:
 !http://i.imgur.com/2Zzsfpy.png!
 Because integer division is truncating, when the division evaluates, the 
 expression tree looks like this:
 !http://i.imgur.com/3cLGD0e.png!
 Which then evaluates to 6.
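
 The truncation effect described above is easy to reproduce in plain Java (a 
 standalone sketch, not Phoenix code). Forcing each grouping with parentheses 
 shows the two possible results side by side:

 {code}
 public class PrecedenceDemo {
     public static void main(String[] args) {
         // Left-to-right grouping, as standard precedence rules require:
         int leftToRight = (6 * 4) / 3;   // 24 / 3 = 8
         // Grouping the division first, as the old Phoenix grammar did;
         // integer division truncates 4 / 3 down to 1:
         int divisionFirst = 6 * (4 / 3); // 6 * 1 = 6
         System.out.println(leftToRight + " " + divisionFirst); // prints "8 6"
     }
 }
 {code}

 The fix in PhoenixSQL.g puts multiplication and division at the same grammar 
 level so that both groupings reduce to the first, left-to-right form.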





[jira] [Commented] (PHOENIX-1077) Exception thrown when executing an IN list of Row Value Constructors against salted tables.

2014-07-11 Thread Kyle Buzsaki (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14059653#comment-14059653
 ] 

Kyle Buzsaki commented on PHOENIX-1077:
---

How does that testcase fail? The patch I provided fixes the 
ArrayIndexOutOfBoundsException error. It does not fix the issue of some rows 
not being returned, which appears to be a different bug. Samarth filed separate 
JIRAs at PHOENIX-1082 and PHOENIX-1083 for the testcases that were still 
failing after this change.

 Exception thrown when executing an IN list of Row Value Constructors against 
 salted tables.
 ---

 Key: PHOENIX-1077
 URL: https://issues.apache.org/jira/browse/PHOENIX-1077
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 3.0.0, 4.0.0, 5.0.0
Reporter: Samarth Jain
Assignee: Kyle Buzsaki
 Attachments: PHOENIX-1077.patch


 {code}
 1) Table - Salted. Query - IN list of RVCs. Result - fails with exception. 
 Details:
 Table DDL - CREATE TABLE t (pk1 varchar(5) NOT NULL, pk2 varchar(5) NOT NULL, 
 pk3 INTEGER NOT NULL, c1 INTEGER constraint pk primary key (pk1,pk2,pk3)) 
 SALT_BUCKETS=4
 Query - select pk1, pk2, pk3 from t WHERE (pk1, pk2, pk3) IN ((?, ?, ?), (?, 
 ?, ?))
 Exception:
 java.lang.ArrayIndexOutOfBoundsException: 1
   at org.apache.phoenix.schema.ValueSchema.getField(ValueSchema.java:300)
   at org.apache.phoenix.util.ScanUtil.setKey(ScanUtil.java:260)
   at 
 org.apache.phoenix.compile.ScanRanges.getPointKeys(ScanRanges.java:185)
   at org.apache.phoenix.compile.ScanRanges.create(ScanRanges.java:61)
   at 
 org.apache.phoenix.compile.WhereOptimizer.pushKeyExpressionsToScan(WhereOptimizer.java:224)
   at 
 org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:105)
   at 
 org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:260)
   at 
 org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:128)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:264)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:1)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:199)
   at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:157)
   at 
 org.apache.phoenix.end2end.RowValueConstructorIT.testInListOfRVC5(RowValueConstructorIT.java:1078)
 2) Most likely related to 1)  Table - multi-tenant and salted. Query - IN 
 list of RVCs. Result - Fails with exception.  
 Details:
 Base Table DDL - CREATE TABLE t (tenantId varchar(5) NOT NULL, pk2 varchar(5) 
 NOT NULL, pk3 INTEGER NOT NULL, c1 INTEGER constraint pk primary key 
 (tenantId,pk2,pk3)) SALT_BUCKETS=4, MULTI_TENANT=true
 Tenant View DDL - CREATE VIEW t_view (tenant_col VARCHAR) AS SELECT * FROM t
 Query using global connection : select pk2, pk3 from t WHERE (tenantId, pk2, 
 pk3) IN ((?, ?, ?), (?, ?, ?))
 Stacktrace:
 java.lang.ArrayIndexOutOfBoundsException: 1
   at org.apache.phoenix.schema.ValueSchema.getField(ValueSchema.java:300)
   at org.apache.phoenix.util.ScanUtil.setKey(ScanUtil.java:260)
   at 
 org.apache.phoenix.compile.ScanRanges.getPointKeys(ScanRanges.java:185)
   at org.apache.phoenix.compile.ScanRanges.create(ScanRanges.java:61)
   at 
 org.apache.phoenix.compile.WhereOptimizer.pushKeyExpressionsToScan(WhereOptimizer.java:224)
   at 
 org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:105)
   at 
 org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:260)
   at 
 org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:128)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:264)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:1)
   at 
 org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:199)
   at 
 org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeQuery(PhoenixPreparedStatement.java:157)
   at 
 org.apache.phoenix.end2end.RowValueConstructorIT.testInListOfRVC4(RowValueConstructorIT.java:1042)
 {code}
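
 For readers unfamiliar with salting: SALT_BUCKETS=4 in the DDLs above makes 
 Phoenix prepend a bucket byte, derived from a hash of the row key, as an 
 extra leading key field, which is likely why only salted tables trip the 
 getField call in ScanUtil.setKey. The sketch below illustrates the idea only; 
 the toy hash is an assumption for illustration, not Phoenix's actual 
 SaltingUtil implementation:

 {code}
 public class SaltSketch {
     // Simplified stand-in for Phoenix's salting logic: the real code uses
     // its own hash, but the shape is the same, namely
     // bucket = hash(rowKey) % SALT_BUCKETS, prepended as one extra key byte.
     static byte saltByte(byte[] rowKey, int saltBuckets) {
         int hash = 0;
         for (byte b : rowKey) {
             hash = 31 * hash + b; // toy hash, for illustration only
         }
         return (byte) ((hash & Integer.MAX_VALUE) % saltBuckets);
     }

     public static void main(String[] args) {
         // Hypothetical row key; the value is illustrative, not from the report.
         byte[] rowKey = "a\u0000b\u00001".getBytes();
         byte bucket = saltByte(rowKey, 4);
         // The bucket byte always falls in [0, SALT_BUCKETS):
         System.out.println(bucket >= 0 && bucket < 4); // prints "true"
     }
 }
 {code}

 Point-key construction for an RVC IN list must account for this extra leading 
 field, which is where the out-of-bounds field index arises.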


