[jira] [Commented] (PHOENIX-4602) OrExpression should also push non-leading pk columns to scan

2018-02-13 Thread John Leach (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362510#comment-16362510
 ] 

John Leach commented on PHOENIX-4602:
-

[~comnetwork] Thank you for the pointer in the code!

> OrExpression should also push non-leading pk columns to scan
> 
>
> Key: PHOENIX-4602
> URL: https://issues.apache.org/jira/browse/PHOENIX-4602
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.13.0
>Reporter: chenglei
>Priority: Major
> Attachments: PHOENIX-4602_v1.patch
>
>
> Given following table:
> {code}
>     CREATE TABLE test_table (
>      PK1 INTEGER NOT NULL,
>      PK2 INTEGER NOT NULL,
>      PK3 INTEGER NOT NULL,
>      DATA INTEGER, 
>      CONSTRAINT TEST_PK PRIMARY KEY (PK1,PK2,PK3))
> {code}
> and a sql:
> {code}
>   select * from test_table t where (t.pk1 >=2 and t.pk1<5) and ((t.pk2 >= 4 
> and t.pk2 <6) or (t.pk2 >= 8 and t.pk2 <9))
> {code}
> Obviously, this is a typical case where the SQL should use a SkipScanFilter. However, 
> the SQL actually does not use a Skip Scan; it uses a Range Scan and pushes only the 
> leading pk column expression {{(t.pk1 >=2 and t.pk1<5)}} to the scan. The explain 
> plan is:
>  {code:sql}
>    CLIENT PARALLEL 1-WAY RANGE SCAN OVER TEST_TABLE [2] - [5]
>    SERVER FILTER BY ((PK2 >= 4 AND PK2 < 6) OR (PK2 >= 8 AND PK2 < 9))
> {code}
> I think the problem is caused by the 
> {{WhereOptimizer.KeyExpressionVisitor.orKeySlots}} method, at the following 
> line 763: because the pk2 column is not the leading pk column, this method 
> returns null, so the expression 
> {{((t.pk2 >= 4 and t.pk2 <6) or (t.pk2 >= 8 and t.pk2 <9))}} is not pushed 
> to the scan:
> {code:java}
> 757    boolean hasFirstSlot = true;
> 758    boolean prevIsNull = false;
> 759    // TODO: Do the same optimization that we do for IN if the childSlots 
> specify a fully qualified row key
> 760   for (KeySlot slot : childSlot) {
> 761      if (hasFirstSlot) {
> 762           // if the first slot is null, return null immediately
> 763           if (slot == null) {
> 764                return null;
> 765            }
> 766           // mark that we've handled the first slot
> 767           hasFirstSlot = false;
> 768      }
> {code}
> For the above {{WhereOptimizer.KeyExpressionVisitor.orKeySlots}} method, it seems 
> unnecessary to require that the PK column in the OrExpression be the 
> leading PK column; it should be enough to guarantee that there is only one PK 
> column in the OrExpression.
>  
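Below is a minimal, self-contained Java sketch of the relaxation suggested above. The SlotInfo type and the orKeySlots signature here are simplified, hypothetical stand-ins, not Phoenix's actual WhereOptimizer.KeyExpressionVisitor API; the point is only that an OR can be pushed to the scan key whenever all of its children constrain the same single PK position, whether or not that position is the leading one.

{code:java}
// Hypothetical, simplified model of the idea in this issue: an OR over key
// slots can be pushed to the scan as long as all of its children constrain
// the SAME primary-key position, even if that position is not the leading one.
import java.util.ArrayList;
import java.util.List;

public class OrSlotCheck {

    /** Simplified stand-in for a key slot: which PK position it constrains and its ranges. */
    static final class SlotInfo {
        final int pkPosition;
        final List<int[]> ranges; // [lower, upper) pairs, illustrative only
        SlotInfo(int pkPosition, List<int[]> ranges) {
            this.pkPosition = pkPosition;
            this.ranges = ranges;
        }
    }

    /**
     * Returns a combined slot if every child constrains the same PK position,
     * otherwise null (meaning the OR cannot be pushed to the scan key).
     */
    static SlotInfo orKeySlots(List<SlotInfo> childSlots) {
        if (childSlots.isEmpty()) {
            return null;
        }
        int pkPosition = -1;
        List<int[]> combined = new ArrayList<>();
        for (SlotInfo slot : childSlots) {
            if (slot == null) {
                return null; // a child that isn't a key expression blocks pushdown
            }
            if (pkPosition == -1) {
                pkPosition = slot.pkPosition; // note: no requirement that this be position 0
            } else if (pkPosition != slot.pkPosition) {
                return null; // OR spans more than one PK column: keep it as a filter
            }
            combined.addAll(slot.ranges);
        }
        return new SlotInfo(pkPosition, combined);
    }

    public static void main(String[] args) {
        // (pk2 >= 4 AND pk2 < 6) OR (pk2 >= 8 AND pk2 < 9): both disjuncts hit PK position 1
        SlotInfo left = new SlotInfo(1, List.of(new int[]{4, 6}));
        SlotInfo right = new SlotInfo(1, List.of(new int[]{8, 9}));
        SlotInfo merged = orKeySlots(List.of(left, right));
        System.out.println(merged != null
                ? "push to scan at PK position " + merged.pkPosition + " with " + merged.ranges.size() + " ranges"
                : "leave as server filter");
    }
}
{code}

With the example query above, both disjuncts constrain PK2, so the ranges [4,6) and [8,9) could be combined into a skip scan instead of being evaluated as a server-side filter.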



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4602) OrExpression should also push non-leading pk columns to scan

2018-02-13 Thread John Leach (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16362414#comment-16362414
 ] 

John Leach commented on PHOENIX-4602:
-

[~comnetwork] I am new to Phoenix, but when I look at WhereOptimizer.java 
it is not clear to me how or when the predicates are moved to Conjunctive 
Normal Form ([https://en.wikipedia.org/wiki/Conjunctive_normal_form]). I have 
always seen the following process when dealing with predicates:

1. Move all predicates to Conjunctive Normal Form.

2. Mark the predicates on the key.

3. Apply a function on the key predicates to assemble a set of scans.

4. Apply a function on the remaining predicates to assemble a filter (usually an 
IN list is an exception case).

Do you know where CNF occurs?
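For context only, the following is a minimal Java sketch of the four-step process described above. The Predicate type and the classification logic are hypothetical illustrations and do not correspond to Phoenix's WhereOptimizer classes.

{code:java}
// Generic illustration of the four-step process above (CNF, mark key
// predicates, build scans, build a residual filter). The Predicate type is
// hypothetical and is not Phoenix's WhereOptimizer API.
import java.util.ArrayList;
import java.util.List;

public class CnfPushdownSketch {

    static final class Predicate {
        final String column, op;
        final int value;
        Predicate(String column, String op, int value) {
            this.column = column; this.op = op; this.value = value;
        }
        @Override public String toString() { return column + " " + op + " " + value; }
    }

    public static void main(String[] args) {
        List<String> pkColumns = List.of("PK1", "PK2", "PK3");

        // 1. Predicates, assumed already rewritten into conjunctive normal form.
        List<Predicate> cnf = List.of(
                new Predicate("PK1", ">=", 2),
                new Predicate("PK1", "<", 5),
                new Predicate("DATA", ">", 0));

        // 2. Mark the predicates that reference primary-key columns.
        List<Predicate> keyPreds = new ArrayList<>();
        List<Predicate> residual = new ArrayList<>();
        for (Predicate p : cnf) {
            (pkColumns.contains(p.column) ? keyPreds : residual).add(p);
        }

        // 3. Key predicates would be assembled into scan start/stop keys or a skip scan.
        System.out.println("scan key from: " + keyPreds);

        // 4. Everything else would be assembled into a server-side filter.
        System.out.println("server filter from: " + residual);
    }
}
{code}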


--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-3321) TPCH 100G: Query 21 Missing Equi-Join Support

2016-09-22 Thread John Leach (JIRA)
John Leach created PHOENIX-3321:
---

 Summary: TPCH 100G: Query 21 Missing Equi-Join Support
 Key: PHOENIX-3321
 URL: https://issues.apache.org/jira/browse/PHOENIX-3321
 Project: Phoenix
  Issue Type: Bug
Reporter: John Leach


{NOFORMAT}

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/opt/phoenix/apache-phoenix-4.8.0-HBase-1.1-bin/phoenix-4.8.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
16/09/13 20:55:54 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
16/09/13 20:55:54 WARN shortcircuit.DomainSocketFactory: The short-circuit 
local reads feature cannot be used because libhadoop cannot be loaded.
1/1  SELECT 
S_NAME, 
COUNT(*) AS NUMWAIT 
FROM 
TPCH.SUPPLIER, 
TPCH.LINEITEM L1, 
TPCH.ORDERS, 
TPCH.NATION 
WHERE 
S_SUPPKEY = L1.L_SUPPKEY 
AND O_ORDERKEY = L1.L_ORDERKEY 
AND O_ORDERSTATUS = 'F' 
AND L1.L_RECEIPTDATE > L1.L_COMMITDATE 
AND EXISTS( 
SELECT * 
FROM 
TPCH.LINEITEM L2 
WHERE 
L2.L_ORDERKEY = L1.L_ORDERKEY 
AND L2.L_SUPPKEY <> L1.L_SUPPKEY 
) 
AND NOT EXISTS( 
SELECT * 
FROM 
TPCH.LINEITEM L3 
WHERE 
L3.L_ORDERKEY = L1.L_ORDERKEY 
AND L3.L_SUPPKEY <> L1.L_SUPPKEY 
AND L3.L_RECEIPTDATE > L3.L_COMMITDATE 
) 
AND S_NATIONKEY = N_NATIONKEY 
AND N_NAME = 'SAUDI ARABIA' 
GROUP BY 
S_NAME 
ORDER BY 
NUMWAIT DESC, 
S_NAME 
LIMIT 100 
;
Error: Does not support non-standard or non-equi correlated-subquery 
conditions. (state=,code=0)
java.sql.SQLFeatureNotSupportedException: Does not support non-standard or 
non-equi correlated-subquery conditions.
at 
org.apache.phoenix.compile.SubqueryRewriter$JoinConditionExtractor.leaveBooleanNode(SubqueryRewriter.java:485)
at 
org.apache.phoenix.compile.SubqueryRewriter$JoinConditionExtractor.visitLeave(SubqueryRewriter.java:505)
at 
org.apache.phoenix.compile.SubqueryRewriter$JoinConditionExtractor.visitLeave(SubqueryRewriter.java:411)
at 
org.apache.phoenix.parse.ComparisonParseNode.accept(ComparisonParseNode.java:47)
at 
org.apache.phoenix.parse.CompoundParseNode.acceptChildren(CompoundParseNode.java:64)
at org.apache.phoenix.parse.AndParseNode.accept(AndParseNode.java:47)
at 
org.apache.phoenix.compile.SubqueryRewriter.visitLeave(SubqueryRewriter.java:168)
at 
org.apache.phoenix.compile.SubqueryRewriter.visitLeave(SubqueryRewriter.java:70)
at 
org.apache.phoenix.parse.ExistsParseNode.accept(ExistsParseNode.java:53)
at 
org.apache.phoenix.parse.CompoundParseNode.acceptChildren(CompoundParseNode.java:64)
at org.apache.phoenix.parse.AndParseNode.accept(AndParseNode.java:47)
at 
org.apache.phoenix.parse.ParseNodeRewriter.rewrite(ParseNodeRewriter.java:48)
at 
org.apache.phoenix.compile.SubqueryRewriter.transform(SubqueryRewriter.java:84)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:399)
at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:378)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:271)
at 
org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:266)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:265)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1444)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:807)
at sqlline.SqlLine.runCommands(SqlLine.java:1710)
at sqlline.Commands.run(Commands.java:1285)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
at sqlline.SqlLine.dispatch(SqlLine.java:803)
at sqlline.SqlLine.initArgs(SqlLine.java:613)
at sqlline.SqlLine.begin(SqlLine.java:656)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
Aborting command set because "force" is false and command failed: "SELECT 
S_NAME, 
COUNT(*) AS NUMWAIT 
FROM 
TPCH.SUPPLIER, 
TPCH.LINEITEM L1, 
TPCH.ORDERS, 
TPCH.NATION 
WHERE 
S_SUPPKEY = L1.L_SU

[jira] [Created] (PHOENIX-3320) TPCH 100G: Query 20 Execution Exception

2016-09-22 Thread John Leach (JIRA)
John Leach created PHOENIX-3320:
---

 Summary: TPCH 100G: Query 20 Execution Exception
 Key: PHOENIX-3320
 URL: https://issues.apache.org/jira/browse/PHOENIX-3320
 Project: Phoenix
  Issue Type: Bug
Reporter: John Leach


{NOFORMAT}
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/opt/phoenix/apache-phoenix-4.8.0-HBase-1.1-bin/phoenix-4.8.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
16/09/13 20:52:21 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
16/09/13 20:52:22 WARN shortcircuit.DomainSocketFactory: The short-circuit 
local reads feature cannot be used because libhadoop cannot be loaded.
1/1  SELECT 
S_NAME, 
S_ADDRESS 
FROM 
TPCH.SUPPLIER, 
TPCH.NATION 
WHERE 
S_SUPPKEY IN ( 
SELECT PS_SUPPKEY 
FROM 
TPCH.PARTSUPP 
WHERE 
PS_PARTKEY IN ( 
SELECT P_PARTKEY 
FROM 
TPCH.PART 
WHERE 
P_NAME LIKE 'FOREST%' 
) 
AND PS_AVAILQTY > ( 
SELECT 0.5 * SUM(L_QUANTITY) 
FROM 
TPCH.LINEITEM 
WHERE 
L_PARTKEY = PS_PARTKEY 
AND L_SUPPKEY = PS_SUPPKEY 
AND L_SHIPDATE >= TO_DATE('1994-01-01') 
AND L_SHIPDATE < TO_DATE('1995-01-01') 
) 
) 
AND S_NATIONKEY = N_NATIONKEY 
AND N_NAME = 'CANADA' 
ORDER BY 
S_NAME 
;
16/09/13 20:55:51 WARN client.ScannerCallable: Ignore, probably already closed
org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to 
stl-colo-srv139.splicemachine.colo/10.1.1.239:16020 failed on local exception: 
org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to 
stl-colo-srv139.splicemachine.colo/10.1.1.239:16020 is closing. Call id=411932, 
waitTime=149848
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.wrapException(AbstractRpcClient.java:275)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1239)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:217)
at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:318)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:32831)
at 
org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:356)
at 
org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:196)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:144)
at 
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:59)
at 
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
at 
org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
at 
org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:258)
at 
org.apache.hadoop.hbase.client.ClientScanner.possiblyNextScanner(ClientScanner.java:241)
at 
org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:534)
at 
org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
at 
org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:55)
at 
org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:126)
at 
org.apache.phoenix.iterate.OrderedResultIterator.getResultIterator(OrderedResultIterator.java:254)
at 
org.apache.phoenix.iterate.OrderedResultIterator.peek(OrderedResultIterator.java:277)
at 
org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:121)
at 
org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:106)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: 
Connection to stl-colo-srv139.splicemachine.colo/10.1.1.239:16020 is closing. 
Call id=411932, waitTime=149848
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1057)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:856)
at 
org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:575)
16/09/13 20:55:51 W

[jira] [Created] (PHOENIX-3319) TPCH 100G: Query 15 with view cannot parse

2016-09-22 Thread John Leach (JIRA)
John Leach created PHOENIX-3319:
---

 Summary: TPCH 100G: Query 15 with view cannot parse
 Key: PHOENIX-3319
 URL: https://issues.apache.org/jira/browse/PHOENIX-3319
 Project: Phoenix
  Issue Type: Bug
Reporter: John Leach


{NOFORMAT}
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/opt/phoenix/apache-phoenix-4.8.0-HBase-1.1-bin/phoenix-4.8.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
16/09/13 20:45:42 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
16/09/13 20:45:43 WARN shortcircuit.DomainSocketFactory: The short-circuit 
local reads feature cannot be used because libhadoop cannot be loaded.
1/5  CREATE VIEW REVENUE0 (SUPPLIER_NO INTEGER, TOTAL_REVENUE 
DECIMAL(15,2)) AS 
SELECT 
L_SUPPKEY, 
SUM(L_EXTENDEDPRICE * (1 - L_DISCOUNT)) 
FROM 
TPCH.LINEITEM 
WHERE 
L_SHIPDATE >= TO_DATE('1996-01-01') 
AND L_SHIPDATE < TO_DATE('1996-04-01') 
GROUP BY 
L_SUPPKEY;
Error: ERROR 604 (42P00): Syntax error. Mismatched input. Expecting "ASTERISK", 
got "L_SUPPKEY" at line 3, column 1. (state=42P00,code=604)
org.apache.phoenix.exception.PhoenixParserException: ERROR 604 (42P00): Syntax 
error. Mismatched input. Expecting "ASTERISK", got "L_SUPPKEY" at line 3, 
column 1.
at 
org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
at 
org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1280)
at 
org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1363)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1434)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:807)
at sqlline.SqlLine.runCommands(SqlLine.java:1710)
at sqlline.Commands.run(Commands.java:1285)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
at sqlline.SqlLine.dispatch(SqlLine.java:803)
at sqlline.SqlLine.initArgs(SqlLine.java:613)
at sqlline.SqlLine.begin(SqlLine.java:656)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: MismatchedTokenException(99!=13)
at 
org.apache.phoenix.parse.PhoenixSQLParser.recoverFromMismatchedToken(PhoenixSQLParser.java:360)
at 
org.apache.phoenix.shaded.org.antlr.runtime.BaseRecognizer.match(BaseRecognizer.java:115)
at 
org.apache.phoenix.parse.PhoenixSQLParser.create_view_node(PhoenixSQLParser.java:1336)
at 
org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(PhoenixSQLParser.java:834)
at 
org.apache.phoenix.parse.PhoenixSQLParser.statement(PhoenixSQLParser.java:508)
at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:108)
... 18 more
Aborting command set because "force" is false and command failed: "CREATE VIEW 
REVENUE0 (SUPPLIER_NO INTEGER, TOTAL_REVENUE DECIMAL(15,2)) AS 
SELECT 
L_SUPPKEY, 
SUM(L_EXTENDEDPRICE * (1 - L_DISCOUNT)) 
FROM 
TPCH.LINEITEM 
WHERE 
L_SHIPDATE >= TO_DATE('1996-01-01') 
AND L_SHIPDATE < TO_DATE('1996-04-01') 
GROUP BY 
L_SUPPKEY;"
Closing: org.apache.phoenix.jdbc.PhoenixConnection

{NOFORMAT}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-3318) TPCH 100G: Query 13 Cannot Parse

2016-09-22 Thread John Leach (JIRA)
John Leach created PHOENIX-3318:
---

 Summary: TPCH 100G: Query 13 Cannot Parse
 Key: PHOENIX-3318
 URL: https://issues.apache.org/jira/browse/PHOENIX-3318
 Project: Phoenix
  Issue Type: Bug
Reporter: John Leach


{NOFORMAT}
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/opt/phoenix/apache-phoenix-4.8.0-HBase-1.1-bin/phoenix-4.8.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
16/09/13 20:45:31 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
16/09/13 20:45:31 WARN shortcircuit.DomainSocketFactory: The short-circuit 
local reads feature cannot be used because libhadoop cannot be loaded.
1/1  SELECT 
C_COUNT, 
COUNT(*) AS CUSTDIST 
FROM 
( 
SELECT 
C_CUSTKEY, 
COUNT(O_ORDERKEY) 
FROM 
TPCH.CUSTOMER 
LEFT OUTER JOIN TPCH.ORDERS ON 
C_CUSTKEY = O_CUSTKEY 
AND O_COMMENT NOT LIKE '%SPECIAL%REQUESTS%' 
GROUP BY 
C_CUSTKEY 
) AS C_ORDERS 
(C_CUSTKEY, C_COUNT) 
GROUP BY 
C_COUNT 
ORDER BY 
CUSTDIST DESC, 
C_COUNT DESC 
;
Error: ERROR 602 (42P00): Syntax error. Missing "EOF" at line 17, column 1. 
(state=42P00,code=602)
org.apache.phoenix.exception.PhoenixParserException: ERROR 602 (42P00): Syntax 
error. Missing "EOF" at line 17, column 1.
at 
org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
at 
org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1280)
at 
org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1363)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1434)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:807)
at sqlline.SqlLine.runCommands(SqlLine.java:1710)
at sqlline.Commands.run(Commands.java:1285)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
at sqlline.SqlLine.dispatch(SqlLine.java:803)
at sqlline.SqlLine.initArgs(SqlLine.java:613)
at sqlline.SqlLine.begin(SqlLine.java:656)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: MissingTokenException(inserted [@-1,0:0='',<-1>,17:0] 
at ()
at 
org.apache.phoenix.parse.PhoenixSQLParser.recoverFromMismatchedToken(PhoenixSQLParser.java:358)
at 
org.apache.phoenix.shaded.org.antlr.runtime.BaseRecognizer.match(BaseRecognizer.java:115)
at 
org.apache.phoenix.parse.PhoenixSQLParser.statement(PhoenixSQLParser.java:518)
at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:108)
... 18 more
Aborting command set because "force" is false and command failed: "SELECT 
C_COUNT, 
COUNT(*) AS CUSTDIST 
FROM 
( 
SELECT 
C_CUSTKEY, 
COUNT(O_ORDERKEY) 
FROM 
TPCH.CUSTOMER 
LEFT OUTER JOIN TPCH.ORDERS ON 
C_CUSTKEY = O_CUSTKEY 
AND O_COMMENT NOT LIKE '%SPECIAL%REQUESTS%' 
GROUP BY 
C_CUSTKEY 
) AS C_ORDERS 
(C_CUSTKEY, C_COUNT) 
GROUP BY 
C_COUNT 
ORDER BY 
CUSTDIST DESC, 
C_COUNT DESC 
;"
Closing: org.apache.phoenix.jdbc.PhoenixConnection
{NOFORMAT}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-3317) TPCH 100G: Query 11 Could Not Find Hash Cache For Join ID

2016-09-22 Thread John Leach (JIRA)
John Leach created PHOENIX-3317:
---

 Summary: TPCH 100G: Query 11 Could Not Find Hash Cache For Join ID
 Key: PHOENIX-3317
 URL: https://issues.apache.org/jira/browse/PHOENIX-3317
 Project: Phoenix
  Issue Type: Bug
Reporter: John Leach


{NOFORMAT}
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/opt/phoenix/apache-phoenix-4.8.0-HBase-1.1-bin/phoenix-4.8.0-HBase-1.1-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
16/09/14 04:54:31 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
16/09/14 04:54:32 WARN shortcircuit.DomainSocketFactory: The short-circuit 
local reads feature cannot be used because libhadoop cannot be loaded.
1/1  SELECT 
PS_PARTKEY, 
SUM(PS_SUPPLYCOST * PS_AVAILQTY) AS VAL 
FROM 
TPCH.PARTSUPP, 
TPCH.SUPPLIER, 
TPCH.NATION 
WHERE 
PS_SUPPKEY = S_SUPPKEY 
AND S_NATIONKEY = N_NATIONKEY 
AND N_NAME = 'GERMANY' 
GROUP BY 
PS_PARTKEY 
HAVING 
SUM(PS_SUPPLYCOST * PS_AVAILQTY) > ( 
SELECT SUM(PS_SUPPLYCOST * PS_AVAILQTY) * 0.01 
FROM 
TPCH.PARTSUPP, 
TPCH.SUPPLIER, 
TPCH.NATION 
WHERE 
PS_SUPPKEY = S_SUPPKEY 
AND S_NATIONKEY = N_NATIONKEY 
AND N_NAME = 'GERMANY' 
) 
ORDER BY 
VAL DESC 
;
16/09/14 04:55:05 WARN execute.HashJoinPlan: Hash plan [0] execution seems too 
slow. Earlier hash cache(s) might have expired on servers.
Error: org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
joinId: �(��˻�. The cache might have expired and have been removed.
at 
org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:99)
at 
org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.doPostScannerOpen(GroupedAggregateRegionObserver.java:148)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:215)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1318)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1748)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1712)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1313)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2261)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
at java.lang.Thread.run(Thread.java:745) (state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: 
org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
joinId: �(��˻�. The cache might have expired and have been removed.
at 
org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:99)
at 
org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver.doPostScannerOpen(GroupedAggregateRegionObserver.java:148)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:215)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1318)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1748)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1712)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1313)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2261)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
at org.