[jira] [Resolved] (PHOENIX-1910) Sort out maven assembly dependencies

2015-11-06 Thread Jesse Yates (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesse Yates resolved PHOENIX-1910.
--
Resolution: Won't Fix

> Sort out maven assembly dependencies
> 
>
> Key: PHOENIX-1910
> URL: https://issues.apache.org/jira/browse/PHOENIX-1910
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Cody Marcel
>Assignee: Jesse Yates
>Priority: Minor
>
> It's unclear how to correctly add a dependency for the maven assembly. Moving 
> the module last is a temporary workaround, but we should figure out a more 
> explicit way.
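
One explicit alternative: Maven's reactor already schedules a module after any 
module it declares a dependency on, so the assembly module can declare its 
inputs as ordinary dependencies instead of relying on module order. A minimal 
sketch of the assembly module's pom.xml; phoenix-core stands in here for 
whichever modules the assembly actually packages:

{code:xml}
<!-- Sketch: declaring the packaged artifacts as dependencies makes the
     reactor build them before the assembly module, without manual ordering. -->
<dependencies>
  <dependency>
    <groupId>org.apache.phoenix</groupId>
    <artifactId>phoenix-core</artifactId>
    <version>${project.version}</version>
  </dependency>
</dependencies>
{code}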



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2381) Inner Join with any table or view with Multi_Tenant=true causes "could not find hash cache for joinId" error

2015-11-06 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14994666#comment-14994666
 ] 

James Taylor commented on PHOENIX-2381:
---

bq. I don't know if Phoenix is pooling its connections to HBase or how to check 
that.
Phoenix shares the same HConnection (the underlying connection to HBase) across 
all connections on the same Driver instance (i.e. across the JVM) for the same 
"user" in the connection URL - think of "user" as a "use case" or QoS 
designation.
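
A hedged sketch of what that sharing means in practice (hypothetical quorum 
host): two connections opened from the same URL are cheap to create, because 
the heavyweight HBase state lives in the one shared HConnection.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class SharedHConnectionSketch {
    public static void main(String[] args) throws SQLException {
        // Hypothetical quorum host. Both connections come from the same
        // Driver and the same URL, so they share one underlying HConnection.
        Connection c1 = DriverManager.getConnection("jdbc:phoenix:zkhost");
        Connection c2 = DriverManager.getConnection("jdbc:phoenix:zkhost");
        // Closing a PhoenixConnection is cheap; the shared HConnection
        // stays up for the rest of the JVM's connections.
        c1.close();
        c2.close();
    }
}
{code}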

> Inner Join with any table or view with Multi_Tenant=true causes "could not 
> find hash cache for joinId" error
> 
>
> Key: PHOENIX-2381
> URL: https://issues.apache.org/jira/browse/PHOENIX-2381
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
> Environment: This is with Phoenix version 4.6.0 and HBase version 
> 0.98.4.2.2.6.0-2800-hadoop2.
>Reporter: Don Brinn
>Assignee: Maryann Xue
>  Labels: join, joins, multi-tenant
> Attachments: tmp-2381.patch
>
>
> I am seeing the following error when doing an INNER JOIN of a view with 
> MULTI_TENANT=true with any other table or view:
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: Ys�0��%�. The cache might have expired and have been removed.
> at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:95)
> at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:212)
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:178)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1931)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29994)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
> at java.lang.Thread.run(Thread.java:745)
>  
> at sqlline.IncrementalRows.hasNext(IncrementalRows.java:73)
> at sqlline.TableOutputFormat.print(TableOutputFormat.java:33)
> at sqlline.SqlLine.print(SqlLine.java:1653)
> at sqlline.Commands.execute(Commands.java:833)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:808)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
>  
> This is with Phoenix version 4.6.0 and HBase version 
> 0.98.4.2.2.6.0-2800-hadoop2.
>  
> This seems very strongly related to the MULTI_TENANT=true property on a view 
> or table.  I see the error whenever the view has MULTI_TENANT=true and I have 
> a tenant-specific connection to Phoenix.  I do not see the problem if the 
> MULTI_TENANT=true property is not set on the view or if I do not have a 
> tenant-specific connection to Phoenix.
>  
> Here is an example SQL statement that has this error when the view INVENTORY 
> has the MULTI_TENANT=true property and I have a tenant-specific connection, 
> but that succeeds in other cases. (The view PRODUCT_IDS is not Multi-Tenant.)
> SELECT * FROM INVENTORY INNER JOIN PRODUCT_IDS ON (PRODUCT_ID = INVENTORY.ID)
>  
> Note:  "INNER JOIN" fails under these conditions, as does "LEFT OUTER JOIN".  
> However, "RIGHT OUTER JOIN" and "FULL OUTER JOIN" do work.  Also, if I tell 
> Phoenix to use a Sort Join for the Inner Join or Left Outer Join then it does 
> work, e.g. SELECT /*+ USE_SORT_MERGE_JOIN*/ * FROM INVENTORY INNER JOIN 
> PRODUCT_IDS ON (PRODUCT_ID = INVENTORY.ID); works.
>  
> This seems to be the same problem that was discussed previously in this 
> mailing list:  
> https://mail-archives.apache.org/mod_mbox/phoenix-user/201507.mbox/%3ccaotkwx5xfbwkjf--0k-zj91tfdqwfq6rmuqw0r_lojcnj1a...@mail.gmail.com%3E
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2381) Inner Join with any table or view with Multi_Tenant=true causes "could not find hash cache for joinId" error

2015-11-06 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14994650#comment-14994650
 ] 

James Taylor commented on PHOENIX-2381:
---

Agree - if it happens in sqlline, then it's not related to pooling. Pooling 
connections per-tenant might be ok, but there'd be corner cases if you're 
specifying any other connection properties. I've filed PHOENIX-2388 for this, 
but it appears to be orthogonal to this issue. Would be a good, self-contained 
contribution IMHO.

> Inner Join with any table or view with Multi_Tenant=true causes "could not 
> find hash cache for joinId" error
> 
>
> Key: PHOENIX-2381
> URL: https://issues.apache.org/jira/browse/PHOENIX-2381
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
> Environment: This is with Phoenix version 4.6.0 and HBase version 
> 0.98.4.2.2.6.0-2800-hadoop2.
>Reporter: Don Brinn
>Assignee: Maryann Xue
>  Labels: join, joins, multi-tenant
> Attachments: tmp-2381.patch
>
>
> I am seeing the following error when doing an INNER JOIN of a view with 
> MULTI_TENANT=true with any other table or view:
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: Ys�0��%�. The cache might have expired and have been removed.
> at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:95)
> at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:212)
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:178)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1931)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29994)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
> at java.lang.Thread.run(Thread.java:745)
>  
> at sqlline.IncrementalRows.hasNext(IncrementalRows.java:73)
> at sqlline.TableOutputFormat.print(TableOutputFormat.java:33)
> at sqlline.SqlLine.print(SqlLine.java:1653)
> at sqlline.Commands.execute(Commands.java:833)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:808)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
>  
> This is with Phoenix version 4.6.0 and HBase version 
> 0.98.4.2.2.6.0-2800-hadoop2.
>  
> This seems very strongly related to the MULTI_TENANT=true property on a view 
> or table.  I see the error whenever the view has MULTI_TENANT=true and I have 
> a tenant-specific connection to Phoenix.  I do not see the problem if the 
> MULTI_TENANT=true property is not set on the view or if I do not have a 
> tenant-specific connection to Phoenix.
>  
> Here is an example SQL statement that has this error when the view INVENTORY 
> has the MULTI_TENANT=true property and I have a tenant-specific connection, 
> but that succeeds in other cases. (The view PRODUCT_IDS is not Multi-Tenant.)
> SELECT * FROM INVENTORY INNER JOIN PRODUCT_IDS ON (PRODUCT_ID = INVENTORY.ID)
>  
> Note:  "INNER JOIN" fails under these conditions, as does "LEFT OUTER JOIN".  
> However, "RIGHT OUTER JOIN" and "FULL OUTER JOIN" do work.  Also, if I tell 
> Phoenix to use a Sort Join for the Inner Join or Left Outer Join then it does 
> work, e.g. SELECT /*+ USE_SORT_MERGE_JOIN*/ * FROM INVENTORY INNER JOIN 
> PRODUCT_IDS ON (PRODUCT_ID = INVENTORY.ID); works.
>  
> This seems to be the same problem that was discussed previously in this 
> mailing list:  
> https://mail-archives.apache.org/mod_mbox/phoenix-user/201507.mbox/%3ccaotkwx5xfbwkjf--0k-zj91tfdqwfq6rmuqw0r_lojcnj1a...@mail.gmail.com%3E
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2388) Support pooling Phoenix connections

2015-11-06 Thread James Taylor (JIRA)
James Taylor created PHOENIX-2388:
-

 Summary: Support pooling Phoenix connections
 Key: PHOENIX-2388
 URL: https://issues.apache.org/jira/browse/PHOENIX-2388
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor


Frequently, users are plugging Phoenix into an ecosystem that pools connections. 
We should support this by factoring out the code in the PhoenixConnection 
constructor (and calling it when a connection is taken out of a pool) and in 
PhoenixConnection.close (and calling it when a connection is returned to a pool).
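
Until then, a hedged application-side sketch of the idea (all names 
hypothetical; this is not the proposed Phoenix refactoring): a naive pool that 
re-applies constructor-style defaults on checkout and close()-style cleanup on 
checkin, instead of creating and closing a connection each time.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch only: reuses connections by resetting their state at
// the pool boundary, mirroring what the constructor and close() would do.
class NaivePhoenixPool {
    private final String url;
    private final Deque<Connection> idle = new ArrayDeque<>();

    NaivePhoenixPool(String url) { this.url = url; }

    synchronized Connection borrow() throws SQLException {
        Connection c = idle.poll();
        if (c == null) {
            c = DriverManager.getConnection(url); // constructor-time setup
        }
        c.setAutoCommit(true); // re-apply checkout-time defaults (here: auto-commit on)
        return c;
    }

    synchronized void release(Connection c) throws SQLException {
        if (!c.getAutoCommit()) {
            c.rollback(); // drop uncommitted mutations, as close() would
        }
        idle.push(c);
    }
}
{code}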



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (PHOENIX-2381) Inner Join with any table or view with Multi_Tenant=true causes "could not find hash cache for joinId" error

2015-11-06 Thread Don Brinn (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14994496#comment-14994496
 ] 

Don Brinn edited comment on PHOENIX-2381 at 11/6/15 10:27 PM:
--

Hi [~maryannxue].  

Thanks!

1. I am working to get legal clearance to attach the view definitions.  I will 
do that as soon as I can, probably on Monday.
2. I do not know how to turn on the client log to set its log level, or where 
to find the logs.  I have not been able to find information online about how to 
do that.
3. I have never patched Phoenix nor built it before.  I will work on doing that 
starting on Monday.
4. We pool connections in some applications for connecting to Phoenix.  But I 
also see this problem when connecting to Phoenix using sqlline.  I don't know 
if Phoenix is pooling its connections to HBase or how to check that.


was (Author: dbrinn):
Hi [~maryannxue].  

Thanks!

1. I am working to get legal clearance to attach the view definitions.  I will 
do that as soon as I can, probably on Monday.
2. I do not know how to turn on the client log to set its log level, or where 
to find the logs.  I have not been able to find information online about how to 
do that.
3. I have never patched Phoenix nor built it before.  I will work on doing that 
starting on Monday.
4. We pool connections in some applications for connecting to Phoenix.  But I 
also see this problem when connection to Phoenix using sqlline.  I don't know 
if Phoenix is pooling its connections to HBase or how to check that.

> Inner Join with any table or view with Multi_Tenant=true causes "could not 
> find hash cache for joinId" error
> 
>
> Key: PHOENIX-2381
> URL: https://issues.apache.org/jira/browse/PHOENIX-2381
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
> Environment: This is with Phoenix version 4.6.0 and HBase version 
> 0.98.4.2.2.6.0-2800-hadoop2.
>Reporter: Don Brinn
>Assignee: Maryann Xue
>  Labels: join, joins, multi-tenant
> Attachments: tmp-2381.patch
>
>
> I am seeing the following error when doing an INNER JOIN of a view with 
> MULTI_TENANT=true with any other table or view:
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: Ys�0��%�. The cache might have expired and have been removed.
> at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:95)
> at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:212)
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:178)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1931)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29994)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
> at java.lang.Thread.run(Thread.java:745)
>  
> at sqlline.IncrementalRows.hasNext(IncrementalRows.java:73)
> at sqlline.TableOutputFormat.print(TableOutputFormat.java:33)
> at sqlline.SqlLine.print(SqlLine.java:1653)
> at sqlline.Commands.execute(Commands.java:833)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:808)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
>  
> This is with Phoenix version 4.6.0 and HBase version 
> 0.98.4.2.2.6.0-2800-hadoop2.
>  
> This seems very strongly related to the MULTI_TENANT=true property on a view 
> or table.  I see the error whenever the view has MULTI_TENANT=true and I have 
> a tenant-specific connection to Phoenix.  I do not see the problem if the 
> MULTI_TENANT=true property is not set on the view or if I do not have a 
> tenant-specific connection to Phoenix.
>  
> Here is an example SQL statement that has this error when the view INVENTORY 
> has the MULTI_TENANT=true property and I have a tenant-specific connection, 
> but that succeeds in other cases. (The view PRODUCT_IDS is not Multi-Tenant.)

[jira] [Commented] (PHOENIX-2381) Inner Join with any table or view with Multi_Tenant=true causes "could not find hash cache for joinId" error

2015-11-06 Thread Don Brinn (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14994574#comment-14994574
 ] 

Don Brinn commented on PHOENIX-2381:


[~jamestaylor] do you mean connections *to* Phoenix?  When we pool connections 
to Phoenix, we pool them per-tenant (i.e. do not try to re-use a connection 
that was established for one tenant for use by a different tenant).  Also, I am 
seeing this problem even when I connect to Phoenix using sqlline.py, specifying 
the TenantId property when establishing the connection.  I have been assuming 
that with sqlline there is no connection pooling, but I really have very little 
experience with Phoenix and tools to connect to it.

> Inner Join with any table or view with Multi_Tenant=true causes "could not 
> find hash cache for joinId" error
> 
>
> Key: PHOENIX-2381
> URL: https://issues.apache.org/jira/browse/PHOENIX-2381
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
> Environment: This is with Phoenix version 4.6.0 and HBase version 
> 0.98.4.2.2.6.0-2800-hadoop2.
>Reporter: Don Brinn
>Assignee: Maryann Xue
>  Labels: join, joins, multi-tenant
> Attachments: tmp-2381.patch
>
>
> I am seeing the following error when doing an INNER JOIN of a view with 
> MULTI_TENANT=true with any other table or view:
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: Ys�0��%�. The cache might have expired and have been removed.
> at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:95)
> at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:212)
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:178)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1931)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29994)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
> at java.lang.Thread.run(Thread.java:745)
>  
> at sqlline.IncrementalRows.hasNext(IncrementalRows.java:73)
> at sqlline.TableOutputFormat.print(TableOutputFormat.java:33)
> at sqlline.SqlLine.print(SqlLine.java:1653)
> at sqlline.Commands.execute(Commands.java:833)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:808)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
>  
> This is with Phoenix version 4.6.0 and HBase version 
> 0.98.4.2.2.6.0-2800-hadoop2.
>  
> This seems very strongly related to the MULTI_TENANT=true property on a view 
> or table.  I see the error whenever the view has MULTI_TENANT=true and I have 
> a tenant-specific connection to Phoenix.  I do not see the problem if the 
> MULTI_TENANT=true property is not set on the view or if I do not have a 
> tenant-specific connection to Phoenix.
>  
> Here is an example SQL statement that has this error when the view INVENTORY 
> has the MULTI_TENANT=true property and I have a tenant-specific connection, 
> but that succeeds in other cases. (The view PRODUCT_IDS is not Multi-Tenant.)
> SELECT * FROM INVENTORY INNER JOIN PRODUCT_IDS ON (PRODUCT_ID = INVENTORY.ID)
>  
> Note:  "INNER JOIN" fails under these conditions, as does "LEFT OUTER JOIN".  
> However, "RIGHT OUTER JOIN" and "FULL OUTER JOIN" do work.  Also, if I tell 
> Phoenix to use a Sort Join for the Inner Join or Left Outer Join then it does 
> work, e.g. SELECT /*+ USE_SORT_MERGE_JOIN*/ * FROM INVENTORY INNER JOIN 
> PRODUCT_IDS ON (PRODUCT_ID = INVENTORY.ID); works.
>  
> This seems to be the same problem that was discussed previously in this 
> mailing list:  
> https://mail-archives.apache.org/mod_mbox/phoenix-user/201507.mbox/%3ccaotkwx5xfbwkjf--0k-zj91tfdqwfq6rmuqw0r_lojcnj1a...@mail.gmail.com%3E
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2381) Inner Join with any table or view with Multi_Tenant=true causes "could not find hash cache for joinId" error

2015-11-06 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14994554#comment-14994554
 ] 

James Taylor commented on PHOENIX-2381:
---

Phoenix connections are not meant to be pooled. This would be particularly 
problematic for multi-tenant connections which establish the tenant through a 
connection property.
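
For context, a hedged sketch of the connection property in question 
(hypothetical quorum host and tenant id); because the tenant is fixed at 
connection time, a pooled connection could not safely be reused by a different 
tenant:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class TenantConnectionSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("TenantId", "acme"); // fixed for the connection's lifetime
        try (Connection conn =
                DriverManager.getConnection("jdbc:phoenix:zkhost", props)) {
            // All statements on conn now run as tenant "acme"; the inline
            // URL form jdbc:phoenix:zkhost;TenantId=acme is equivalent.
        }
    }
}
{code}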

> Inner Join with any table or view with Multi_Tenant=true causes "could not 
> find hash cache for joinId" error
> 
>
> Key: PHOENIX-2381
> URL: https://issues.apache.org/jira/browse/PHOENIX-2381
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
> Environment: This is with Phoenix version 4.6.0 and HBase version 
> 0.98.4.2.2.6.0-2800-hadoop2.
>Reporter: Don Brinn
>Assignee: Maryann Xue
>  Labels: join, joins, multi-tenant
> Attachments: tmp-2381.patch
>
>
> I am seeing the following error when doing an INNER JOIN of a view with 
> MULTI_TENANT=true with any other table or view:
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: Ys�0��%�. The cache might have expired and have been removed.
> at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:95)
> at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:212)
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:178)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1931)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29994)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
> at java.lang.Thread.run(Thread.java:745)
>  
> at sqlline.IncrementalRows.hasNext(IncrementalRows.java:73)
> at sqlline.TableOutputFormat.print(TableOutputFormat.java:33)
> at sqlline.SqlLine.print(SqlLine.java:1653)
> at sqlline.Commands.execute(Commands.java:833)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:808)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
>  
> This is with Phoenix version 4.6.0 and HBase version 
> 0.98.4.2.2.6.0-2800-hadoop2.
>  
> This seems very strongly related to the MULTI_TENANT=true property on a view 
> or table.  I see the error whenever the view has MULTI_TENANT=true and I have 
> a tenant-specific connection to Phoenix.  I do not see the problem if the 
> MULTI_TENANT=true property is not set on the view or if I do not have a 
> tenant-specific connection to Phoenix.
>  
> Here is an example SQL statement that has this error when the view INVENTORY 
> has the MULTI_TENANT=true property and I have a tenant-specific connection, 
> but that succeeds in other cases. (The view PRODUCT_IDS is not Multi-Tenant.)
> SELECT * FROM INVENTORY INNER JOIN PRODUCT_IDS ON (PRODUCT_ID = INVENTORY.ID)
>  
> Note:  "INNER JOIN" fails under these conditions, as does "LEFT OUTER JOIN".  
> However, "RIGHT OUTER JOIN" and "FULL OUTER JOIN" do work.  Also, if I tell 
> Phoenix to use a Sort Join for the Inner Join or Left Outer Join then it does 
> work, e.g. SELECT /*+ USE_SORT_MERGE_JOIN*/ * FROM INVENTORY INNER JOIN 
> PRODUCT_IDS ON (PRODUCT_ID = INVENTORY.ID); works.
>  
> This seems to be the same problem that was discussed previously in this 
> mailing list:  
> https://mail-archives.apache.org/mod_mbox/phoenix-user/201507.mbox/%3ccaotkwx5xfbwkjf--0k-zj91tfdqwfq6rmuqw0r_lojcnj1a...@mail.gmail.com%3E
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2387) Phoenix sandbox doesn't launch

2015-11-06 Thread Josh Mahonin (JIRA)
Josh Mahonin created PHOENIX-2387:
-

 Summary: Phoenix sandbox doesn't launch
 Key: PHOENIX-2387
 URL: https://issues.apache.org/jira/browse/PHOENIX-2387
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.6.0
Reporter: Josh Mahonin
Priority: Minor


Just tried the stock 4.6.0 download to check a user issue and ran the sandbox. 
This was the output:

./phoenix_sandbox.py 
Traceback (most recent call last):
  File "./phoenix_sandbox.py", line 36, in <module>
sys.err.write("cached_classpath.txt is not present under "
AttributeError: 'module' object has no attribute 'err'

There's no 'cached_classpath.txt' file in the folder
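
Incidentally, the AttributeError comes from the error path itself: the module 
attribute is sys.stderr, not sys.err, so the script crashes before it can print 
the real complaint about the missing file. A hedged sketch of the corrected 
call (the full message string is truncated in the traceback above):

{code:python}
import sys

# Sketch: sys.err does not exist; sys.stderr is the standard-error stream.
sys.stderr.write("cached_classpath.txt is not present under ...\n")  # "..." = truncated text
sys.exit(1)
{code}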



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2381) Inner Join with any table or view with Multi_Tenant=true causes "could not find hash cache for joinId" error

2015-11-06 Thread Don Brinn (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14994496#comment-14994496
 ] 

Don Brinn commented on PHOENIX-2381:


Hi [~maryannxue].  

Thanks!

1. I am working to get legal clearance to attach the view definitions.  I will 
do that as soon as I can, probably on Monday.
2. I do not know how to turn on the client log to set its log level, or where 
to find the logs.  I have not been able to find information online about how to 
do that.
3. I have never patched Phoenix nor built it before.  I will work on doing that 
starting on Monday.
4. We pool connections in some applications for connecting to Phoenix.  But I 
> also see this problem when connecting to Phoenix using sqlline.  I don't know 
if Phoenix is pooling its connections to HBase or how to check that.

> Inner Join with any table or view with Multi_Tenant=true causes "could not 
> find hash cache for joinId" error
> 
>
> Key: PHOENIX-2381
> URL: https://issues.apache.org/jira/browse/PHOENIX-2381
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
> Environment: This is with Phoenix version 4.6.0 and HBase version 
> 0.98.4.2.2.6.0-2800-hadoop2.
>Reporter: Don Brinn
>Assignee: Maryann Xue
>  Labels: join, joins, multi-tenant
> Attachments: tmp-2381.patch
>
>
> I am seeing the following error when doing an INNER JOIN of a view with 
> MULTI_TENANT=true with any other table or view:
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: Ys�0��%�. The cache might have expired and have been removed.
> at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:95)
> at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:212)
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:178)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1931)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29994)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
> at java.lang.Thread.run(Thread.java:745)
>  
> at sqlline.IncrementalRows.hasNext(IncrementalRows.java:73)
> at sqlline.TableOutputFormat.print(TableOutputFormat.java:33)
> at sqlline.SqlLine.print(SqlLine.java:1653)
> at sqlline.Commands.execute(Commands.java:833)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:808)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
>  
> This is with Phoenix version 4.6.0 and HBase version 
> 0.98.4.2.2.6.0-2800-hadoop2.
>  
> This seems very strongly related to the MULTI_TENANT=true property on a view 
> or table.  I see the error whenever the view has MULTI_TENANT=true and I have 
> a tenant-specific connection to Phoenix.  I do not see the problem if the 
> MULTI_TENANT=true property is not set on the view or if I do not have a 
> tenant-specific connection to Phoenix.
>  
> Here is an example SQL statement that has this error when the view INVENTORY 
> has the MULTI_TENANT=true property and I have a tenant-specific connection, 
> but that succeeds in other cases. (The view PRODUCT_IDS is not Multi-Tenant.)
> SELECT * FROM INVENTORY INNER JOIN PRODUCT_IDS ON (PRODUCT_ID = INVENTORY.ID)
>  
> Note:  "INNER JOIN" fails under these conditions, as does "LEFT OUTER JOIN".  
> However, "RIGHT OUTER JOIN" and "FULL OUTER JOIN" do work.  Also, if I tell 
> Phoenix to use a Sort Join for the Inner Join or Left Outer Join then it does 
> work, e.g. SELECT /*+ USE_SORT_MERGE_JOIN*/ * FROM INVENTORY INNER JOIN 
> PRODUCT_IDS ON (PRODUCT_ID = INVENTORY.ID); works.
>  
> This seems to be the same problem that was discussed previously in this 
> mailing list:  
> https://mail-archives.apache.org/mod_mbox/phoenix-user/201507.mbox/%3ccaotkwx5xfbwkjf--0k-zj91tfdqwfq6rmuqw0r_lojcnj1a...@mail.gmail.com%3E
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Commented] (PHOENIX-2373) Change ReserveNSequence UDF to take in zookeeper and tenantId as param

2015-11-06 Thread Siddhi Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14994423#comment-14994423
 ] 

Siddhi Mehta commented on PHOENIX-2373:
---

[~maghamraviki...@gmail.com] The Apache license header was placed after the 
package declaration in ReserveNSequence.java.
I moved it up to before the package declaration. 
Is that order incorrect?
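
For reference, the conventional layout in Apache projects puts the license 
comment at the very top of the file, before the package statement, which 
matches the order described above. A minimal sketch (header text abbreviated; 
package name assumed from the patch under review):

{code:java}
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. ... (standard ASF header)
 */
package org.apache.phoenix.pig.udf; // package declaration follows the header

public class ReserveNSequence {
    // class body unchanged
}
{code}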

> Change ReserveNSequence UDF to take in zookeeper and tenantId as param
> ---
>
> Key: PHOENIX-2373
> URL: https://issues.apache.org/jira/browse/PHOENIX-2373
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Siddhi Mehta
>Assignee: Siddhi Mehta
>Priority: Minor
> Attachments: PHOENIX-2373.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Currently the UDF reads the zookeeper quorum from the tuple value, and the 
> tenantId is passed in from the jobConf.
> Instead, we want to change the UDF to take both the zookeeper quorum and the 
> tenantId as params passed to the UDF explicitly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2288) Phoenix-Spark: PDecimal precision and scale aren't carried through to Spark DataFrame

2015-11-06 Thread Josh Mahonin (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14994204#comment-14994204
 ] 

Josh Mahonin commented on PHOENIX-2288:
---

[~jamestaylor] For review please!

> Phoenix-Spark: PDecimal precision and scale aren't carried through to Spark 
> DataFrame
> -
>
> Key: PHOENIX-2288
> URL: https://issues.apache.org/jira/browse/PHOENIX-2288
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.2
>Reporter: Josh Mahonin
> Attachments: PHOENIX-2288-v2.patch, PHOENIX-2288.patch
>
>
> When loading a Spark dataframe from a Phoenix table with a 'DECIMAL' type, 
> the underlying precision and scale aren't carried forward to Spark.
> The Spark catalyst schema converter should load these from the underlying 
> column. These appear to be exposed in the ResultSetMetaData, but if there was 
> a way to expose these somehow through ColumnInfo, it would be cleaner.
> I'm not sure if Pig has the same issues or not, but I suspect it may.
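
A hedged sketch of the ResultSetMetaData route mentioned above, against a 
hypothetical table ORDERS with a DECIMAL(10, 2) column AMOUNT; getPrecision and 
getScale are the standard JDBC accessors the converter could draw from:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;

public class DecimalMetaSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zkhost");
             ResultSet rs = conn.createStatement()
                     .executeQuery("SELECT AMOUNT FROM ORDERS LIMIT 1")) {
            ResultSetMetaData md = rs.getMetaData();
            System.out.println(md.getPrecision(1)); // 10 for DECIMAL(10, 2)
            System.out.println(md.getScale(1));     // 2
        }
    }
}
{code}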



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2288) Phoenix-Spark: PDecimal precision and scale aren't carried through to Spark DataFrame

2015-11-06 Thread Josh Mahonin (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Mahonin updated PHOENIX-2288:
--
Attachment: PHOENIX-2288-v2.patch

Fixed type comparisons in ColumnInfo.create()

Fleshed out ColumnInfo.toTypeString() to include all data types with precision 
/ scale.

Added recommended unit tests for arrays. Previous unit tests covered fixed and 
var length chars.

> Phoenix-Spark: PDecimal precision and scale aren't carried through to Spark 
> DataFrame
> -
>
> Key: PHOENIX-2288
> URL: https://issues.apache.org/jira/browse/PHOENIX-2288
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.2
>Reporter: Josh Mahonin
> Attachments: PHOENIX-2288-v2.patch, PHOENIX-2288.patch
>
>
> When loading a Spark dataframe from a Phoenix table with a 'DECIMAL' type, 
> the underlying precision and scale aren't carried forward to Spark.
> The Spark catalyst schema converter should load these from the underlying 
> column. These appear to be exposed in the ResultSetMetaData, but if there was 
> a way to expose these somehow through ColumnInfo, it would be cleaner.
> I'm not sure if Pig has the same issues or not, but I suspect it may.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2381) Inner Join with any table or view with Multi_Tenant=true causes "could not find hash cache for joinId" error

2015-11-06 Thread Maryann Xue (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14993983#comment-14993983
 ] 

Maryann Xue commented on PHOENIX-2381:
--

[~dbrinn] Do you also pool connections?

> Inner Join with any table or view with Multi_Tenant=true causes "could not 
> find hash cache for joinId" error
> 
>
> Key: PHOENIX-2381
> URL: https://issues.apache.org/jira/browse/PHOENIX-2381
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
> Environment: This is with Phoenix version 4.6.0 and HBase version 
> 0.98.4.2.2.6.0-2800-hadoop2.
>Reporter: Don Brinn
>Assignee: Maryann Xue
>  Labels: join, joins, multi-tenant
> Attachments: tmp-2381.patch
>
>
> I am seeing the following error when doing an INNER JOIN of a view with 
> MULTI_TENANT=true with any other table or view:
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: Ys�0��%�. The cache might have expired and have been removed.
> at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:95)
> at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:212)
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:178)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1931)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29994)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
> at java.lang.Thread.run(Thread.java:745)
>  
> at sqlline.IncrementalRows.hasNext(IncrementalRows.java:73)
> at sqlline.TableOutputFormat.print(TableOutputFormat.java:33)
> at sqlline.SqlLine.print(SqlLine.java:1653)
> at sqlline.Commands.execute(Commands.java:833)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:808)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
>  
> This is with Phoenix version 4.6.0 and HBase version 
> 0.98.4.2.2.6.0-2800-hadoop2.
>  
> This seems very strongly related to the MULTI_TENANT=true property on a view 
> or table.  I see the error whenever the view has MULTI_TENANT=true and I have 
> a tenant-specific connection to Phoenix.  I do not see the problem if the 
> MULTI_TENANT=true property is not set on the view or if I do not have a 
> tenant-specific connection to Phoenix.
>  
> Here is an example SQL statement that has this error when the view INVENTORY 
> has the MULTI_TENANT=true property and I have a tenant-specific connection, 
> but that succeeds in other cases. (The view PRODUCT_IDS is not Multi-Tenant.)
> SELECT * FROM INVENTORY INNER JOIN PRODUCT_IDS ON (PRODUCT_ID = INVENTORY.ID)
>  
> Note:  "INNER JOIN" fails under these conditions, as does "LEFT OUTER JOIN".  
> However, "RIGHT OUTER JOIN" and "FULL OUTER JOIN" do work.  Also, if I tell 
> Phoenix to use a Sort Join for the Inner Join or Left Outer Join then it does 
> work, e.g. SELECT /*+ USE_SORT_MERGE_JOIN*/ * FROM INVENTORY INNER JOIN 
> PRODUCT_IDS ON (PRODUCT_ID = INVENTORY.ID); works.
>  
> This seems to be the same problem that was discussed previously in this 
> mailing list:  
> https://mail-archives.apache.org/mod_mbox/phoenix-user/201507.mbox/%3ccaotkwx5xfbwkjf--0k-zj91tfdqwfq6rmuqw0r_lojcnj1a...@mail.gmail.com%3E
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2381) Inner Join with any table or view with Multi_Tenant=true causes "could not find hash cache for joinId" error

2015-11-06 Thread Don Brinn (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Don Brinn updated PHOENIX-2381:
---
Description: 
I am seeing the following error when doing an INNER JOIN of a view with 
MULTI_TENANT=true with any other table or view:
java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
joinId: Ys�0��%�. The cache might have expired and have been removed.
at 
org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:95)
at 
org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:212)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:178)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1931)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29994)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
at java.lang.Thread.run(Thread.java:745)
 
at sqlline.IncrementalRows.hasNext(IncrementalRows.java:73)
at sqlline.TableOutputFormat.print(TableOutputFormat.java:33)
at sqlline.SqlLine.print(SqlLine.java:1653)
at sqlline.Commands.execute(Commands.java:833)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:808)
at sqlline.SqlLine.begin(SqlLine.java:681)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
 
This is with Phoenix version 4.6.0 and HBase version 
0.98.4.2.2.6.0-2800-hadoop2.
 
This seems very strongly related to the MULTI_TENANT=true property on a view or 
table.  I see the error whenever the view has MULTI_TENANT=true and I have a 
tenant-specific connection to Phoenix.  I do not see the problem if the 
MULTI_TENANT=true property is not set on the view or if I do not have a 
tenant-specific connection to Phoenix.
 
Here is an example SQL statement that has this error when the view INVENTORY 
has the MULTI_TENANT=true property and I have a tenant-specific connection, but 
that succeeds in other cases. (The view PRODUCT_IDS is not Multi-Tenant.)
SELECT * FROM INVENTORY INNER JOIN PRODUCT_IDS ON (PRODUCT_ID = INVENTORY.ID)
 
Note:  "INNER JOIN" fails under these conditions, as does "LEFT OUTER JOIN".  
However, "RIGHT OUTER JOIN" and "FULL OUTER JOIN" do work.  Also, if I tell 
Phoenix to use a Sort Join for the Inner Join or Left Outer Join then it does 
work, e.g. SELECT /*+ USE_SORT_MERGE_JOIN*/ * FROM INVENTORY INNER JOIN 
PRODUCT_IDS ON (PRODUCT_ID = INVENTORY.ID); works.
 
This seems to be the same problem that was discussed previously in this mailing 
list:  
https://mail-archives.apache.org/mod_mbox/phoenix-user/201507.mbox/%3ccaotkwx5xfbwkjf--0k-zj91tfdqwfq6rmuqw0r_lojcnj1a...@mail.gmail.com%3E
 


  was:
I am seeing the following error when doing an INNER JOIN of a view with 
MULTI_TENANT=true with any other table or view:
java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
joinId: Ys�0��%�. The cache might have expired and have been removed.
at 
org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:95)
at 
org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:212)
at 
org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:178)
at 
org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1931)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29994)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
at java.lang.Thread.run(Thread.java:745)
 
at sqlline.IncrementalRows.hasNext(IncrementalRows.java:73)
 

[jira] [Commented] (PHOENIX-2381) Inner Join with any table or view with Multi_Tenant=true causes "could not find hash cache for joinId" error

2015-11-06 Thread Maryann Xue (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14993890#comment-14993890
 ] 

Maryann Xue commented on PHOENIX-2381:
--

Hi, [~dbrinn], I was trying to reproduce this problem but haven't got there 
yet. So would you mind sharing the table/view definition and helping me do a 
few more tests?

1. Turn on the client log and set the log level to "DEBUG" (a sample config is 
sketched after this comment), find a message containing "NOT adding cache entry 
to be sent for ", and post the entire message here together with the tenant id 
you used in the connection.
2. Could you try the temporary patch "tmp-2381.patch" and see if the problem 
persists?
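
Re item 1, a minimal client-side log4j.properties sketch, assuming the client 
picks up a log4j configuration from its classpath (e.g. the sqlline launch 
directory):

{code}
# Sketch: raise the client log level to DEBUG on the console.
log4j.rootLogger=DEBUG, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c: %m%n
{code}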

> Inner Join with any table or view with Multi_Tenant=true causes "could not 
> find hash cache for joinId" error
> 
>
> Key: PHOENIX-2381
> URL: https://issues.apache.org/jira/browse/PHOENIX-2381
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
> Environment: This is with Phoenix version 4.6.0 and HBase version 
> 0.98.4.2.2.6.0-2800-hadoop2.
>Reporter: Don Brinn
>Assignee: Maryann Xue
>  Labels: join, joins, multi-tenant
> Attachments: tmp-2381.patch
>
>
> I am seeing the following error when doing an INNER JOIN of a view with 
> MULTI_TENANT=true with any other table or view:
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: Ys�0��%�. The cache might have expired and have been removed.
> at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:95)
> at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:212)
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:178)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1931)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29994)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
> at java.lang.Thread.run(Thread.java:745)
>  
> at sqlline.IncrementalRows.hasNext(IncrementalRows.java:73)
> at sqlline.TableOutputFormat.print(TableOutputFormat.java:33)
> at sqlline.SqlLine.print(SqlLine.java:1653)
> at sqlline.Commands.execute(Commands.java:833)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:808)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
>  
> This is with Phoenix version 4.6.0 and HBase version 
> 0.98.4.2.2.6.0-2800-hadoop2.
>  
> This seems very strongly related to the MULTI_TENANT=true property on a view 
> or table.  I see the error whenever the view has MULTI_TENANT=true and I have 
> a tenant-specific connection to Phoenix.  I do not see the problem if the 
> MULTI_TENANT=true property is not set on the view or if I do not have a 
> tenant-specific connection to Phoenix.
>  
> Here is an example SQL statement that has this error when the view INVENTORY 
> has the MULTI_TENANT=true property and I have a tenant-specific connection, 
> but that succeeds in other cases. (The view PRODUCT_IDS is not Multi-Tenant.)
> SELECT * FROM INVENTORY INNER JOIN PRODUCT_IDS ON (PRODUCT_ID = INVENTORY.ID)
>  
> Note:  "INNER JOIN" fails under these conditions, as does "LEFT OUTER JOIN".  
> However, "RIGHT OUTER JOIN" and "FULL OUTER JOIN" do work.  Also, if I tell 
> Phoenix to use a Sort Join for the Inner Join or Left Outer Join then it does 
> work, e.g.  SELECT /*+ USE_SORT_MERGE_JOIN*/ * FROM INVENTORY INNER JOIN 
> PRODUCT_IDS ON (PRODUCT_ID = INVENTORY.ID); works.
>  
> This seems to be the same problem that was discussed previously in this 
> mailing list:  
> https://mail-archives.apache.org/mod_mbox/phoenix-user/201507.mbox/%3ccaotkwx5xfbwkjf--0k-zj91tfdqwfq6rmuqw0r_lojcnj1a...@mail.gmail.com%3E
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2381) Inner Join with any table or view with Multi_Tenant=true causes "could not find hash cache for joinId" error

2015-11-06 Thread Maryann Xue (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maryann Xue updated PHOENIX-2381:
-
Attachment: tmp-2381.patch

> Inner Join with any table or view with Multi_Tenant=true causes "could not 
> find hash cache for joinId" error
> 
>
> Key: PHOENIX-2381
> URL: https://issues.apache.org/jira/browse/PHOENIX-2381
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
> Environment: This is with Phoenix version 4.6.0 and HBase version 
> 0.98.4.2.2.6.0-2800-hadoop2.
>Reporter: Don Brinn
>Assignee: Maryann Xue
>  Labels: join, joins, multi-tenant
> Attachments: tmp-2381.patch
>
>
> I am seeing the following error when doing an INNER JOIN of a view with 
> MULTI_TENANT=true with any other table or view:
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: Ys�0��%�. The cache might have expired and have been removed.
> at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:95)
> at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:212)
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:178)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1931)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.scan(HRegionServer.java:3178)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29994)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
> at java.lang.Thread.run(Thread.java:745)
>  
> at sqlline.IncrementalRows.hasNext(IncrementalRows.java:73)
> at sqlline.TableOutputFormat.print(TableOutputFormat.java:33)
> at sqlline.SqlLine.print(SqlLine.java:1653)
> at sqlline.Commands.execute(Commands.java:833)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:808)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
>  
> This is with Phoenix version 4.6.0 and HBase version 
> 0.98.4.2.2.6.0-2800-hadoop2.
>  
> This seems very strongly related to the MULTI_TENANT=true property on a view 
> or table.  I see the error whenever the view has MULTI_TENANT=true and I have 
> a tenant-specific connection to Phoenix.  I do not see the problem if the 
> MULTI_TENANT=true property is not set on the view or if I do not have a 
> tenant-specific connection to Phoenix.
>  
> Here is an example SQL statement that has this error when the view INVENTORY 
> has the MULTI_TENANT=true property and I have a tenant-specific connection, 
> but that succeeds in other cases. (The view PRODUCT_IDS is not Multi-Tenant.)
> SELECT * FROM INVENTORY INNER JOIN PRODUCT_IDS ON (PRODUCT_ID = INVENTORY.ID)
>  
> Note:  "INNER JOIN" fails under these conditions, as does "LEFT OUTER JOIN".  
> However, "RIGHT OUTER JOIN" and "FULL OUTER JOIN" do work.  Also, if I tell 
> Phoenix to use a Sort Join for the Inner Join or Left Outer Join then it does 
> work, e.g.  SELECT /*+ USE_SORT_MERGE_JOIN*/ * FROM INVENTORY INNER JOIN 
> PRODUCT_IDS ON (PRODUCT_ID = INVENTORY.ID); works.
>  
> This seems to be the same problem that was discussed previously in this 
> mailing list:  
> https://mail-archives.apache.org/mod_mbox/phoenix-user/201507.mbox/%3ccaotkwx5xfbwkjf--0k-zj91tfdqwfq6rmuqw0r_lojcnj1a...@mail.gmail.com%3E
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2372) PhoenixResultSet.getDate(int, Calendar) causes NPE on a null value

2015-11-06 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2372?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14993862#comment-14993862
 ] 

James Taylor commented on PHOENIX-2372:
---

+1. Thanks, [~elserj]. Anyone out there have time to commit to 4.x and master 
branches? [~ndimiduk]?

> PhoenixResultSet.getDate(int, Calendar) causes NPE on a null value
> --
>
> Key: PHOENIX-2372
> URL: https://issues.apache.org/jira/browse/PHOENIX-2372
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0, 4.5.0, 4.6.0
>Reporter: Josh Elser
>Assignee: Josh Elser
> Attachments: PHOENIX-2372.001.patch, PHOENIX-2372.002.patch, 
> PHOENIX-2372.003.patch
>
>
> Ran a simple query through PQS:
> {code}
> select * from system.stats;
> {code}
> and got back a stack trace (trimmed for relevance)
> {noformat}
> java.lang.NullPointerException
> at java.util.Calendar.setTime(Calendar.java:1770)
> at 
> org.apache.phoenix.jdbc.PhoenixResultSet.getDate(PhoenixResultSet.java:377)
> at 
> org.apache.calcite.avatica.jdbc.JdbcResultSet.getValue(JdbcResultSet.java:172)
> at 
> org.apache.calcite.avatica.jdbc.JdbcResultSet.frame(JdbcResultSet.java:142)
> {noformat}
> It looks like the {{getDate(int, Calendar)}} method on PhoenixResultSet 
> doesn't check the value before passing it into the calendar.
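
The trace suggests a straightforward null guard before Calendar.setTime, in 
line with how JDBC getters return null for SQL NULL. A hedged sketch of the 
shape of the fix (simplified, not the committed patch):

{code:java}
// Sketch of a null-safe PhoenixResultSet.getDate(int, Calendar):
public java.sql.Date getDate(int columnIndex, java.util.Calendar cal) throws java.sql.SQLException {
    java.sql.Date date = getDate(columnIndex); // null for SQL NULL
    if (date == null) {
        return null; // avoid Calendar.setTime(null) throwing NPE
    }
    cal.setTime(date);
    return date; // existing calendar-adjustment logic omitted in this sketch
}
{code}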



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2301) NullPointerException when upserting into a char array column

2015-11-06 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14993833#comment-14993833
 ] 

James Taylor commented on PHOENIX-2301:
---

Thanks for the explanation, [~Dumindux]. +1 to your fix. Please commit to 4.x 
and master branches.

> NullPointerException when upserting into a char array column
> 
>
> Key: PHOENIX-2301
> URL: https://issues.apache.org/jira/browse/PHOENIX-2301
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.2
>Reporter: Julian Jaffe
>Assignee: Dumindu Buddhika
> Attachments: PHOENIX-2301.patch
>
>
> Attempting to upsert into a char array causes an NPE. Minimum example:
> {code:sql}
> 0: jdbc:phoenix:xx> CREATE TABLE IF NOT EXISTS TEST("testIntArray" 
> INTEGER[], CONSTRAINT "test_pk" PRIMARY KEY("testIntArray")) 
> DEFAULT_COLUMN_FAMILY='T';
> No rows affected (1.28 seconds)
> 0: jdbc:phoenix:xx> UPSERT INTO TEST VALUES (ARRAY[1, 2, 3]);
> 1 row affected (0.184 seconds)
> 0: jdbc:phoenix:xx> SELECT * FROM TEST;
> +--+
> |   testIntArray   |
> +--+
> | [1, 2, 3]|
> +--+
> 1 row selected (0.308 seconds)
> 0: jdbc:phoenix:xx> DROP TABLE IF EXISTS TEST;
> No rows affected (3.348 seconds)
> 0: jdbc:phoenix:xx> CREATE TABLE IF NOT EXISTS TEST("testCharArray" 
> CHAR(3)[], CONSTRAINT "test_pk" PRIMARY KEY("testCharArray")) 
> DEFAULT_COLUMN_FAMILY='T';
> No rows affected (1.446 seconds)
> 0: jdbc:phoenix:xx> UPSERT INTO TEST VALUES (ARRAY['aaa', 'bbb', 'ccc']);
> java.lang.NullPointerException
>   at 
> org.apache.phoenix.schema.types.PArrayDataType.createPhoenixArray(PArrayDataType.java:1123)
>   at 
> org.apache.phoenix.schema.types.PArrayDataType.toObject(PArrayDataType.java:338)
>   at 
> org.apache.phoenix.schema.types.PCharArray.toObject(PCharArray.java:64)
>   at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:967)
>   at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:1008)
>   at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:1004)
>   at org.apache.phoenix.util.SchemaUtil.toString(SchemaUtil.java:381)
>   at org.apache.phoenix.schema.PTableImpl.newKey(PTableImpl.java:572)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.setValues(UpsertCompiler.java:117)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.access$400(UpsertCompiler.java:98)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$3.execute(UpsertCompiler.java:821)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:319)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:311)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:309)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1432)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:808)
>   at sqlline.SqlLine.begin(SqlLine.java:681)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:292)
> 0: jdbc:phoenix:xx> SELECT * FROM TEST;
> +---+
> | testCharArray |
> +---+
> +---+
> No rows selected (0.169 seconds)
> 0: jdbc:phoenix:xx> SELECT "testCharArray" FROM TEST;
> +---+
> | testCharArray |
> +---+
> +---+
> No rows selected (0.182 seconds)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-2291) Error: Could not find hash cache for joinId

2015-11-06 Thread Maryann Xue (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maryann Xue resolved PHOENIX-2291.
--
Resolution: Duplicate

> Error: Could not find hash cache for joinId
> ---
>
> Key: PHOENIX-2291
> URL: https://issues.apache.org/jira/browse/PHOENIX-2291
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.5.0
> Environment: Centos7 cluster, HBase 1.1, Phoenix 4.5.0
>Reporter: Dan Meany
>Assignee: Maryann Xue
>Priority: Minor
>
> Intermittently get the error below when joining two tables (~10k rows each).
> May be load-related.
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: �X�w��ZY. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:96)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:213)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:179)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1316)
> Query plan looks like:
> | CLIENT 40-CHUNK PARALLEL 40-WAY FULL SCAN OVER TABLE1_IDX |
> | SERVER FILTER BY FIRST KEY ONLY  |
> | PARALLEL LEFT-JOIN TABLE 0   |
> | CLIENT 40-CHUNK PARALLEL 40-WAY FULL SCAN OVER TABLE2_IDX |
> | AFTER-JOIN SERVER FILTER BY "MP.:ID" IS NULL |
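
The failure mode quoted above is worth spelling out: the server-side cache that holds the broadcast side of a hash join is keyed by joinId and evicted after a time-to-live, so a scan that reaches a region server after eviction fails fast rather than retrying. A self-contained Java sketch of that lookup pattern (illustrative names only, not Phoenix's actual classes):

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative TTL cache: join data is stored per joinId, and any lookup
// after expiry fails fast, mirroring the DoNotRetryIOException above.
public class JoinCacheSketch {
    private static class Entry {
        final byte[] hashTable;
        final long expiresAtMs;
        Entry(byte[] hashTable, long expiresAtMs) {
            this.hashTable = hashTable;
            this.expiresAtMs = expiresAtMs;
        }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<String, Entry>();

    public void put(String joinId, byte[] hashTable, long ttlMs) {
        cache.put(joinId, new Entry(hashTable, System.currentTimeMillis() + ttlMs));
    }

    public byte[] lookup(String joinId) {
        Entry e = cache.get(joinId);
        if (e == null || System.currentTimeMillis() > e.expiresAtMs) {
            cache.remove(joinId); // lazily evict an expired entry
            throw new IllegalStateException("Could not find hash cache for joinId: "
                    + joinId + ". The cache might have expired and have been removed.");
        }
        return e.hashTable;
    }
}
{code}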



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2291) Error: Could not find hash cache for joinId

2015-11-06 Thread Maryann Xue (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14993664#comment-14993664
 ] 

Maryann Xue commented on PHOENIX-2291:
--

Thank you, [~danmeany], for the confirmation! I'll mark this one as a 
duplicate, and you can watch the other one.

> Error: Could not find hash cache for joinId
> ---
>
> Key: PHOENIX-2291
> URL: https://issues.apache.org/jira/browse/PHOENIX-2291
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.5.0
> Environment: Centos7 cluster, HBase 1.1, Phoenix 4.5.0
>Reporter: Dan Meany
>Assignee: Maryann Xue
>Priority: Minor
>
> Intermittently get the error below when joining two tables (~10k rows each).
> May be load-related.
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: �X�w��ZY. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:96)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:213)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:179)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1316)
> Query plan looks like:
> | CLIENT 40-CHUNK PARALLEL 40-WAY FULL SCAN OVER TABLE1_IDX |
> | SERVER FILTER BY FIRST KEY ONLY  |
> | PARALLEL LEFT-JOIN TABLE 0   |
> | CLIENT 40-CHUNK PARALLEL 40-WAY FULL SCAN OVER TABLE2_IDX |
> | AFTER-JOIN SERVER FILTER BY "MP.:ID" IS NULL |



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2301) NullPointerException when upserting into a char array column

2015-11-06 Thread Dumindu Buddhika (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14993618#comment-14993618
 ] 

Dumindu Buddhika commented on PHOENIX-2301:
---

There is an NPE, but it is thrown from
{code}
throw new DataExceedsCapacityException(name.getString() + "." + 
column.getName().getString() + " may not exceed " + maxLength + " bytes (" + 
SchemaUtil.toString(type, byteValue) + ")");
{code}

while generating the SchemaUtil.toString() output, since we do not pass in 
maxLength there. But now that we are skipping this check for arrays, it won't 
be a problem.
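
Concretely, the pattern described above is an error-message helper that unboxes a maxLength it was never given, so the capacity-check path itself throws an NPE before the intended DataExceedsCapacityException can be raised. A stand-alone sketch of that pattern (illustrative only, not Phoenix code):

{code:java}
// Stand-in for the SchemaUtil.toString(type, byteValue) call above: rendering
// a fixed-width CHAR array needs the element maxLength to slice the bytes.
public class MaxLengthNpeSketch {
    static String format(byte[] value, Integer maxLength) {
        int width = maxLength; // unboxing a null maxLength throws the NPE here
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < value.length; i += width) {
            if (i > 0) sb.append(", ");
            sb.append(new String(value, i, width));
        }
        return sb.append("]").toString();
    }

    public static void main(String[] args) {
        byte[] encoded = "aaabbbccc".getBytes();
        System.out.println(format(encoded, 3));    // prints [aaa, bbb, ccc]
        System.out.println(format(encoded, null)); // NullPointerException, as reported
    }
}
{code}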

> NullPointerException when upserting into a char array column
> 
>
> Key: PHOENIX-2301
> URL: https://issues.apache.org/jira/browse/PHOENIX-2301
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.2
>Reporter: Julian Jaffe
>Assignee: Dumindu Buddhika
> Attachments: PHOENIX-2301.patch
>
>
> Attempting to upsert into a char array column causes an NPE. Minimal example:
> {code:sql}
> 0: jdbc:phoenix:xx> CREATE TABLE IF NOT EXISTS TEST("testIntArray" 
> INTEGER[], CONSTRAINT "test_pk" PRIMARY KEY("testIntArray")) 
> DEFAULT_COLUMN_FAMILY='T';
> No rows affected (1.28 seconds)
> 0: jdbc:phoenix:xx> UPSERT INTO TEST VALUES (ARRAY[1, 2, 3]);
> 1 row affected (0.184 seconds)
> 0: jdbc:phoenix:xx> SELECT * FROM TEST;
> +------------------+
> |   testIntArray   |
> +------------------+
> | [1, 2, 3]        |
> +------------------+
> 1 row selected (0.308 seconds)
> 0: jdbc:phoenix:xx> DROP TABLE IF EXISTS TEST;
> No rows affected (3.348 seconds)
> 0: jdbc:phoenix:xx> CREATE TABLE IF NOT EXISTS TEST("testCharArray" 
> CHAR(3)[], CONSTRAINT "test_pk" PRIMARY KEY("testCharArray")) 
> DEFAULT_COLUMN_FAMILY='T';
> No rows affected (1.446 seconds)
> 0: jdbc:phoenix:xx> UPSERT INTO TEST VALUES (ARRAY['aaa', 'bbb', 'ccc']);
> java.lang.NullPointerException
>   at 
> org.apache.phoenix.schema.types.PArrayDataType.createPhoenixArray(PArrayDataType.java:1123)
>   at 
> org.apache.phoenix.schema.types.PArrayDataType.toObject(PArrayDataType.java:338)
>   at 
> org.apache.phoenix.schema.types.PCharArray.toObject(PCharArray.java:64)
>   at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:967)
>   at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:1008)
>   at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:1004)
>   at org.apache.phoenix.util.SchemaUtil.toString(SchemaUtil.java:381)
>   at org.apache.phoenix.schema.PTableImpl.newKey(PTableImpl.java:572)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.setValues(UpsertCompiler.java:117)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.access$400(UpsertCompiler.java:98)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$3.execute(UpsertCompiler.java:821)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:319)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:311)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:309)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1432)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:808)
>   at sqlline.SqlLine.begin(SqlLine.java:681)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:292)
> 0: jdbc:phoenix:xx> SELECT * FROM TEST;
> +---------------+
> | testCharArray |
> +---------------+
> +---------------+
> No rows selected (0.169 seconds)
> 0: jdbc:phoenix:xx> SELECT "testCharArray" FROM TEST;
> +---------------+
> | testCharArray |
> +---------------+
> +---------------+
> No rows selected (0.182 seconds)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2291) Error: Could not find hash cache for joinId

2015-11-06 Thread Dan Meany (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14993511#comment-14993511
 ] 

Dan Meany commented on PHOENIX-2291:


Yes, it is a multi-tenant table.
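
For context on why this confirmation matters (the related hash-cache bug is specific to the multi-tenant code path): a multi-tenant table is declared with MULTI_TENANT=true, and per-tenant access goes through a connection that sets the TenantId property. A minimal JDBC sketch, assuming a standard jdbc:phoenix: URL; the tenant id and view name are placeholders:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class TenantConnectionSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // "TenantId" scopes the connection to a single tenant's data.
        props.setProperty("TenantId", "acme"); // placeholder tenant id
        try (Connection conn =
                DriverManager.getConnection("jdbc:phoenix:localhost", props)) {
            // Joins issued on a tenant-specific connection run against the
            // tenant view, which is the code path where the missing hash
            // cache shows up.
            conn.createStatement()
                .executeQuery("SELECT * FROM MY_TENANT_VIEW LIMIT 1"); // placeholder view
        }
    }
}
{code}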

> Error: Could not find hash cache for joinId
> ---
>
> Key: PHOENIX-2291
> URL: https://issues.apache.org/jira/browse/PHOENIX-2291
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.5.0
> Environment: Centos7 cluster, HBase 1.1, Phoenix 4.5.0
>Reporter: Dan Meany
>Assignee: Maryann Xue
>Priority: Minor
>
> Intermittently get the error below when joining two tables (~10k rows each).
> May be load-related.
> java.lang.RuntimeException: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Could not find hash cache for 
> joinId: �X�w��ZY. The cache might have expired and have been removed.
>   at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.<init>(HashJoinRegionScanner.java:96)
>   at 
> org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:213)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:179)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1316)
> Query plan looks like:
> | CLIENT 40-CHUNK PARALLEL 40-WAY FULL SCAN OVER TABLE1_IDX |
> | SERVER FILTER BY FIRST KEY ONLY  |
> | PARALLEL LEFT-JOIN TABLE 0   |
> | CLIENT 40-CHUNK PARALLEL 40-WAY FULL SCAN OVER TABLE2_IDX |
> | AFTER-JOIN SERVER FILTER BY "MP.:ID" IS NULL |



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)