[jira] [Updated] (PHOENIX-4918) Apache Phoenix website Grammar page is running on an very old version

2018-09-22 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4918:
-----------------------------
Description: 
For example, this example query is incorrect: CREATE TABLE my_schema.my_table (
id BIGINT not null primary key, date)

[https://phoenix.apache.org/language/index.html]

I checked the master branch and the 4.x branch; the code there is correct,
meaning the website is using a very old version of phoenix.csv.

Is there any plan to update it? Thanks.

 FYI [~karanmehta93]   @Thomas
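A minimal sketch of a corrected form (the JDBC URL is illustrative, and giving
the date column the DATE type is an assumption about the intended example; it
is quoted because DATE is also a Phoenix type keyword):

{noformat}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Hypothetical sketch of a corrected version of the website's example,
// executed through the Phoenix JDBC driver.
public class GrammarPageExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // The published example omits a data type for the second column;
            // every non-PK column needs one, e.g. DATE.
            stmt.execute("CREATE TABLE my_schema.my_table ("
                    + " id BIGINT NOT NULL PRIMARY KEY,"
                    + " \"date\" DATE)");
        }
    }
}
{noformat}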

 

  was:
For example, this example query is incorrect: CREATE TABLE my_schema.my_table (
id BIGINT not null primary key, date)

[https://phoenix.apache.org/language/index.html]

I checked the master branch and the 4.x branch; the code there is correct,
meaning the website is using a very old version of phoenix.csv.

Is there any plan to update it? Thanks.

 

[~karanmehta93] 

 


> Apache Phoenix website Grammar page is running on an very old version
> ----------------------------------------------------------------------
>
> Key: PHOENIX-4918
> URL: https://issues.apache.org/jira/browse/PHOENIX-4918
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Xu Cang
>Priority: Trivial
>
> For example, this example query is incorrect: CREATE TABLE my_schema.my_table
> ( id BIGINT not null primary key, date)
>  
> [https://phoenix.apache.org/language/index.html]
> I checked the master branch and the 4.x branch; the code there is correct,
> meaning the website is using a very old version of phoenix.csv.
> Is there any plan to update it? Thanks.
>  FYI [~karanmehta93]   @Thomas
>  





[jira] [Updated] (PHOENIX-4918) Apache Phoenix website Grammar page is running on an very old version

2018-09-22 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4918:
-----------------------------
Description: 
For example, this example query is incorrect: CREATE TABLE my_schema.my_table (
id BIGINT not null primary key, date)

[https://phoenix.apache.org/language/index.html]

I checked the master branch and the 4.x branch; the code there is correct,
meaning the website is using a very old version of phoenix.csv.

Is there any plan to update it? Thanks.

 FYI [~karanmehta93]  

 

  was:
For example, this example query is incorrect: CREATE TABLE my_schema.my_table (
id BIGINT not null primary key, date)

[https://phoenix.apache.org/language/index.html]

I checked the master branch and the 4.x branch; the code there is correct,
meaning the website is using a very old version of phoenix.csv.

Is there any plan to update it? Thanks.

 FYI [~karanmehta93]   @Thomas

 


> Apache Phoenix website Grammar page is running on an very old version
> ----------------------------------------------------------------------
>
> Key: PHOENIX-4918
> URL: https://issues.apache.org/jira/browse/PHOENIX-4918
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Xu Cang
>Priority: Trivial
>
> For example, this example query is incorrect: CREATE TABLE my_schema.my_table
> ( id BIGINT not null primary key, date)
>  
> [https://phoenix.apache.org/language/index.html]
> I checked the master branch and the 4.x branch; the code there is correct,
> meaning the website is using a very old version of phoenix.csv.
> Is there any plan to update it? Thanks.
>  FYI [~karanmehta93]  
>  





[jira] [Updated] (PHOENIX-4918) Apache Phoenix website Grammar page is running on an old version

2018-09-22 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated PHOENIX-4918:
-----------------------------
Summary: Apache Phoenix website Grammar page is running on an old version  
(was: Apache Phoenix website Grammar page is running on an very old version)

> Apache Phoenix website Grammar page is running on an old version
> -----------------------------------------------------------------
>
> Key: PHOENIX-4918
> URL: https://issues.apache.org/jira/browse/PHOENIX-4918
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Xu Cang
>Priority: Trivial
>
> For example, this example query is incorrect: CREATE TABLE my_schema.my_table
> ( id BIGINT not null primary key, date)
>  
> [https://phoenix.apache.org/language/index.html]
> I checked the master branch and the 4.x branch; the code there is correct,
> meaning the website is using a very old version of phoenix.csv.
> Is there any plan to update it? Thanks.
>  FYI [~karanmehta93]  
>  





[jira] [Created] (PHOENIX-4918) Apache Phoenix website Grammar page is running on an very old version

2018-09-22 Thread Xu Cang (JIRA)
Xu Cang created PHOENIX-4918:
-----------------------------

 Summary: Apache Phoenix website Grammar page is running on an very 
old version
 Key: PHOENIX-4918
 URL: https://issues.apache.org/jira/browse/PHOENIX-4918
 Project: Phoenix
  Issue Type: Bug
Reporter: Xu Cang


For example, this example query is incorrect: CREATE TABLE my_schema.my_table (
id BIGINT not null primary key, date)

[https://phoenix.apache.org/language/index.html]

I checked the master branch and the 4.x branch; the code there is correct,
meaning the website is using a very old version of phoenix.csv.

Is there any plan to update it? Thanks.

 

[~karanmehta93] 

 





[jira] [Updated] (PHOENIX-3163) Split during global index creation may cause ERROR 201 error

2018-09-22 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-3163:
-----------------------------------
Comment: was deleted

(was: Also, can someone remind me why region splits are bad? So bad that we
check on the server whether the Scan's start/stop keys match the region's
beginning and end keys...?)

> Split during global index creation may cause ERROR 201 error
> -------------------------------------------------------------
>
> Key: PHOENIX-3163
> URL: https://issues.apache.org/jira/browse/PHOENIX-3163
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-3163_addendum1.patch, PHOENIX-3163_v1.patch, 
> PHOENIX-3163_v3.patch, PHOENIX-3163_v4.patch, PHOENIX-3163_v5.patch, 
> PHOENIX-3163_v6.patch
>
>
> When we create global index and split happen meanwhile there is a chance to 
> fail with ERROR 201:
> {noformat}
> 2016-08-08 15:55:17,248 INFO  [Thread-6] 
> org.apache.phoenix.iterate.BaseResultIterators(878): Failed to execute task 
> during cancel
> java.util.concurrent.ExecutionException: java.sql.SQLException: ERROR 201 
> (22000): Illegal data.
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.close(BaseResultIterators.java:872)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:809)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:713)
>   at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
>   at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$2.execute(UpsertCompiler.java:815)
>   at 
> org.apache.phoenix.compile.DelegateMutationPlan.execute(DelegateMutationPlan.java:31)
>   at 
> org.apache.phoenix.compile.PostIndexDDLCompiler$1.execute(PostIndexDDLCompiler.java:124)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:2823)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1079)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1382)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:343)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:331)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:330)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1440)
>   at 
> org.apache.phoenix.hbase.index.write.TestIndexWriter$1.run(TestIndexWriter.java:93)
> Caused by: java.sql.SQLException: ERROR 201 (22000): Illegal data.
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:441)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
>   at 
> org.apache.phoenix.schema.types.PDataType.newIllegalDataException(PDataType.java:287)
>   at 
> org.apache.phoenix.schema.types.PUnsignedSmallint$UnsignedShortCodec.decodeShort(PUnsignedSmallint.java:146)
>   at 
> org.apache.phoenix.schema.types.PSmallint.toObject(PSmallint.java:104)
>   at org.apache.phoenix.schema.types.PSmallint.toObject(PSmallint.java:28)
>   at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:980)
>   at 
> org.apache.phoenix.schema.types.PUnsignedSmallint.toObject(PUnsignedSmallint.java:102)
>   at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:980)
>   at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:992)
>   at 
> org.apache.phoenix.schema.types.PDataType.coerceBytes(PDataType.java:830)
>   at 
> org.apache.phoenix.schema.types.PDecimal.coerceBytes(PDecimal.java:342)
>   at 
> org.apache.phoenix.schema.types.PDataType.coerceBytes(PDataType.java:810)
>   at 
> org.apache.phoenix.expression.CoerceExpression.evaluate(CoerceExpression.java:149)
>   at 
> org.apache.phoenix.compile.ExpressionProjector.getValue(ExpressionProjector.java:69)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.getBytes(PhoenixResultSet.java:308)
>   at 
> org.apache.phoenix.compile.UpsertCompiler.upsertSelect(UpsertCompiler.java:197)
>   at 
> 

[jira] [Updated] (PHOENIX-4917) ClassCastException when projecting array elements in hash join

2018-09-22 Thread Gerald Sangudi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gerald Sangudi updated PHOENIX-4917:
------------------------------------
Description: 
This bug was introduced in the fix for PHOENIX-4791.

When projecting array elements in a hash join, we now generate both
ProjectedValueTuple and MultiKeyValueTuple. Before the fix for PHOENIX-4791,
hash join only generated ProjectedValueTuple, and two lines of code contained
class casts that reflected this assumption. The fix is to handle both
ProjectedValueTuple and MultiKeyValueTuple, while continuing to propagate the
array cell as in PHOENIX-4791.
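A minimal sketch of the shape of that fix (a hypothetical helper, not the
actual patch; only the tuple class names and TupleProjector.projectResults
come from Phoenix):

{noformat}
import org.apache.phoenix.execute.TupleProjector;
import org.apache.phoenix.execute.TupleProjector.ProjectedValueTuple;
import org.apache.phoenix.schema.tuple.Tuple;

// Hypothetical sketch: the pre-fix code cast every tuple to
// ProjectedValueTuple, which throws ClassCastException when a
// MultiKeyValueTuple arrives. Branching on the runtime type handles both.
final class TupleCastSketch {
    static ProjectedValueTuple asProjected(Tuple tuple, TupleProjector projector) {
        if (tuple instanceof ProjectedValueTuple) {
            // Already projected: the old unconditional cast was safe here.
            return (ProjectedValueTuple) tuple;
        }
        // Any other tuple type (e.g. MultiKeyValueTuple) is re-projected
        // rather than cast.
        return projector.projectResults(tuple);
    }
}
{noformat}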

 

The stack trace with the ClassCastException:

Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: GENO_DOSE,,1537598769044.1a6cb8853b036c59e7515d8e876e28c5.: org.apache.phoenix.schema.tuple.MultiKeyValueTuple cannot be cast to org.apache.phoenix.execute.TupleProjector$ProjectedValueTuple
    at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
    at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
    at org.apache.phoenix.coprocessor.HashJoinRegionScanner.nextRaw(HashJoinRegionScanner.java:300)
    at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
    at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
    at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:294)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2633)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2837)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
Caused by: java.lang.ClassCastException: org.apache.phoenix.schema.tuple.MultiKeyValueTuple cannot be cast to org.apache.phoenix.execute.TupleProjector$ProjectedValueTuple
    at org.apache.phoenix.coprocessor.HashJoinRegionScanner.processResults(HashJoinRegionScanner.java:220)
    at org.apache.phoenix.coprocessor.HashJoinRegionScanner.nextRaw(HashJoinRegionScanner.java:294)

  was:
This bug was introduced in the fix for PHOENIX-4791.

When projecting array elements in a hash join, we now generate both
ProjectedValueTuple and MultiKeyValueTuple. Previously, we only generated
ProjectedValueTuple, and two lines of code contained class casts that
reflected this assumption. The fix is to merge into the MultiKeyValueTuple,
while propagating the array cell as in PHOENIX-4791.

 

The stack trace with the ClassCastException:

Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: GENO_DOSE,,1537598769044.1a6cb8853b036c59e7515d8e876e28c5.: org.apache.phoenix.schema.tuple.MultiKeyValueTuple cannot be cast to org.apache.phoenix.execute.TupleProjector$ProjectedValueTuple
    at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
    at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
    at org.apache.phoenix.coprocessor.HashJoinRegionScanner.nextRaw(HashJoinRegionScanner.java:300)
    at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
    at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
    at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:294)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2633)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2837)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
Caused by: java.lang.ClassCastException: org.apache.phoenix.schema.tuple.MultiKeyValueTuple cannot be cast to org.apache.phoenix.execute.TupleProjector$ProjectedValueTuple
    at org.apache.phoenix.coprocessor.HashJoinRegionScanner.processResults(HashJoinRegionScanner.java:220)
    at org.apache.phoenix.coprocessor.HashJoinRegionScanner.nextRaw(HashJoinRegionScanner.java:294)


> 

[jira] [Updated] (PHOENIX-4916) When collecting statistics, the estimated size of a guide post may only count part of cells of the last row

2018-09-22 Thread Bin Shi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bin Shi updated PHOENIX-4916:
-----------------------------
Description: 
In DefaultStatisticsCollector.collectStatistics(...), the collector iterates
over all cells of the current row; once the accumulated estimated size plus
the size of the current cell reaches the guide post width, it skips all the
remaining cells. The result is that the estimated size of a guide post may
count only part of the cells of the last row.

This problem can be ignored in clusters with real data, where the guide post
width is much bigger than the row size, but it does have an impact on unit
and integration tests, because we use a very small guide post width in tests,
which makes the estimated size of the query inaccurate.
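For illustration, a self-contained sketch of row-aligned accumulation
(hypothetical names and simplified cell sizes; the real logic lives in
DefaultStatisticsCollector.collectStatistics):

{noformat}
import java.util.List;

// Hypothetical sketch: closing a guide post mid-row undercounts the last
// row, so accumulate the whole row first and only then compare the total
// against the guide post width.
final class GuidePostSizeSketch {
    static long countGuidePosts(List<List<Long>> rowsOfCellSizes, long guidePostWidth) {
        long accumulated = 0;
        long guidePosts = 0;
        for (List<Long> row : rowsOfCellSizes) {
            // Add every cell of the current row before checking the width,
            // so the estimate never counts only part of a row.
            for (long cellSize : row) {
                accumulated += cellSize;
            }
            if (accumulated >= guidePostWidth) {
                guidePosts++;     // close this guide post on a row boundary
                accumulated = 0;  // start estimating the next one
            }
        }
        return guidePosts;
    }
}
{noformat}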

  was:
In DefaultStatisticsCollector.collectStatistics(...), it iterate all cells of 
the current row, once the accumulated estimated size plus the size of the 
current cell >= guide post width, it skipped all the remaining cells. The 
result is that  he estimated size of a guide post may only count part of cells 
of the last row.

This problem can be ignored in clusters with real data where the guide post 
width is much bigger than the row size, but it does have impact on unit test 
and iteration test, because we use very small guide post width in the test 
which results in inaccuracy of the estimated size of the query.


> When collecting statistics, the estimated size of a guide post may only count 
> part of cells of the last row
> -------------------------------------------------------------------------------------------------------------
>
> Key: PHOENIX-4916
> URL: https://issues.apache.org/jira/browse/PHOENIX-4916
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Bin Shi
>Assignee: Bin Shi
>Priority: Major
>
> In DefaultStatisticsCollector.collectStatistics(...), the collector iterates
> over all cells of the current row; once the accumulated estimated size plus
> the size of the current cell reaches the guide post width, it skips all the
> remaining cells. The result is that the estimated size of a guide post may
> count only part of the cells of the last row.
> This problem can be ignored in clusters with real data, where the guide post
> width is much bigger than the row size, but it does have an impact on unit
> and integration tests, because we use a very small guide post width in tests,
> which makes the estimated size of the query inaccurate.





[jira] [Updated] (PHOENIX-4917) ClassCastException when projecting array elements in hash join

2018-09-22 Thread Gerald Sangudi (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gerald Sangudi updated PHOENIX-4917:
------------------------------------
Description: 
This bug was introduced in the fix for PHOENIX-4791.

When projecting array elements in a hash join, we now generate both
ProjectedValueTuple and MultiKeyValueTuple. Previously, we only generated
ProjectedValueTuple, and two lines of code contained class casts that
reflected this assumption. The fix is to merge into the MultiKeyValueTuple,
while propagating the array cell as in PHOENIX-4791.

 

The stack trace with the ClassCastException:

Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: GENO_DOSE,,1537598769044.1a6cb8853b036c59e7515d8e876e28c5.: org.apache.phoenix.schema.tuple.MultiKeyValueTuple cannot be cast to org.apache.phoenix.execute.TupleProjector$ProjectedValueTuple
    at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
    at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
    at org.apache.phoenix.coprocessor.HashJoinRegionScanner.nextRaw(HashJoinRegionScanner.java:300)
    at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
    at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
    at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:294)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2633)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2837)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
Caused by: java.lang.ClassCastException: org.apache.phoenix.schema.tuple.MultiKeyValueTuple cannot be cast to org.apache.phoenix.execute.TupleProjector$ProjectedValueTuple
    at org.apache.phoenix.coprocessor.HashJoinRegionScanner.processResults(HashJoinRegionScanner.java:220)
    at org.apache.phoenix.coprocessor.HashJoinRegionScanner.nextRaw(HashJoinRegionScanner.java:294)

  was:
This bug was introduced in the fix for 
https://issues.apache.org/jira/browse/PHOENIX-4791.

When projecting array elements in a hash join, we now generate both
ProjectedValueTuple and MultiKeyValueTuple. Previously, we only generated
ProjectedValueTuple, and two lines of code contained class casts that
reflected this assumption. The fix is to merge into the MultiKeyValueTuple,
while propagating the array cell as in PHOENIX-4791.

 

The stack trace with the ClassCastException:

Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: GENO_DOSE,,1537598769044.1a6cb8853b036c59e7515d8e876e28c5.: org.apache.phoenix.schema.tuple.MultiKeyValueTuple cannot be cast to org.apache.phoenix.execute.TupleProjector$ProjectedValueTuple
    at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
    at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
    at org.apache.phoenix.coprocessor.HashJoinRegionScanner.nextRaw(HashJoinRegionScanner.java:300)
    at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
    at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
    at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:294)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2633)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2837)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
Caused by: java.lang.ClassCastException: org.apache.phoenix.schema.tuple.MultiKeyValueTuple cannot be cast to org.apache.phoenix.execute.TupleProjector$ProjectedValueTuple
    at org.apache.phoenix.coprocessor.HashJoinRegionScanner.processResults(HashJoinRegionScanner.java:220)
    at org.apache.phoenix.coprocessor.HashJoinRegionScanner.nextRaw(HashJoinRegionScanner.java:294)


> ClassCastException 

[jira] [Created] (PHOENIX-4917) ClassCastException when projecting array elements in hash join

2018-09-22 Thread Gerald Sangudi (JIRA)
Gerald Sangudi created PHOENIX-4917:
------------------------------------

 Summary: ClassCastException when projecting array elements in hash 
join
 Key: PHOENIX-4917
 URL: https://issues.apache.org/jira/browse/PHOENIX-4917
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.15.0, 4.14.1
Reporter: Gerald Sangudi
Assignee: Gerald Sangudi
 Fix For: 4.15.0, 4.14.1


This bug was introduced in the fix for 
https://issues.apache.org/jira/browse/PHOENIX-4791.

When projecting array elements in a hash join, we now generate both
ProjectedValueTuple and MultiKeyValueTuple. Previously, we only generated
ProjectedValueTuple, and two lines of code contained class casts that
reflected this assumption. The fix is to merge into the MultiKeyValueTuple,
while propagating the array cell as in PHOENIX-4791.

 

The stack trace with the ClassCastException:

Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.DoNotRetryIOException): org.apache.hadoop.hbase.DoNotRetryIOException: GENO_DOSE,,1537598769044.1a6cb8853b036c59e7515d8e876e28c5.: org.apache.phoenix.schema.tuple.MultiKeyValueTuple cannot be cast to org.apache.phoenix.execute.TupleProjector$ProjectedValueTuple
    at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
    at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
    at org.apache.phoenix.coprocessor.HashJoinRegionScanner.nextRaw(HashJoinRegionScanner.java:300)
    at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
    at org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
    at org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:294)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2633)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2837)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
Caused by: java.lang.ClassCastException: org.apache.phoenix.schema.tuple.MultiKeyValueTuple cannot be cast to org.apache.phoenix.execute.TupleProjector$ProjectedValueTuple
    at org.apache.phoenix.coprocessor.HashJoinRegionScanner.processResults(HashJoinRegionScanner.java:220)
    at org.apache.phoenix.coprocessor.HashJoinRegionScanner.nextRaw(HashJoinRegionScanner.java:294)


