[jira] [Created] (PHOENIX-6960) Scan range is wrong when query desc columns

2023-05-18 Thread Yunbo Fan (Jira)
Yunbo Fan created PHOENIX-6960:
--

 Summary: Scan range is wrong when query desc columns
 Key: PHOENIX-6960
 URL: https://issues.apache.org/jira/browse/PHOENIX-6960
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.1.3
Reporter: Yunbo Fan


Steps to reproduce
{code}
0: jdbc:phoenix:> create table sts(id integer primary key, name varchar, type integer, status integer);
No rows affected (1.259 seconds)
0: jdbc:phoenix:> create index sts_name_desc on sts(status, type desc, name desc);
No rows affected (6.376 seconds)
0: jdbc:phoenix:> create index sts_name_asc on sts(type desc, name) include (status);
No rows affected (6.377 seconds)
0: jdbc:phoenix:> upsert into sts values(1, 'test10.txt', 1, 1);
1 row affected (0.026 seconds)
0: jdbc:phoenix:>
0: jdbc:phoenix:>
0: jdbc:phoenix:> explain select * from sts where type = 1 and name like 'test10%';
+------------------------------------------------------------------------------------------------------+----------------+---------------+-------------+
| PLAN                                                                                                 | EST_BYTES_READ | EST_ROWS_READ | EST_INFO_TS |
+------------------------------------------------------------------------------------------------------+----------------+---------------+-------------+
| CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER STS_NAME_ASC [~1,'test10'] - [~1,'test11'] | null           | null          | null        |
+------------------------------------------------------------------------------------------------------+----------------+---------------+-------------+
1 row selected (0.023 seconds)
0: jdbc:phoenix:> select * from sts where type = 1 and name like 'test10%';
+++--++
| ID |NAME| TYPE | STATUS |
+++--++
| 1  | test10.txt | 1| 1  |
+++--++
1 row selected (0.033 seconds)
0: jdbc:phoenix:> explain select * from sts where status = 1 and type = 1 and name like 'test10%';
+-------------------------------------------------------------------------------------------------------------+----------------+---------------+-------------+
| PLAN                                                                                                        | EST_BYTES_READ | EST_ROWS_READ | EST_INFO_TS |
+-------------------------------------------------------------------------------------------------------------+----------------+---------------+-------------+
| CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER STS_NAME_DESC [1,~1,~'test10'] - [1,~1,~'test1/'] | null           | null          | null        |
| SERVER FILTER BY FIRST KEY ONLY AND "NAME" LIKE 'test10%'                                                   | null           | null          | null        |
+-------------------------------------------------------------------------------------------------------------+----------------+---------------+-------------+
2 rows selected (0.022 seconds)
0: jdbc:phoenix:> select * from sts where status = 1 and type = 1 and name like 'test10%';
++--+--++
| ID | NAME | TYPE | STATUS |
++--+--++
++--+--++
No rows selected (0.04 seconds)
{code}
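For context, the `~` in the plans above marks a DESC-encoded key part. The sketch below is a simplified model of DESC storage, an assumption for illustration rather than Phoenix's actual serializer: every byte is inverted and the usual 0x00 varchar separator becomes 0xFF, which reverses byte-wise key order. A LIKE 'test10%' range over such a column must therefore be built from inverted bytes with the start and end keys swapped, and getting either step wrong can produce a range that excludes matching rows, as the empty result above shows.

```python
# Simplified model of a DESC varchar key column (assumption, not the real
# Phoenix code): invert every byte and use an inverted (0xFF) trailing
# separator so that byte-wise ordering is reversed.
def encode_desc(value: bytes) -> bytes:
    return bytes(0xFF - b for b in value) + b"\xff"

rows = [b"test10", b"test10.txt", b"test11"]
# Raw bytes sort ascending...
assert sorted(rows) == rows
# ...and the encoded form sorts in exactly the reverse order, including
# the prefix case 'test10' vs 'test10.txt' (handled by the separator).
assert sorted(rows, key=encode_desc) == list(reversed(rows))

# For LIKE 'test10%' the ASC range is ['test10', 'test11'); under this
# DESC model the bounds must be inverted *and* swapped to cover the row:
lo, hi = encode_desc(b"test11"), encode_desc(b"test10")
assert lo < encode_desc(b"test10.txt") < hi
```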



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-6911) Upgrade to 5.1.3 from 5.1.2 failed caused by not able to reportForDuty

2023-03-19 Thread Yunbo Fan (Jira)


 [ https://issues.apache.org/jira/browse/PHOENIX-6911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yunbo Fan resolved PHOENIX-6911.

Resolution: Not A Problem

> Upgrade to 5.1.3 from 5.1.2 failed caused by not able to reportForDuty 
> ---
>
> Key: PHOENIX-6911
> URL: https://issues.apache.org/jira/browse/PHOENIX-6911
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.3
>Reporter: Yunbo Fan
>Priority: Critical
>
> When upgrading from 5.1.2 to 5.1.3, the region server can't report for duty to the master:
> {code}
> org.apache.hbase.thirdparty.com.google.protobuf.ServiceException: org.apache.hadoop.hbase.exceptions.ConnectionClosedException: Call to hmasterxx failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosedException: Connection closed
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:324)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$200(AbstractRpcClient.java:91)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:571)
> at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.regionServerStartup(RegionServerStatusProtos.java:15834)
> at org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:2709)
> at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:995)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosedException: Call to hmasterxx failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosedException: Connection closed
> at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:201)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:378)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:91)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:409)
> at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:405)
> at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:117)
> at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:132)
> at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.cleanupCalls(NettyRpcDuplexHandler.java:203)
> at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelInactive(NettyRpcDuplexHandler.java:211)
> at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:242)
> at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:228)
> at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:221)
> at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:390)
> at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:355)
> at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:242)
> at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:228)
> at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:221)
> at org.apache.hbase.thirdparty.io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
> at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelInactive(IdleStateHandler.java:277)
> at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:242)
> at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:228)
> {code}





[jira] [Created] (PHOENIX-6911) Upgrade to 5.1.3 from 5.1.2 failed caused by not able to reportForDuty

2023-03-15 Thread Yunbo Fan (Jira)
Yunbo Fan created PHOENIX-6911:
--

 Summary: Upgrade to 5.1.3 from 5.1.2 failed caused by not able to reportForDuty
 Key: PHOENIX-6911
 URL: https://issues.apache.org/jira/browse/PHOENIX-6911
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.1.3
Reporter: Yunbo Fan


When upgrading from 5.1.2 to 5.1.3, the region server can't report for duty to the master:
{code}
org.apache.hbase.thirdparty.com.google.protobuf.ServiceException: org.apache.hadoop.hbase.exceptions.ConnectionClosedException: Call to hmasterxx failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosedException: Connection closed
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:324)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$200(AbstractRpcClient.java:91)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:571)
at org.apache.hadoop.hbase.shaded.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.regionServerStartup(RegionServerStatusProtos.java:15834)
at org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:2709)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:995)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosedException: Call to hmasterxx failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosedException: Connection closed
at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:201)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:378)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:91)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:409)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:405)
at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:117)
at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:132)
at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.cleanupCalls(NettyRpcDuplexHandler.java:203)
at org.apache.hadoop.hbase.ipc.NettyRpcDuplexHandler.channelInactive(NettyRpcDuplexHandler.java:211)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:242)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:228)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:221)
at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:390)
at org.apache.hbase.thirdparty.io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:355)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:242)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:228)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.fireChannelInactive(AbstractChannelHandlerContext.java:221)
at org.apache.hbase.thirdparty.io.netty.channel.ChannelInboundHandlerAdapter.channelInactive(ChannelInboundHandlerAdapter.java:75)
at org.apache.hbase.thirdparty.io.netty.handler.timeout.IdleStateHandler.channelInactive(IdleStateHandler.java:277)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:242)
at org.apache.hbase.thirdparty.io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:228)
{code}





[jira] [Updated] (PHOENIX-6897) Aggregate on unverified index rows return wrong result

2023-03-07 Thread Yunbo Fan (Jira)


 [ https://issues.apache.org/jira/browse/PHOENIX-6897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yunbo Fan updated PHOENIX-6897:
---
Description: 
h4. Summary:
Upsert has three phases; if it fails after phase 1, unverified index rows are left in the index table. This causes wrong results for aggregate queries.
h4. Steps to reproduce
1. Create the table and index
{code}
create table students(id integer primary key, name varchar, status integer);
create index students_name_index on students(name, id) include (status);
{code}
2. Upsert data using Phoenix
{code}
upsert into students values(1, 'tom', 1);
upsert into students values(2, 'jerry', 2);
{code}
3. Reproduce a phase-1-only write by hand in the hbase shell: set the status column value to 2 and the verified column ('0:_0') to "\x02" (unverified)
{code}
put 'STUDENTS_NAME_INDEX', "tom\x00\x80\x00\x00\x01", '0:0:STATUS', "\x80\x00\x00\x02"
put 'STUDENTS_NAME_INDEX', "tom\x00\x80\x00\x00\x01", '0:_0', "\x02"
{code}
Note: the hbase shell can't parse a colon inside a column qualifier such as '0:0:STATUS'; you may need to comment out a line in hbase/lib/ruby/hbase/table.rb, see https://issues.apache.org/jira/browse/HBASE-13788
{code}
# Returns family and (when has it) qualifier for a column name
def parse_column_name(column)
  split = org.apache.hadoop.hbase.KeyValue.parseColumn(column.to_java_bytes)
  # set_converter(split) if split.length > 1   <- comment this line out
  return split[0], (split.length > 1) ? split[1] : nil
end
{code}
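The byte strings in the put commands above follow Phoenix's serialized INTEGER format: the 4-byte big-endian value with its sign bit flipped, so that encoded values compare in numeric order as unsigned bytes (1 becomes "\x80\x00\x00\x01", 2 becomes "\x80\x00\x00\x02"). A small sketch of that encoding, simplified rather than Phoenix's actual code:

```python
import struct

def encode_int(v: int) -> bytes:
    """Serialize an int as 4-byte big-endian with the sign bit flipped,
    matching the literals used in the hbase shell puts above."""
    buf = bytearray(struct.pack(">i", v))
    buf[0] ^= 0x80  # flip the sign bit
    return bytes(buf)

assert encode_int(1) == b"\x80\x00\x00\x01"
assert encode_int(2) == b"\x80\x00\x00\x02"
# Sign-bit flipping keeps unsigned byte order consistent with numeric order.
assert encode_int(-1) < encode_int(0) < encode_int(1)
```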
4. Run a query without aggregation; the result is correct
{code}
0: jdbc:phoenix:> select status from students where name = 'tom';
++
| STATUS |
++
| 1  |
++
{code}
5. Run an aggregate query; it returns a wrong result
{code}
0: jdbc:phoenix:> select count(*) from students where name = 'tom' and status = 1;
+--+
| COUNT(1) |
+--+
| 0|
+--+
{code}
6. With the NO_INDEX hint, the correct result comes back
{code}
0: jdbc:phoenix:> select /*+ NO_INDEX */ count(*) from students where name = 'tom' and status = 1;
+--+
| COUNT(1) |
+--+
| 1|
+--+
{code}
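Steps 4-6 are consistent with how the verified flag is treated on the read path: a plain index lookup repairs unverified rows against the data table before returning them, while the aggregate path here counts the stale index row directly. The toy model below is an illustration under that assumption, not Phoenix's actual code:

```python
# State after step 3: the index row is stale (status 2 instead of 1)
# and flagged unverified.
DATA = {1: {"name": "tom", "status": 1}}
INDEX = [(1, 2, False)]  # (pk, status as stored in the index, verified flag)

def lookup(status):
    """Lookup path: unverified rows are repaired from the data table."""
    results = []
    for pk, idx_status, verified in INDEX:
        if not verified:
            idx_status = DATA[pk]["status"]  # read repair
        if idx_status == status:
            results.append(pk)
    return results

def broken_count(status):
    """Aggregate path that trusts index rows as-is, mirroring the bug."""
    return sum(1 for _, s, _ in INDEX if s == status)

assert lookup(1) == [1]      # step 4: the correct row comes back
assert broken_count(1) == 0  # step 5: wrong count from the stale row
```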

  was:
h4. Summary:
Upsert has three phases; if it fails after phase 1, unverified index rows are left in the index table. This causes wrong results for aggregate queries.
h4. Steps to reproduce
1. Create the table and index


> Aggregate on unverified index rows return wrong result
> --
>
> Key: PHOENIX-6897
> URL: https://issues.apache.org/jira/browse/PHOENIX-6897
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.2
>Reporter: Yunbo Fan
>Priority: Major
>
> h4. Summary:
> Upsert has three phases; if it fails after phase 1, unverified index rows are left in the index table. This causes wrong results for aggregate queries.
> h4. Steps to reproduce
> 1. Create the table and index
> {code}
> create table students(id integer primary key, name varchar, status integer);
> create index students_name_index on students(name, id) include (status);
> {code}
> 2. Upsert data using Phoenix
> {code}
> upsert into students values(1, 'tom', 1);
> upsert into students values(2, 'jerry', 2);
> {code}
> 3. Reproduce a phase-1-only write by hand in the hbase shell: set the status column value to 2 and the verified column ('0:_0') to "\x02" (unverified)
> {code}
> put 'STUDENTS_NAME_INDEX', "tom\x00\x80\x00\x00\x01", '0:0:STATUS', "\x80\x00\x00\x02"
> put 'STUDENTS_NAME_INDEX', "tom\x00\x80\x00\x00\x01", '0:_0', "\x02"
> {code}
> Note: the hbase shell can't parse a colon inside a column qualifier such as '0:0:STATUS'; you may need to comment out a line in hbase/lib/ruby/hbase/table.rb, see https://issues.apache.org/jira/browse/HBASE-13788
> {code}
> # Returns family and (when has it) qualifier for a column name
> def parse_column_name(column)
>   split = org.apache.hadoop.hbase.KeyValue.parseColumn(column.to_java_bytes)
>   # set_converter(split) if split.length > 1   <- comment this line out
>   return split[0], (split.length > 1) ? split[1] : nil
> end
> {code}
> 4. Run a query without aggregation; the result is correct
> {code}
> 0: jdbc:phoenix:> select status from students where name = 'tom';
> ++
> | STATUS |
> ++
> | 1  |
> ++
> {code}
> 5. Run an aggregate query; it returns a wrong result
> {code}
> 0: jdbc:phoenix:> select count(*) from students where name = 'tom' and status = 1;
> +--+
> | COUNT(1) |
> +--+
> | 0|
> +--+
> {code}
> 6. With the NO_INDEX hint, the correct result comes back
> {code}
> 0: jdbc:phoenix:> select /*+ NO_INDEX */ count(*) from students where name = 'tom' and status = 1;
> +--+
> | COUNT(1) |
> +--+
> | 1|
> +--+
> {code}





[jira] [Created] (PHOENIX-6897) Aggregate on unverified index rows return wrong result

2023-03-07 Thread Yunbo Fan (Jira)
Yunbo Fan created PHOENIX-6897:
--

 Summary: Aggregate on unverified index rows return wrong result
 Key: PHOENIX-6897
 URL: https://issues.apache.org/jira/browse/PHOENIX-6897
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.1.2
Reporter: Yunbo Fan


h4. Summary:
Upsert has three phases; if it fails after phase 1, unverified index rows are left in the index table. This causes wrong results for aggregate queries.
h4. Steps to reproduce
1. Create the table and index


