[jira] [Assigned] (PHOENIX-7343) Support for complex types in CDC

2024-08-29 Thread Hari Krishna Dara (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Krishna Dara reassigned PHOENIX-7343:
--

Assignee: Hari Krishna Dara

> Support for complex types in CDC
> 
>
> Key: PHOENIX-7343
> URL: https://issues.apache.org/jira/browse/PHOENIX-7343
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Hari Krishna Dara
>Assignee: Hari Krishna Dara
>Priority: Major
>
> Support for the two complex types, viz. ARRAY and JSON, needs to be added for 
> CDC.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-7384) Null not handled in prepareDataTableScan of CDCGlobalIndexRegionScanner

2024-08-14 Thread Hari Krishna Dara (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Krishna Dara reassigned PHOENIX-7384:
--

Assignee: Hari Krishna Dara

> Null not handled in prepareDataTableScan of CDCGlobalIndexRegionScanner
> ---
>
> Key: PHOENIX-7384
> URL: https://issues.apache.org/jira/browse/PHOENIX-7384
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Saurabh Rai
>Assignee: Hari Krishna Dara
>Priority: Major
>
> Null not handled in prepareDataTableScan of CDCGlobalIndexRegionScanner
> {quote}Caused by: java.lang.NullPointerException at 
> org.apache.phoenix.util.CDCUtil.setupScanForCDC(CDCUtil.java:98) at 
> org.apache.phoenix.coprocessor.CDCGlobalIndexRegionScanner.prepareDataTableScan(CDCGlobalIndexRegionScanner.java:99)
>  at 
> org.apache.phoenix.coprocessor.UncoveredGlobalIndexRegionScanner.scanDataRows(UncoveredGlobalIndexRegionScanner.java:134)
>  at 
> org.apache.phoenix.coprocessor.UncoveredGlobalIndexRegionScanner$1.call(UncoveredGlobalIndexRegionScanner.java:177)
>  at 
> org.apache.phoenix.coprocessor.UncoveredGlobalIndexRegionScanner$1.call(UncoveredGlobalIndexRegionScanner.java:166)
>  at 
> org.apache.phoenix.thirdparty.com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:131)
>  at 
> org.apache.phoenix.thirdparty.com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:74)
>  at 
> org.apache.phoenix.thirdparty.com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:82)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:750){quote}
> Null is being returned from this method - 
> https://github.com/apache/phoenix/blob/f1b0102301c06390c51716bebffc6ebd2eda7b19/phoenix-core-server/src/main/java/org/apache/phoenix/coprocessor/UncoveredIndexRegionScanner.java#L215
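The fix presumably needs a null guard before the data-table scan is set up. The sketch below is illustrative only, assuming a hypothetical `prepareDataTableScan`-style method; the names and signature are not the actual Phoenix API:

```java
import java.util.List;

// Hypothetical sketch of the null guard: when the index scan yields no
// data-table row keys, return null ("nothing to scan") instead of passing
// null into CDCUtil.setupScanForCDC() and triggering the NPE above.
class DataTableScanGuard {
    static String prepareDataTableScan(List<byte[]> dataRowKeys) {
        if (dataRowKeys == null || dataRowKeys.isEmpty()) {
            return null; // caller treats null as "no matching rows in this batch"
        }
        return "scan over " + dataRowKeys.size() + " keys";
    }
}
```

The caller would then skip the batch when the method returns null, mirroring how the upstream method already signals "no rows".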





[jira] [Updated] (PHOENIX-7350) Update documentation

2024-06-30 Thread Hari Krishna Dara (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Krishna Dara updated PHOENIX-7350:
---
Attachment: (was: cdc-docs.patch)

> Update documentation
> 
>
> Key: PHOENIX-7350
> URL: https://issues.apache.org/jira/browse/PHOENIX-7350
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Hari Krishna Dara
>Assignee: Hari Krishna Dara
>Priority: Major
> Attachments: cdc-docs.patch
>
>
> Update the site pages for documentation on CDC.





[jira] [Updated] (PHOENIX-7350) Update documentation

2024-06-30 Thread Hari Krishna Dara (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Krishna Dara updated PHOENIX-7350:
---
Attachment: cdc-docs.patch

> Update documentation
> 
>
> Key: PHOENIX-7350
> URL: https://issues.apache.org/jira/browse/PHOENIX-7350
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Hari Krishna Dara
>Assignee: Hari Krishna Dara
>Priority: Major
> Attachments: cdc-docs.patch
>
>
> Update the site pages for documentation on CDC.







[jira] [Assigned] (PHOENIX-7350) Update documentation

2024-06-30 Thread Hari Krishna Dara (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Krishna Dara reassigned PHOENIX-7350:
--

Assignee: Hari Krishna Dara

> Update documentation
> 
>
> Key: PHOENIX-7350
> URL: https://issues.apache.org/jira/browse/PHOENIX-7350
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Hari Krishna Dara
>Assignee: Hari Krishna Dara
>Priority: Major
>
> Update the site pages for documentation on CDC.





[jira] [Created] (PHOENIX-7350) Update documentation

2024-06-30 Thread Hari Krishna Dara (Jira)
Hari Krishna Dara created PHOENIX-7350:
--

 Summary: Update documentation
 Key: PHOENIX-7350
 URL: https://issues.apache.org/jira/browse/PHOENIX-7350
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Hari Krishna Dara


Update the site pages for documentation on CDC.





[jira] [Created] (PHOENIX-7349) Improve the error messaging when CDC index is not yet active

2024-06-28 Thread Hari Krishna Dara (Jira)
Hari Krishna Dara created PHOENIX-7349:
--

 Summary: Improve the error messaging when CDC index is not yet 
active
 Key: PHOENIX-7349
 URL: https://issues.apache.org/jira/browse/PHOENIX-7349
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Hari Krishna Dara


When you run a query against CDC while its index is still in the building 
state, you get the cryptic error below:
{quote}Error: ERROR 2014 (INT16): Row Value Constructor Offset Not Coercible to 
a Primary or Indexed RowKey. No table or index could be coerced to the PK as 
the offset. Or an uncovered index was attempted (state=INT16,code=2014)
{quote}
This situation doesn't happen for regular queries because such indexes get 
silently dropped.

 

We need to ensure a more meaningful message in this case.
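One way to do that is to inspect the index state before the query is compiled and map it to a dedicated message. The sketch below is a hypothetical illustration of that mapping, not Phoenix's actual index-state API or error codes:

```java
// Illustrative sketch: translate the CDC index state into a clear error
// message instead of surfacing the generic RVC-offset error (ERROR 2014).
// IndexState and the messages are assumptions, not the real Phoenix enum.
class CdcIndexStateMessage {
    enum IndexState { BUILDING, DISABLED, ACTIVE }

    static String errorFor(IndexState state) {
        switch (state) {
            case BUILDING:
                return "CDC index is still building; retry once it becomes active";
            case DISABLED:
                return "CDC index is disabled; rebuild it before querying CDC";
            default:
                return null; // ACTIVE: no error, proceed with the query
        }
    }
}
```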





[jira] [Updated] (PHOENIX-7348) Default INCLUDE scopes given in CREATE CDC are not getting recognized

2024-06-28 Thread Hari Krishna Dara (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Krishna Dara updated PHOENIX-7348:
---
Summary: Default INCLUDE scopes given in CREATE CDC are not getting 
recognized  (was: Default INCLUDE scopes gives in CREATE CDC are not getting 
recognized)

> Default INCLUDE scopes given in CREATE CDC are not getting recognized
> -
>
> Key: PHOENIX-7348
> URL: https://issues.apache.org/jira/browse/PHOENIX-7348
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Hari Krishna Dara
>Assignee: Hari Krishna Dara
>Priority: Minor
>
> The CREATE CDC statement allows specifying a default for the change image 
> scopes which should get used when there is no query hint, but this value is 
> not getting used. There is also no test to catch this issue.





[jira] [Created] (PHOENIX-7348) Default INCLUDE scopes gives in CREATE CDC are not getting recognized

2024-06-27 Thread Hari Krishna Dara (Jira)
Hari Krishna Dara created PHOENIX-7348:
--

 Summary: Default INCLUDE scopes gives in CREATE CDC are not 
getting recognized
 Key: PHOENIX-7348
 URL: https://issues.apache.org/jira/browse/PHOENIX-7348
 Project: Phoenix
  Issue Type: Bug
Reporter: Hari Krishna Dara
Assignee: Hari Krishna Dara


The CREATE CDC statement allows specifying a default for the change image 
scopes which should get used when there is no query hint, but this value is not 
getting used. There is also no test to catch this issue.





[jira] [Resolved] (PHOENIX-7015) Extend UncoveredGlobalIndexRegionScanner for CDC region scanner usecase

2024-06-24 Thread Hari Krishna Dara (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Krishna Dara resolved PHOENIX-7015.

Resolution: Fixed

> Extend UncoveredGlobalIndexRegionScanner for CDC region scanner usecase
> ---
>
> Key: PHOENIX-7015
> URL: https://issues.apache.org/jira/browse/PHOENIX-7015
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Priority: Major
>
> For the CDC region scanner use case, extend UncoveredGlobalIndexRegionScanner to 
> CDCUncoveredGlobalIndexRegionScanner. The new region scanner for CDC performs a 
> raw scan of the index table and retrieves data table rows from the index rows.
> Using the time range, it can form a JSON blob to represent changes to the row 
> including pre and/or post row images.
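The JSON blob described above can be pictured roughly as below. This is a hand-rolled sketch of the shape of such a change record; the field names (`timestamp`, `event`, `pre`, `post`) are assumptions, not Phoenix's actual CDC JSON schema:

```java
import java.util.Map;

// Hedged sketch of a CDC change record: a change timestamp, an event type,
// and optional pre/post row images serialized as a small JSON object.
class CdcChangeRecord {
    static String toJson(long ts, String eventType,
                         Map<String, Object> preImage,
                         Map<String, Object> postImage) {
        StringBuilder sb = new StringBuilder("{");
        sb.append("\"timestamp\":").append(ts);
        sb.append(",\"event\":\"").append(eventType).append('"');
        if (preImage != null) sb.append(",\"pre\":").append(mapJson(preImage));
        if (postImage != null) sb.append(",\"post\":").append(mapJson(postImage));
        return sb.append('}').toString();
    }

    private static String mapJson(Map<String, Object> m) {
        StringBuilder sb = new StringBuilder("{");
        boolean first = true;
        for (Map.Entry<String, Object> e : m.entrySet()) {
            if (!first) sb.append(',');
            first = false;
            sb.append('"').append(e.getKey()).append("\":").append(e.getValue());
        }
        return sb.append('}').toString();
    }
}
```

A deletion would carry only a `pre` image, while an upsert carries a `post` image (and optionally a `pre` image when the row existed before).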





[jira] [Created] (PHOENIX-7345) Support for alternative indexing scheme for CDC

2024-06-24 Thread Hari Krishna Dara (Jira)
Hari Krishna Dara created PHOENIX-7345:
--

 Summary: Support for alternative indexing scheme for CDC
 Key: PHOENIX-7345
 URL: https://issues.apache.org/jira/browse/PHOENIX-7345
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Hari Krishna Dara


When a CDC table is created, an index is created on PHOENIX_ROW_TIMESTAMP(), 
which makes it possible to run range scans efficiently on the change timestamp. 
Since indexes always include the PK columns of the data table, additional 
filtering on the data table PK columns can also be done efficiently. However, a 
use case may require filtering based on a specific order of columns that 
includes both data and PK columns, so having support for customizing the PK for 
the CDC index will be beneficial.





[jira] [Created] (PHOENIX-7344) Support for Dynamic Columns

2024-06-24 Thread Hari Krishna Dara (Jira)
Hari Krishna Dara created PHOENIX-7344:
--

 Summary: Support for Dynamic Columns
 Key: PHOENIX-7344
 URL: https://issues.apache.org/jira/browse/PHOENIX-7344
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Hari Krishna Dara


CDC recognizes changes only for columns with static metadata, which means 
Dynamic Columns are completely ignored. We need to extend the functionality so 
that SELECT queries on CDC objects also support Dynamic Columns.





[jira] [Created] (PHOENIX-7343) Support for complex types in CDC

2024-06-24 Thread Hari Krishna Dara (Jira)
Hari Krishna Dara created PHOENIX-7343:
--

 Summary: Support for complex types in CDC
 Key: PHOENIX-7343
 URL: https://issues.apache.org/jira/browse/PHOENIX-7343
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Hari Krishna Dara


Support for the two complex types, viz. ARRAY and JSON, needs to be added for 
CDC.





[jira] [Updated] (PHOENIX-7342) Optimize data table scan range based on the startRow/endRow from Scan

2024-06-24 Thread Hari Krishna Dara (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Krishna Dara updated PHOENIX-7342:
---
Description: When a time range is specified in a SELECT query on CDC, it is 
possible to optimize the scan on data table by setting the time range.  (was: 
Currently CDC can be created to use an UNCOVERED global index, but it should be 
possible to make use of a LOCAL index as well. )
Summary: Optimize data table scan range based on the startRow/endRow 
from Scan  (was: Support for using a local index type)

> Optimize data table scan range based on the startRow/endRow from Scan
> -
>
> Key: PHOENIX-7342
> URL: https://issues.apache.org/jira/browse/PHOENIX-7342
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Hari Krishna Dara
>Priority: Minor
>
> When a time range is specified in a SELECT query on CDC, it is possible to 
> optimize the scan on data table by setting the time range.
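The optimization amounts to intersecting the query's time range with the scan's existing time range so the server reads fewer cell versions. The sketch below captures that intent only; it does not use the actual HBase `Scan`/`TimeRange` API:

```java
// Hedged sketch: clamp the data table scan's time range to the CDC query's
// time range. Returns the {min, max} intersection (max exclusive), or null
// when the ranges do not overlap and the scan can be skipped entirely.
class TimeRangeClamp {
    static long[] intersect(long min1, long max1, long min2, long max2) {
        long lo = Math.max(min1, min2);
        long hi = Math.min(max1, max2);
        return lo < hi ? new long[]{lo, hi} : null;
    }
}
```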





[jira] [Created] (PHOENIX-7342) Support for using a local index type

2024-06-24 Thread Hari Krishna Dara (Jira)
Hari Krishna Dara created PHOENIX-7342:
--

 Summary: Support for using a local index type
 Key: PHOENIX-7342
 URL: https://issues.apache.org/jira/browse/PHOENIX-7342
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Hari Krishna Dara


Currently CDC can be created to use an UNCOVERED global index, but it should be 
possible to make use of a LOCAL index as well. 





[jira] [Resolved] (PHOENIX-7001) Change Data Capture leveraging Max Lookback and Uncovered Indexes

2024-06-18 Thread Hari Krishna Dara (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Krishna Dara resolved PHOENIX-7001.

Release Note: 
Change Data Capture (CDC) is a feature designed to capture changes to tables or 
updatable views in near real-time. This new functionality supports various use 
cases, including:
* Real-Time Change Retrieval: Capture and retrieve changes as they happen or 
with minimal delay.
* Flexible Time Range Queries: Perform queries based on specific time ranges, 
typically short periods such as the last few minutes, hours, or the last few 
days.
* Comprehensive Change Tracking: Track all types of changes including 
insertions, updates, and deletions. Note that CDC does not differentiate 
between inserts and updates due to Phoenix’s handling of new versus existing 
rows.

Key features of the CDC include:
* Ordered Change Delivery: Changes are delivered in the order they arrive, 
ensuring the sequence of events is maintained.
* Streamlined Integration: Changes can be visualized and delivered to 
applications similarly to how Phoenix query results are retrieved, but with 
enhancements to support multiple results for each row and inclusion of deleted 
rows.
* Detailed Change Information: Optionally capture pre and post-change images of 
rows to provide a complete picture of modifications.

This enhancement empowers applications to maintain an accurate and timely 
reflection of database changes, supporting a wide array of real-time data 
processing and monitoring scenarios.
  Resolution: Fixed

> Change Data Capture leveraging Max Lookback and Uncovered Indexes
> -
>
> Key: PHOENIX-7001
> URL: https://issues.apache.org/jira/browse/PHOENIX-7001
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Kadir Ozdemir
>Priority: Major
>
> The use cases for a Change Data Capture (CDC) feature are centered around 
> capturing changes to a given table (or updatable view) as these changes 
> happen in near real-time. A CDC application can retrieve changes in real-time 
> or with some delay, or even retrieve the same set of changes multiple times. 
> This means the CDC use case can be generalized as time range queries where 
> the time range is typically short such as last x minutes or hours or 
> expressed as a specific time range in the last n days where n is typically 
> less than 7.
> A change is an update in a row. That is, a change is either updating one or 
> more columns of a table for a given row or deleting a row. It is desirable to 
> provide these changes in the order of their arrival. One can visualize the 
> delivery of these changes through a stream from a Phoenix table to the 
> application that is initiated by the application similar to the delivery of 
> any other Phoenix query results. The difference is that a regular query 
> result includes at most one result row for each row satisfying the query and 
> the deleted rows are not visible to the query result while the CDC 
> stream/result can include multiple result rows for each row and the result 
> includes deleted rows. Some use cases need to also get the pre and/or post 
> image of the row along with a change on the row. 
> The design proposed here leverages Phoenix Max Lookback and Uncovered Global 
> Indexes. The max lookback feature retains recent changes to a table, that is, 
> the changes that have been done in the last x days typically. This means that 
> the max lookback feature already captures the changes to a given table. 
> Currently, the max lookback age is configurable at the cluster level. We need 
> to extend this capability to be able to configure the max lookback age at the 
> table level so that each table can have a different max lookback age based on 
> its CDC application requirements.
> To deliver the changes in the order of their arrival, we need a time based 
> index. This index should be uncovered as the changes are already retained in 
> the table by the max lookback feature. The arrival time will be defined as 
> the mutation timestamp generated by the server. An uncovered index would 
> allow us to efficiently and orderly access to the changes. Changes to an 
> index table are also preserved by the max lookback feature.
> A CDC feature can be composed of the following components:
>  * {*}CDCUncoveredIndexRegionScanner{*}: This is a server side scanner on an 
> uncovered index used for CDC. This can inherit UncoveredIndexRegionScanner. 
> It goes through index table rows using a raw scan to identify data table rows 
> and retrieves these rows using a raw scan. Using the time range, it forms a 
> JSON blob to represent changes to the row including pre and/or post row 
> images.
>  * {*}CDC Query Compiler{*}: This is a client side component. It prepares the 
> sca

[jira] [Created] (PHOENIX-7239) When an uncovered index has different number of salt buckets than the data table, query returns no data

2024-02-26 Thread Hari Krishna Dara (Jira)
Hari Krishna Dara created PHOENIX-7239:
--

 Summary: When an uncovered index has different number of salt 
buckets than the data table, query returns no data
 Key: PHOENIX-7239
 URL: https://issues.apache.org/jira/browse/PHOENIX-7239
 Project: Phoenix
  Issue Type: Bug
 Environment: When you use a salt bucket count for the index that is 
different from that of the data table, you get no results. As can be seen from 
the examples below, when using an index with 4 buckets (same as the data 
table), there were results, but with 1 or 2 buckets, there were none.

 

{{0: jdbc:phoenix:localhost> create table tsalt (k INTEGER PRIMARY KEY, v1 
INTEGER) SALT_BUCKETS=4;}}
{{0: jdbc:phoenix:localhost> upsert into tsalt (k, v1) VALUES (1, 100);}}
{{0: jdbc:phoenix:localhost> create uncovered index tsaltidx on tsalt 
(PHOENIX_ROW_TIMESTAMP());}}
{{select /*+ INDEX(TSALT TSALTIDX) */ * from TSALT;}}
{{+---++}}
{{| K | V1 |}}
{{+---++}}
{{+---++}}
{{No rows selected (0.059 seconds)}}
{{0: jdbc:phoenix:localhost> create uncovered index tsaltidx4 on tsalt 
(PHOENIX_ROW_TIMESTAMP());}}
{{1 row affected (6.175 seconds)}}
{{0: jdbc:phoenix:localhost> select /*+ INDEX(TSALT TSALTIDX4) */ * from 
TSALT;}}
{{+---+-+}}
{{| K | V1  |}}
{{+---+-+}}
{{| 1 | 100 |}}
{{+---+-+}}
{{1 row selected (0.035 seconds)}}
{{0: jdbc:phoenix:localhost> create uncovered index tsaltidx on tsalt2 
(PHOENIX_ROW_TIMESTAMP()) salt_buckets=2;}}
{{0: jdbc:phoenix:localhost> select /*+ INDEX(TSALT TSALTIDX2) */ * from 
TSALT;}}
{{+---++}}
{{| K | V1 |}}
{{+---++}}
{{+---++}}
{{No rows selected (0.059 seconds)}}
Reporter: Hari Krishna Dara








[jira] [Updated] (PHOENIX-7238) Queries that use an uncovered index with SALT_BUCKETS=0, we get /0 error

2024-02-26 Thread Hari Krishna Dara (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Krishna Dara updated PHOENIX-7238:
---
Summary: Queries that use an uncovered index with SALT_BUCKETS=0, we get /0 
error  (was: Zero is accepted for SALT_BUCKETS, but queries fail)

> Queries that use an uncovered index with SALT_BUCKETS=0, we get /0 error
> 
>
> Key: PHOENIX-7238
> URL: https://issues.apache.org/jira/browse/PHOENIX-7238
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Hari Krishna Dara
>Priority: Minor
>
> I have not done extensive testing on it, but when I specified 
> {{SALT_BUCKETS=0}} while creating an index, I get no error and this is a 
> valid use case to disable salting on index when the data table is salted:
> {{create table tsalt (k INTEGER PRIMARY KEY, v1 INTEGER) SALT_BUCKETS=4;}}
> {{upsert into tsalt (k, v1) VALUES (1, 100);}}
> {{create uncovered index tsaltidx on tsalt (PHOENIX_ROW_TIMESTAMP()) 
> SALT_BUCKETS=0;}}
>  
> From schema and hbase regions, it is correctly treated as no salting scenario.
>  
> {{0: jdbc:phoenix:localhost> select salt_buckets from system.catalog where 
> table_name = 'TSALTIDX' and salt_buckets is not null;}}
> +--+
> | SALT_BUCKETS |
> +--+
> +--+
> No rows selected (0.026 seconds)
>  
> {{hbase:001:0> list_regions 'TSALTIDX'}}
> {{                   SERVER_NAME |                                            
>    REGION_NAME |  START_KEY |    END_KEY |  SIZE |   REQ |   LOCALITY |}}
> {{ - | 
> - | -- | 
> -- | - | - | -- |}}
> {{ localhost,16020,1708958003582 | 
> TSALTIDX,,1708958225506.a72b20c15cecba23289a03cd6956ec15. |            |      
>       |     0 |     3 |        0.0 |}}
> {{ 1 rows}}
> However, when I query through the index, I get an {{ArithmeticError}} for 
> divide by zero.
> {{0: jdbc:phoenix:localhost> select /*+ INDEX(TSALT TSALTIDX) */ * from 
> TSALT;}}
> {{Caused by: java.lang.ArithmeticException: / by zero}}
> {{        at 
> org.apache.phoenix.schema.SaltingUtil.getSaltingByte(SaltingUtil.java:79)}}
> {{        at 
> org.apache.phoenix.index.IndexMaintainer.buildDataRowKey(IndexMaintainer.java:916)}}
> {{        at 
> org.apache.phoenix.coprocessor.UncoveredIndexRegionScanner.scanIndexTableRows(UncoveredIndexRegionScanner.java:253)}}
> {{        at 
> org.apache.phoenix.coprocessor.UncoveredIndexRegionScanner.scanIndexTableRows(UncoveredIndexRegionScanner.java:274)}}
> {{        at 
> org.apache.phoenix.coprocessor.UncoveredIndexRegionScanner.next(UncoveredIndexRegionScanner.java:382)}}
> {{        at 
> org.apache.phoenix.coprocessor.BaseRegionScanner.nextRaw(BaseRegionScanner.java:56)}}
> {{        at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:257)}}
> My suspicion is that the table cells have the bucket count stored as zero, so 
> PTableImpl for the index gets constructed to return 0 from {{getBucketNum()}} 
> and this is causing the divide-by-zero error.
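The suspected failure mode can be sketched as follows. This is a minimal illustration of a salt-byte computation with the guard the bug seems to need; the method name and the `-1` sentinel are assumptions, not `SaltingUtil`'s real contract:

```java
// Hedged sketch: a SaltingUtil-style salt byte is derived from a row key
// hash modulo the bucket count, so a stored bucket count of 0 (meaning
// "no salting") must be guarded before the modulo, or "/ by zero" results.
class SaltByteGuard {
    static int saltByte(byte[] rowKey, int bucketNum) {
        if (bucketNum <= 0) {
            return -1; // treat 0 as "unsalted" instead of computing hash % 0
        }
        int hash = java.util.Arrays.hashCode(rowKey);
        return Math.floorMod(hash, bucketNum); // always in [0, bucketNum)
    }
}
```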





[jira] [Updated] (PHOENIX-7238) Zero is accepted for SALT_BUCKETS, but queries fail

2024-02-26 Thread Hari Krishna Dara (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Krishna Dara updated PHOENIX-7238:
---
Description: 
I have not done extensive testing on it, but when I specified 
{{SALT_BUCKETS=0}} while creating an index, I get no error and this is a valid 
use case to disable salting on index when the data table is salted:

{{create table tsalt (k INTEGER PRIMARY KEY, v1 INTEGER) SALT_BUCKETS=4;}}
{{upsert into tsalt (k, v1) VALUES (1, 100);}}
{{create uncovered index tsaltidx on tsalt (PHOENIX_ROW_TIMESTAMP()) 
SALT_BUCKETS=0;}}

 

From schema and hbase regions, it is correctly treated as no salting scenario.

 

{{0: jdbc:phoenix:localhost> select salt_buckets from system.catalog where 
table_name = 'TSALTIDX' and salt_buckets is not null;}}

+--+
| SALT_BUCKETS |
+--+
+--+
No rows selected (0.026 seconds)

 

{{hbase:001:0> list_regions 'TSALTIDX'}}
{{                   SERVER_NAME |                                              
 REGION_NAME |  START_KEY |    END_KEY |  SIZE |   REQ |   LOCALITY |}}
{{ - | 
- | -- | 
-- | - | - | -- |}}
{{ localhost,16020,1708958003582 | 
TSALTIDX,,1708958225506.a72b20c15cecba23289a03cd6956ec15. |            |        
    |     0 |     3 |        0.0 |}}
{{ 1 rows}}

However, when I query through the index, I get an {{ArithmeticError}} for 
divide by zero.

{{0: jdbc:phoenix:localhost> select /*+ INDEX(TSALT TSALTIDX) */ * from 
TSALT;}}
{{Caused by: java.lang.ArithmeticException: / by zero}}
{{        at 
org.apache.phoenix.schema.SaltingUtil.getSaltingByte(SaltingUtil.java:79)}}
{{        at 
org.apache.phoenix.index.IndexMaintainer.buildDataRowKey(IndexMaintainer.java:916)}}
{{        at 
org.apache.phoenix.coprocessor.UncoveredIndexRegionScanner.scanIndexTableRows(UncoveredIndexRegionScanner.java:253)}}
{{        at 
org.apache.phoenix.coprocessor.UncoveredIndexRegionScanner.scanIndexTableRows(UncoveredIndexRegionScanner.java:274)}}
{{        at 
org.apache.phoenix.coprocessor.UncoveredIndexRegionScanner.next(UncoveredIndexRegionScanner.java:382)}}
{{        at 
org.apache.phoenix.coprocessor.BaseRegionScanner.nextRaw(BaseRegionScanner.java:56)}}
{{        at 
org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:257)}}

My suspicion is that table cells have number buckets stored as zero, so 
PTableImpl for the index gets constructed to return 0 from {{getBucketNum()}} 
and this is causing the divide by 0 error.

  was:
I have not done extensive testing on it, but when I specified 
{{SALT_BUCKETS=0}} while creating an index, I get no error and this is a valid 
use case to disable salting on index when the data table is salted:

{{create table tsalt (k INTEGER PRIMARY KEY, v1 INTEGER) SALT_BUCKETS=4;}}
{{upsert into tsalt (k, v1) VALUES (1, 100);}}
{{create uncovered index tsaltidx on tsalt (PHOENIX_ROW_TIMESTAMP()) 
SALT_BUCKETS=0;}}

 

From schema and hbase regions, it is correctly treated as no salting scenario.

 

{{{}0: jdbc:phoenix:localhost> select salt_buckets from system.catalog where 
table_name = 'TSALTIDX' and salt_buckets is not null; +-{-}{-}+ 
|SALT_BUCKETS| +-+ 
\{+}{}}}{{{}--\{+}{}}}{{{}hbase:001:0> list_regions 'TSALTIDX'{}}}
{{                   SERVER_NAME |                                              
 REGION_NAME |  START_KEY |    END_KEY |  SIZE |   REQ |   LOCALITY |}}
{{ - | 
- | -- | 
-- | - | - | -- |}}
{{ localhost,16020,1708958003582 | 
TSALTIDX,,1708958225506.a72b20c15cecba23289a03cd6956ec15. |            |        
    |     0 |     3 |        0.0 |}}
{{ 1 rows}}

However, when I query through the index, I get an {{ArithmeticError}} for 
divide by zero.

{}0: jdbc:phoenix:localhost> select /*+ INDEX(TSALT TSALTIDX) */ * from 
TSALT;{}}}{\{{}Caused by: java.lang.ArithmeticException: / by zero}}
{{        at 
org.apache.phoenix.schema.SaltingUtil.getSaltingByte(SaltingUtil.java:79)}}
{{        at 
org.apache.phoenix.index.IndexMaintainer.buildDataRowKey(IndexMaintainer.java:916)}}
{{        at 
org.apache.phoenix.coprocessor.UncoveredIndexRegionScanner.scanIndexTableRows(UncoveredIndexRegionScanner.java:253)}}
{{        at 
org.apache.phoenix.coprocessor.UncoveredIndexRegionScanner.scanIndexTableRows(UncoveredIndexRegionScanner.java:274)}}
{{        at 
org.apache.phoenix.coprocessor.UncoveredIndexRegionScanner.next(UncoveredIndexRegionScanner.java:382)}}
{{        at 
org.apache.phoenix.coprocessor.BaseRegionScanner.nextRaw(BaseRegionScanner.java:56)}}
{{        at 
org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:257)}}

My suspicion is that ta

[jira] [Updated] (PHOENIX-7238) Zero is accepted for SALT_BUCKETS, but queries fail

2024-02-26 Thread Hari Krishna Dara (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Krishna Dara updated PHOENIX-7238:
---
Description: 
I have not done extensive testing on it, but when I specified 
{{SALT_BUCKETS=0}} while creating an index, I get no error and this is a valid 
use case to disable salting on index when the data table is salted:

{{create table tsalt (k INTEGER PRIMARY KEY, v1 INTEGER) SALT_BUCKETS=4;}}
{{upsert into tsalt (k, v1) VALUES (1, 100);}}
{{create uncovered index tsaltidx on tsalt (PHOENIX_ROW_TIMESTAMP()) 
SALT_BUCKETS=0;}}

 

From schema and hbase regions, it is correctly treated as no salting scenario.

 

{{{}0: jdbc:phoenix:localhost> select salt_buckets from system.catalog where 
table_name = 'TSALTIDX' and salt_buckets is not null; +-{-}{-}+ 
|SALT_BUCKETS| +-+ 
\{+}{}}}{{{}--\{+}{}}}{{{}hbase:001:0> list_regions 'TSALTIDX'{}}}
{{                   SERVER_NAME |                                              
 REGION_NAME |  START_KEY |    END_KEY |  SIZE |   REQ |   LOCALITY |}}
{{ - | 
- | -- | 
-- | - | - | -- |}}
{{ localhost,16020,1708958003582 | 
TSALTIDX,,1708958225506.a72b20c15cecba23289a03cd6956ec15. |            |        
    |     0 |     3 |        0.0 |}}
{{ 1 rows}}

However, when I query through the index, I get an {{ArithmeticError}} for 
divide by zero.

{}0: jdbc:phoenix:localhost> select /*+ INDEX(TSALT TSALTIDX) */ * from 
TSALT;{}}}{\{{}Caused by: java.lang.ArithmeticException: / by zero}}
{{        at 
org.apache.phoenix.schema.SaltingUtil.getSaltingByte(SaltingUtil.java:79)}}
{{        at 
org.apache.phoenix.index.IndexMaintainer.buildDataRowKey(IndexMaintainer.java:916)}}
{{        at 
org.apache.phoenix.coprocessor.UncoveredIndexRegionScanner.scanIndexTableRows(UncoveredIndexRegionScanner.java:253)}}
{{        at 
org.apache.phoenix.coprocessor.UncoveredIndexRegionScanner.scanIndexTableRows(UncoveredIndexRegionScanner.java:274)}}
{{        at 
org.apache.phoenix.coprocessor.UncoveredIndexRegionScanner.next(UncoveredIndexRegionScanner.java:382)}}
{{        at 
org.apache.phoenix.coprocessor.BaseRegionScanner.nextRaw(BaseRegionScanner.java:56)}}
{{        at 
org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:257)}}

My suspicion is that table cells have number buckets stored as zero, so 
PTableImpl for the index gets constructed to return 0 from {{getBucketNum()}} 
and this is causing the divide by 0 error.

  was:
I have not done extensive testing on it, but when I specified 
{{SALT_BUCKETS=0}} while creating an index, I get no error and this is a valid 
use case to disable salting on index when the data table is salted:

{{create table tsalt (k INTEGER PRIMARY KEY, v1 INTEGER) SALT_BUCKETS=4;}}
{{upsert into tsalt (k, v1) VALUES (1, 100);}}
{{create uncovered index tsaltidx on tsalt (PHOENIX_ROW_TIMESTAMP()) 
SALT_BUCKETS=0;}}

 

From schema and hbase regions, it is correctly treated as no salting scenario.

 

0: jdbc:phoenix:localhost> select salt_buckets from system.catalog where table_name = 'TSALTIDX' and salt_buckets is not null;
+--------------+
| SALT_BUCKETS |
+--------------+
+--------------+

hbase:001:0> list_regions 'TSALTIDX'
                   SERVER_NAME |                                                REGION_NAME |  START_KEY |    END_KEY |  SIZE |   REQ |   LOCALITY |
 ----------------------------- | ---------------------------------------------------------- | ---------- | ---------- | ----- | ----- | ---------- |
 localhost,16020,1708958003582 |  TSALTIDX,,1708958225506.a72b20c15cecba23289a03cd6956ec15. |            |            |     0 |     3 |        0.0 |
 1 rows

However, when I query through the index, I get an {{ArithmeticException}} for 
divide by zero.

0: jdbc:phoenix:localhost> select /*+ INDEX(TSALT TSALTIDX) */ * from TSALT;
Caused by: java.lang.ArithmeticException: / by zero
        at org.apache.phoenix.schema.SaltingUtil.getSaltingByte(SaltingUtil.java:79)
        at org.apache.phoenix.index.IndexMaintainer.buildDataRowKey(IndexMaintainer.java:916)
        at org.apache.phoenix.coprocessor.UncoveredIndexRegionScanner.scanIndexTableRows(UncoveredIndexRegionScanner.java:253)
        at org.apache.phoenix.coprocessor.UncoveredIndexRegionScanner.scanIndexTableRows(UncoveredIndexRegionScanner.java:274)
        at org.apache.phoenix.coprocessor.UncoveredIndexRegionScanner.next(UncoveredIndexRegionScanner.java:382)
        at org.apache.phoenix.coprocessor.BaseRegionScanner.nextRaw(BaseRegionScanner.java:56)
        at org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:257)

My suspicion is that the table cells have the number of buckets stored as zero, 
so {{PTableImpl}} for the index gets constructed to return 0 from 
{{getBucketNum()}}, and this causes the divide-by-zero error.

[jira] [Updated] (PHOENIX-7238) Zero is accepted for SALT_BUCKETS, but queries fail

2024-02-26 Thread Hari Krishna Dara (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Krishna Dara updated PHOENIX-7238:
---
Description: 
I have not done extensive testing, but when I specified {{SALT_BUCKETS=0}} 
while creating an index, I got no error; this is a valid use case for disabling 
salting on an index when the data table is salted:

{{create table tsalt (k INTEGER PRIMARY KEY, v1 INTEGER) SALT_BUCKETS=4;}}
{{upsert into tsalt (k, v1) VALUES (1, 100);}}
{{create uncovered index tsaltidx on tsalt (PHOENIX_ROW_TIMESTAMP()) 
SALT_BUCKETS=0;}}

 

From the schema and the HBase regions, it is correctly treated as a no-salting scenario.

 

0: jdbc:phoenix:localhost> select salt_buckets from system.catalog where table_name = 'TSALTIDX' and salt_buckets is not null;
+--------------+
| SALT_BUCKETS |
+--------------+
+--------------+

hbase:001:0> list_regions 'TSALTIDX'
                   SERVER_NAME |                                                REGION_NAME |  START_KEY |    END_KEY |  SIZE |   REQ |   LOCALITY |
 ----------------------------- | ---------------------------------------------------------- | ---------- | ---------- | ----- | ----- | ---------- |
 localhost,16020,1708958003582 |  TSALTIDX,,1708958225506.a72b20c15cecba23289a03cd6956ec15. |            |            |     0 |     3 |        0.0 |
 1 rows

However, when I query through the index, I get an {{ArithmeticException}} for 
divide by zero.

0: jdbc:phoenix:localhost> select /*+ INDEX(TSALT TSALTIDX) */ * from TSALT;
Caused by: java.lang.ArithmeticException: / by zero
        at org.apache.phoenix.schema.SaltingUtil.getSaltingByte(SaltingUtil.java:79)
        at org.apache.phoenix.index.IndexMaintainer.buildDataRowKey(IndexMaintainer.java:916)
        at org.apache.phoenix.coprocessor.UncoveredIndexRegionScanner.scanIndexTableRows(UncoveredIndexRegionScanner.java:253)
        at org.apache.phoenix.coprocessor.UncoveredIndexRegionScanner.scanIndexTableRows(UncoveredIndexRegionScanner.java:274)
        at org.apache.phoenix.coprocessor.UncoveredIndexRegionScanner.next(UncoveredIndexRegionScanner.java:382)
        at org.apache.phoenix.coprocessor.BaseRegionScanner.nextRaw(BaseRegionScanner.java:56)
        at org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:257)

My suspicion is that the table cells have the number of buckets stored as zero, 
so {{PTableImpl}} for the index gets constructed to return 0 from 
{{getBucketNum()}}, and this causes the divide-by-zero error.

  was:
I have not done extensive testing, but when I specified {{SALT_BUCKETS=0}} 
while creating an index, I got no error; this is a valid use case for disabling 
salting on an index when the data table is salted:

{{create table tsalt (k INTEGER PRIMARY KEY, v1 INTEGER) SALT_BUCKETS=4;}}
{{upsert into tsalt (k, v1) VALUES (1, 100);}}
{{create uncovered index tsaltidx on tsalt (PHOENIX_ROW_TIMESTAMP()) 
SALT_BUCKETS=0;}}

From the schema and the HBase regions, it appears to be treated as no salting.

0: jdbc:phoenix:localhost> select salt_buckets from system.catalog where table_name = 'TSALTIDX' and salt_buckets is not null;
+--------------+
| SALT_BUCKETS |
+--------------+
+--------------+

hbase:001:0> list_regions 'TSALTIDX'
                   SERVER_NAME |                                                REGION_NAME |  START_KEY |    END_KEY |  SIZE |   REQ |   LOCALITY |
 ----------------------------- | ---------------------------------------------------------- | ---------- | ---------- | ----- | ----- | ---------- |
 localhost,16020,1708958003582 |  TSALTIDX,,1708958225506.a72b20c15cecba23289a03cd6956ec15. |            |            |     0 |     3 |        0.0 |
 1 rows

However, when I query through the index, I get an {{ArithmeticException}} for 
divide by zero.

0: jdbc:phoenix:localhost> select /*+ INDEX(TSALT TSALTIDX) */ * from TSALT;
Caused by: java.lang.ArithmeticException: / by zero
        at org.apache.phoenix.schema.SaltingUtil.getSaltingByte(SaltingUtil.java:79)
        at org.apache.phoenix.index.IndexMaintainer.buildDataRowKey(IndexMaintainer.java:916)
        at org.apache.phoenix.coprocessor.UncoveredIndexRegionScanner.scanIndexTableRows(UncoveredIndexRegionScanner.java:253)
        at org.apache.phoenix.coprocessor.UncoveredIndexRegionScanner.scanIndexTableRows(UncoveredIndexRegionScanner.java:274)
        at org.apache.phoenix.coprocessor.UncoveredIndexRegionScanner.next(UncoveredIndexRegionScanner.java:382)
        at org.apache.phoenix.coprocessor.BaseRegionScanner.nextRaw(BaseRegionScanner.java:56)
        at org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:257)

My suspicion is that the table cells have the number of buckets stored as zero, 
so {{PTableImpl}} for the index gets constructed to return 0 from 
{{getBucketNum()}}, and this causes the divide-by-zero error.

[jira] [Created] (PHOENIX-7238) Zero is accepted for SALT_BUCKETS, but queries fail

2024-02-26 Thread Hari Krishna Dara (Jira)
Hari Krishna Dara created PHOENIX-7238:
--

 Summary: Zero is accepted for SALT_BUCKETS, but queries fail
 Key: PHOENIX-7238
 URL: https://issues.apache.org/jira/browse/PHOENIX-7238
 Project: Phoenix
  Issue Type: Bug
Reporter: Hari Krishna Dara


I have not done extensive testing, but when I specified {{SALT_BUCKETS=0}} 
while creating an index, I got no error; this is a valid use case for disabling 
salting on an index when the data table is salted:

{{create table tsalt (k INTEGER PRIMARY KEY, v1 INTEGER) SALT_BUCKETS=4;}}
{{upsert into tsalt (k, v1) VALUES (1, 100);}}
{{create uncovered index tsaltidx on tsalt (PHOENIX_ROW_TIMESTAMP()) 
SALT_BUCKETS=0;}}

From the schema and the HBase regions, it appears to be treated as no salting.

0: jdbc:phoenix:localhost> select salt_buckets from system.catalog where table_name = 'TSALTIDX' and salt_buckets is not null;
+--------------+
| SALT_BUCKETS |
+--------------+
+--------------+

hbase:001:0> list_regions 'TSALTIDX'
                   SERVER_NAME |                                                REGION_NAME |  START_KEY |    END_KEY |  SIZE |   REQ |   LOCALITY |
 ----------------------------- | ---------------------------------------------------------- | ---------- | ---------- | ----- | ----- | ---------- |
 localhost,16020,1708958003582 |  TSALTIDX,,1708958225506.a72b20c15cecba23289a03cd6956ec15. |            |            |     0 |     3 |        0.0 |
 1 rows

However, when I query through the index, I get an {{ArithmeticException}} for 
divide by zero.

0: jdbc:phoenix:localhost> select /*+ INDEX(TSALT TSALTIDX) */ * from TSALT;
Caused by: java.lang.ArithmeticException: / by zero
        at org.apache.phoenix.schema.SaltingUtil.getSaltingByte(SaltingUtil.java:79)
        at org.apache.phoenix.index.IndexMaintainer.buildDataRowKey(IndexMaintainer.java:916)
        at org.apache.phoenix.coprocessor.UncoveredIndexRegionScanner.scanIndexTableRows(UncoveredIndexRegionScanner.java:253)
        at org.apache.phoenix.coprocessor.UncoveredIndexRegionScanner.scanIndexTableRows(UncoveredIndexRegionScanner.java:274)
        at org.apache.phoenix.coprocessor.UncoveredIndexRegionScanner.next(UncoveredIndexRegionScanner.java:382)
        at org.apache.phoenix.coprocessor.BaseRegionScanner.nextRaw(BaseRegionScanner.java:56)
        at org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:257)

My suspicion is that the table cells have the number of buckets stored as zero, 
so {{PTableImpl}} for the index gets constructed to return 0 from 
{{getBucketNum()}}, and this causes the divide-by-zero error.
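The suspected failure mode can be illustrated in isolation. The sketch below is hypothetical and only loosely modeled on {{SaltingUtil.getSaltingByte}} (the hash shown is illustrative, not Phoenix's); it demonstrates just one thing: a bucket count of zero makes the modulo throw.

```java
import java.util.Arrays;

public class SaltingSketch {
    // Hypothetical stand-in for SaltingUtil.getSaltingByte: the salt byte is
    // derived from a hash of the row key modulo the bucket count.
    static byte getSaltingByte(byte[] rowKey, int bucketNum) {
        int hash = Arrays.hashCode(rowKey); // illustrative hash, not Phoenix's
        // If SALT_BUCKETS=0 is persisted as-is and getBucketNum() returns 0,
        // this modulo throws java.lang.ArithmeticException: / by zero.
        return (byte) Math.abs(hash % bucketNum);
    }

    public static void main(String[] args) {
        System.out.println(getSaltingByte(new byte[]{1, 2, 3}, 4));
        try {
            getSaltingByte(new byte[]{1, 2, 3}, 0);
        } catch (ArithmeticException e) {
            System.out.println("caught: " + e.getMessage()); // caught: / by zero
        }
    }
}
```

If this matches the real code path, a fix would be either to reject {{SALT_BUCKETS=0}} at DDL time or to normalize a stored zero to "no salting" when constructing {{PTableImpl}}.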



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-7013) CDC DQL Select query parser

2024-01-01 Thread Hari Krishna Dara (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Krishna Dara resolved PHOENIX-7013.

Resolution: Fixed

PR: [https://github.com/apache/phoenix/pull/1766]

Change has been merged into the feature branch.

> CDC DQL Select query parser
> ---
>
> Key: PHOENIX-7013
> URL: https://issues.apache.org/jira/browse/PHOENIX-7013
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Hari Krishna Dara
>Priority: Major
>
> The purpose of this sub-task is to provide DQL query capability for CDC 
> (Change Data Capture) feature.
> The SELECT query parser can identify the given CDC table based on the table 
> type defined in SYSTEM.CATALOG and it should be able to parse qualifiers (pre 
> | post | latest | all) from the query.
> CDC DQL query sample:
>  
> {code:java}
> Select * from <cdc table name> where PHOENIX_ROW_TIMESTAMP() >= TO_DATE( …) 
> AND PHOENIX_ROW_TIMESTAMP() < TO_DATE( …)
> {code}
> This query would return the rows of the CDC table. The above select query can 
> carry a new CDC hint to return just the actual change, the pre, post, or 
> latest image of the row, or a combination of these.
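The qualifier parsing described above can be sketched roughly as follows. Everything here (class and method names, the comma-separated input shape) is a hypothetical illustration; the real parser lives in the Phoenix grammar.

```java
import java.util.EnumSet;
import java.util.Locale;

public class CdcQualifierSketch {
    // The four qualifiers named in the description: pre | post | latest | all.
    enum Qualifier { PRE, POST, LATEST, ALL }

    // Parse a comma-separated qualifier list, case-insensitively.
    static EnumSet<Qualifier> parse(String csv) {
        EnumSet<Qualifier> result = EnumSet.noneOf(Qualifier.class);
        for (String token : csv.split(",")) {
            result.add(Qualifier.valueOf(token.trim().toUpperCase(Locale.ROOT)));
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(parse("pre, post")); // prints [PRE, POST]
    }
}
```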



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-7013) CDC DQL Select query parser

2024-01-01 Thread Hari Krishna Dara (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Krishna Dara reassigned PHOENIX-7013:
--

Assignee: Hari Krishna Dara

> CDC DQL Select query parser
> ---
>
> Key: PHOENIX-7013
> URL: https://issues.apache.org/jira/browse/PHOENIX-7013
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Hari Krishna Dara
>Priority: Major
>
> The purpose of this sub-task is to provide DQL query capability for CDC 
> (Change Data Capture) feature.
> The SELECT query parser can identify the given CDC table based on the table 
> type defined in SYSTEM.CATALOG and it should be able to parse qualifiers (pre 
> | post | latest | all) from the query.
> CDC DQL query sample:
>  
> {code:java}
> Select * from <cdc table name> where PHOENIX_ROW_TIMESTAMP() >= TO_DATE( …) 
> AND PHOENIX_ROW_TIMESTAMP() < TO_DATE( …)
> {code}
> This query would return the rows of the CDC table. The above select query can 
> carry a new CDC hint to return just the actual change, the pre, post, or 
> latest image of the row, or a combination of these.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-7014) CDC query compiler and optimizer

2024-01-01 Thread Hari Krishna Dara (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Krishna Dara resolved PHOENIX-7014.

Resolution: Fixed

PR: [https://github.com/apache/phoenix/pull/1766]

Merged into the feature branch.

> CDC query compiler and optimizer
> 
>
> Key: PHOENIX-7014
> URL: https://issues.apache.org/jira/browse/PHOENIX-7014
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Hari Krishna Dara
>Priority: Major
>
> For the CDC table type, the query optimizer should be able to query the 
> uncovered global index table together with the data table associated with 
> the given CDC table.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (PHOENIX-7001) Change Data Capture leveraging Max Lookback and Uncovered Indexes

2024-01-01 Thread Hari Krishna Dara (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Krishna Dara reopened PHOENIX-7001:


Resolved wrong item.

> Change Data Capture leveraging Max Lookback and Uncovered Indexes
> -
>
> Key: PHOENIX-7001
> URL: https://issues.apache.org/jira/browse/PHOENIX-7001
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Kadir Ozdemir
>Priority: Major
>
> The use cases for a Change Data Capture (CDC) feature are centered around 
> capturing changes to a given table (or updatable view) as these changes 
> happen in near real-time. A CDC application can retrieve changes in real time, 
> with some delay, or even retrieve the same set of changes multiple times. 
> This means the CDC use case can be generalized as time range queries where 
> the time range is typically short, such as the last x minutes or hours, or 
> expressed as a specific time range in the last n days, where n is typically 
> less than 7.
> A change is an update in a row. That is, a change is either updating one or 
> more columns of a table for a given row or deleting a row. It is desirable to 
> provide these changes in the order of their arrival. One can visualize the 
> delivery of these changes through a stream from a Phoenix table to the 
> application that is initiated by the application similar to the delivery of 
> any other Phoenix query results. The difference is that a regular query 
> result includes at most one result row for each row satisfying the query and 
> the deleted rows are not visible to the query result while the CDC 
> stream/result can include multiple result rows for each row and the result 
> includes deleted rows. Some use cases need to also get the pre and/or post 
> image of the row along with a change on the row. 
> The design proposed here leverages Phoenix Max Lookback and Uncovered (Global 
> or Local) Indexes. The max lookback feature retains recent changes to a 
> table, that is, the changes that have been done in the last x days typically. 
> This means that the max lookback feature already captures the changes to a 
> given table. Currently, the max lookback age is configurable at the cluster 
> level. We need to extend this capability to be able to configure the max 
> lookback age at the table level so that each table can have a different max 
> lookback age based on its CDC application requirements.
> To deliver the changes in the order of their arrival, we need a time based 
> index. This index should be uncovered as the changes are already retained in 
> the table by the max lookback feature. The arrival time can be defined as the 
> mutation timestamp generated by the server, or a user-specified timestamp (or 
> any other long integer) column. An uncovered index would allow us to 
> access the changes efficiently and in order. Changes to an index table are 
> also preserved by the max lookback feature.
> A CDC feature can be composed of the following components:
>  * {*}CDCUncoveredIndexRegionScanner{*}: This is a server side scanner on an 
> uncovered index used for CDC. This can inherit UncoveredIndexRegionScanner. 
> It goes through index table rows using a raw scan to identify data table rows 
> and retrieves these rows using a raw scan. Using the time range, it forms a 
> JSON blob to represent changes to the row including pre and/or post row 
> images.
>  * {*}CDC Query Compiler{*}: This is a client side component. It prepares the 
> scan object based on the given CDC query statement. 
>  * {*}CDC DDL Compiler{*}: This is a client side component. It creates the 
> time based uncovered (global/local) index based on the given CDC DDL 
> statement and a virtual table of CDC type. CDC will be a new table type. 
> A CDC DDL syntax to create CDC on a (data) table can be as follows: 
> Create CDC <cdc name> on <table name> (PHOENIX_ROW_TIMESTAMP() | 
> <column name>) INCLUDE (pre | post | latest | all) TTL = <time to live in 
> seconds> INDEX = <global/local> SALT_BUCKETS=<n>
> The above CDC DDL creates a virtual CDC table and an uncovered index. The CDC 
> table PK columns start with the timestamp or user defined column and continue 
> with the data table PK columns. The CDC table includes one non-PK column 
> which is a JSON column. The change is expressed in this JSON column in 
> multiple ways based on the CDC DDL or query statement. The change can be 
> expressed as just the mutation for the change, the latest image of the row, 
> the pre image of the row (the image before the change), the post image, or 
> any combination of these. The CDC table is not a physical table on disk. It 
> is just a virtual table to be used in a CDC query. Phoenix stores just the 
> metadata for this virtual table. 
> A CDC query can be as follows:
> Select * from <cdc table name> where PHOENIX_ROW_TIMESTAMP() >= TO_DATE( …) 
> AND PHOENIX_ROW_TIMESTAMP() < TO_DATE( …)

[jira] [Resolved] (PHOENIX-7001) Change Data Capture leveraging Max Lookback and Uncovered Indexes

2024-01-01 Thread Hari Krishna Dara (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Krishna Dara resolved PHOENIX-7001.

Resolution: Fixed

Change merged into the feature branch.

> Change Data Capture leveraging Max Lookback and Uncovered Indexes
> -
>
> Key: PHOENIX-7001
> URL: https://issues.apache.org/jira/browse/PHOENIX-7001
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Kadir Ozdemir
>Priority: Major
>
> The use cases for a Change Data Capture (CDC) feature are centered around 
> capturing changes to a given table (or updatable view) as these changes 
> happen in near real-time. A CDC application can retrieve changes in real time, 
> with some delay, or even retrieve the same set of changes multiple times. 
> This means the CDC use case can be generalized as time range queries where 
> the time range is typically short, such as the last x minutes or hours, or 
> expressed as a specific time range in the last n days, where n is typically 
> less than 7.
> A change is an update in a row. That is, a change is either updating one or 
> more columns of a table for a given row or deleting a row. It is desirable to 
> provide these changes in the order of their arrival. One can visualize the 
> delivery of these changes through a stream from a Phoenix table to the 
> application that is initiated by the application similar to the delivery of 
> any other Phoenix query results. The difference is that a regular query 
> result includes at most one result row for each row satisfying the query and 
> the deleted rows are not visible to the query result while the CDC 
> stream/result can include multiple result rows for each row and the result 
> includes deleted rows. Some use cases need to also get the pre and/or post 
> image of the row along with a change on the row. 
> The design proposed here leverages Phoenix Max Lookback and Uncovered (Global 
> or Local) Indexes. The max lookback feature retains recent changes to a 
> table, that is, the changes that have been done in the last x days typically. 
> This means that the max lookback feature already captures the changes to a 
> given table. Currently, the max lookback age is configurable at the cluster 
> level. We need to extend this capability to be able to configure the max 
> lookback age at the table level so that each table can have a different max 
> lookback age based on its CDC application requirements.
> To deliver the changes in the order of their arrival, we need a time based 
> index. This index should be uncovered as the changes are already retained in 
> the table by the max lookback feature. The arrival time can be defined as the 
> mutation timestamp generated by the server, or a user-specified timestamp (or 
> any other long integer) column. An uncovered index would allow us to 
> access the changes efficiently and in order. Changes to an index table are 
> also preserved by the max lookback feature.
> A CDC feature can be composed of the following components:
>  * {*}CDCUncoveredIndexRegionScanner{*}: This is a server side scanner on an 
> uncovered index used for CDC. This can inherit UncoveredIndexRegionScanner. 
> It goes through index table rows using a raw scan to identify data table rows 
> and retrieves these rows using a raw scan. Using the time range, it forms a 
> JSON blob to represent changes to the row including pre and/or post row 
> images.
>  * {*}CDC Query Compiler{*}: This is a client side component. It prepares the 
> scan object based on the given CDC query statement. 
>  * {*}CDC DDL Compiler{*}: This is a client side component. It creates the 
> time based uncovered (global/local) index based on the given CDC DDL 
> statement and a virtual table of CDC type. CDC will be a new table type. 
> A CDC DDL syntax to create CDC on a (data) table can be as follows: 
> Create CDC <cdc name> on <table name> (PHOENIX_ROW_TIMESTAMP() | 
> <column name>) INCLUDE (pre | post | latest | all) TTL = <time to live in 
> seconds> INDEX = <global/local> SALT_BUCKETS=<n>
> The above CDC DDL creates a virtual CDC table and an uncovered index. The CDC 
> table PK columns start with the timestamp or user defined column and continue 
> with the data table PK columns. The CDC table includes one non-PK column 
> which is a JSON column. The change is expressed in this JSON column in 
> multiple ways based on the CDC DDL or query statement. The change can be 
> expressed as just the mutation for the change, the latest image of the row, 
> the pre image of the row (the image before the change), the post image, or 
> any combination of these. The CDC table is not a physical table on disk. It 
> is just a virtual table to be used in a CDC query. Phoenix stores just the 
> metadata for this virtual table. 
> A CDC query can be as follows:
> Select * from <cdc table name> where PHOENIX_ROW_TIMESTAMP() >= TO_DATE( …) 
> AND PHOENIX_ROW_TIMESTAMP() < TO_DATE( …)
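The CDC time-range query shape described in this design can be sketched as below. The table name and timestamp literals are hypothetical placeholders; only the {{PHOENIX_ROW_TIMESTAMP()}} range predicate comes from the text.

```java
public class CdcQueryBuilder {
    // Compose the CDC time-range SELECT described in the design. All names
    // passed in are caller-supplied placeholders, not real Phoenix objects.
    static String timeRangeQuery(String cdcTable, String fromTs, String toTs) {
        return "SELECT * FROM " + cdcTable
                + " WHERE PHOENIX_ROW_TIMESTAMP() >= TO_DATE('" + fromTs + "')"
                + " AND PHOENIX_ROW_TIMESTAMP() < TO_DATE('" + toTs + "')";
    }

    public static void main(String[] args) {
        // "MY_CDC" is an illustrative CDC table name.
        System.out.println(timeRangeQuery("MY_CDC",
                "2024-01-01 00:00:00", "2024-01-01 01:00:00"));
    }
}
```

In a real application the resulting string would be run through a Phoenix JDBC {{Statement}}, with the half-open time range advanced on each poll so changes are consumed exactly once per window.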

[jira] [Created] (PHOENIX-7154) SELECT query with undefined column on an UNCOVERED INDEX results in StringIndexOutOfBoundsException

2023-12-15 Thread Hari Krishna Dara (Jira)
Hari Krishna Dara created PHOENIX-7154:
--

 Summary: SELECT query with undefined column on an UNCOVERED INDEX 
results in StringIndexOutOfBoundsException 
 Key: PHOENIX-7154
 URL: https://issues.apache.org/jira/browse/PHOENIX-7154
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.1.4
Reporter: Hari Krishna Dara


If you run a SELECT query directly on an uncovered index with a column name 
that is undefined for that index, you get a 
{{java.lang.StringIndexOutOfBoundsException}}. In the sample below, you can 
see that the query works fine with a valid column, but an undefined column 
causes the exception.

 

{{0: jdbc:phoenix:localhost> create table t (k INTEGER PRIMARY KEY, v1 INTEGER);}}
{{No rows affected (0.64 seconds)}}
{{0: jdbc:phoenix:localhost> create uncovered index tuidx on t (PHOENIX_ROW_TIMESTAMP());}}
{{No rows affected (5.671 seconds)}}
{{0: jdbc:phoenix:localhost> select abc from tuidx;}}
{{java.lang.StringIndexOutOfBoundsException: String index out of range: -1}}
{{        at java.lang.String.substring(String.java:1967)}}
{{        at org.apache.phoenix.util.IndexUtil.getDataColumnFamilyName(IndexUtil.java:200)}}
{{        at org.apache.phoenix.schema.IndexUncoveredDataColumnRef.<init>(IndexUncoveredDataColumnRef.java:51)}}
{{        at org.apache.phoenix.compile.TupleProjectionCompiler$ColumnRefVisitor.visit(TupleProjectionCompiler.java:269)}}
{{        at org.apache.phoenix.compile.TupleProjectionCompiler$ColumnRefVisitor.visit(TupleProjectionCompiler.java:245)}}
{{        at org.apache.phoenix.parse.ColumnParseNode.accept(ColumnParseNode.java:56)}}
{{        at org.apache.phoenix.compile.TupleProjectionCompiler.createProjectedTable(TupleProjectionCompiler.java:127)}}
{{        at org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:701)}}
{{        at org.apache.phoenix.compile.QueryCompiler.compileSingleQuery(QueryCompiler.java:667)}}
{{        at org.apache.phoenix.compile.QueryCompiler.compileSelect(QueryCompiler.java:249)}}
{{        at org.apache.phoenix.compile.QueryCompiler.compile(QueryCompiler.java:181)}}
{{        at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:724)}}
{{        at org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:687)}}
{{        at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:368)}}
{{        at org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:349)}}
{{        at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)}}
{{        at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:349)}}
{{        at org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:335)}}
{{        at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:2362)}}
{{        at sqlline.Commands.executeSingleQuery(Commands.java:1054)}}
{{        at sqlline.Commands.execute(Commands.java:1003)}}
{{        at sqlline.Commands.sql(Commands.java:967)}}
{{        at sqlline.SqlLine.dispatch(SqlLine.java:734)}}
{{        at sqlline.SqlLine.begin(SqlLine.java:541)}}
{{        at sqlline.SqlLine.start(SqlLine.java:267)}}
{{        at sqlline.SqlLine.main(SqlLine.java:206)}}
{{0: jdbc:phoenix:localhost> select ":K" from tuidx;}}
{{+----+}}
{{| :K |}}
{{+----+}}
{{+----+}}
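The exception can be reproduced in isolation. The sketch below is an assumption modeled on what {{IndexUtil.getDataColumnFamilyName}} appears to do with a {{CF:COL}}-style index column name; the separator and method shape are illustrative, not the actual Phoenix code.

```java
public class FamilyNameSketch {
    // Hypothetical stand-in: derive the data column family from an index
    // column name of the form "CF:COL" (e.g. "0:V1").
    static String dataColumnFamily(String indexColumnName) {
        // For an undefined column there is no ':' separator, so indexOf
        // returns -1 and substring(0, -1) throws
        // StringIndexOutOfBoundsException ("String index out of range: -1"
        // on Java 8, matching the stack trace above).
        return indexColumnName.substring(0, indexColumnName.indexOf(':'));
    }

    public static void main(String[] args) {
        System.out.println(dataColumnFamily("0:V1")); // prints 0
        try {
            dataColumnFamily("ABC");
        } catch (StringIndexOutOfBoundsException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

If this is the real code path, the fix would be to check for a missing separator and raise a proper column-not-found error instead.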



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-7014) CDC query compiler and optimizer

2023-11-14 Thread Hari Krishna Dara (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Krishna Dara reassigned PHOENIX-7014:
--

Assignee: Hari Krishna Dara

> CDC query compiler and optimizer
> 
>
> Key: PHOENIX-7014
> URL: https://issues.apache.org/jira/browse/PHOENIX-7014
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Assignee: Hari Krishna Dara
>Priority: Major
>
> For the CDC table type, the query optimizer should be able to query the 
> uncovered global index table together with the data table associated with 
> the given CDC table.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-7055) Usage improvements for sqlline.py

2023-09-28 Thread Hari Krishna Dara (Jira)
Hari Krishna Dara created PHOENIX-7055:
--

 Summary: Usage improvements for sqlline.py
 Key: PHOENIX-7055
 URL: https://issues.apache.org/jira/browse/PHOENIX-7055
 Project: Phoenix
  Issue Type: Improvement
Reporter: Hari Krishna Dara
Assignee: Hari Krishna Dara


A few small improvements to make this tool easier to use:
 * It should be possible to start sqlline without making a connection. This is 
useful for opening a custom connection from the prompt and also for simply 
browsing through the history.
 * Start in debug mode so that we can connect from a debug client.
 * Fix bugs in the existing boolean option interpretations.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (PHOENIX-4761) java.lang.NoClassDefFoundError: com/lmax/disruptor/EventFactory

2018-05-31 Thread Hari Krishna Dara (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16497646#comment-16497646
 ] 

Hari Krishna Dara edited comment on PHOENIX-4761 at 6/1/18 6:51 AM:


I looked at the server configuration and it seems that Disruptor is not part of 
the HBase server dependencies in 0.98 (yes, we are still using this version):

https://github.com/apache/hbase/blob/0.98/hbase-server/pom.xml

Whereas the newer HBase versions have it, e.g., see:

https://github.com/apache/hbase/blob/branch-1.2/hbase-server/pom.xml#L557




was (Author: haridsv):
I looked at the server configuration and it seems that Disruptor is not part of 
the server dependencies in 0.98 (yes, we are still using this version):

https://github.com/apache/hbase/blob/0.98/hbase-server/pom.xml

Whereas the newer versions have it, e.g., see:

https://github.com/apache/hbase/blob/branch-1.2/hbase-server/pom.xml#L557



> java.lang.NoClassDefFoundError: com/lmax/disruptor/EventFactory
> ---
>
> Key: PHOENIX-4761
> URL: https://issues.apache.org/jira/browse/PHOENIX-4761
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Hari Krishna Dara
>Priority: Major
> Attachments: PHOENIX-4761.patch
>
>
> A dependency on this third-party library was added recently, but it is not 
> made available at runtime via the assembly, so I am seeing the exception 
> below:
> {noformat}
> Caused by: java.lang.NoClassDefFoundError: com/lmax/disruptor/EventFactory
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.<init>(ConnectionQueryServicesImpl.java:414)
>   at org.apache.phoenix.jdbc.PhoenixDriver$3.call(PhoenixDriver.java:248)
>   at org.apache.phoenix.jdbc.PhoenixDriver$3.call(PhoenixDriver.java:241)
>   at 
> com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4796)
>   at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3589)
>   at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2374)
>   at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2337)
>   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2252)
>   at com.google.common.cache.LocalCache.get(LocalCache.java:3990)
>   at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4793)
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:241)
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
>   at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
>   at java.sql.DriverManager.getConnection(DriverManager.java:664)
>   at java.sql.DriverManager.getConnection(DriverManager.java:270)
> {noformat}
> The



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4761) java.lang.NoClassDefFoundError: com/lmax/disruptor/EventFactory

2018-05-31 Thread Hari Krishna Dara (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16497646#comment-16497646
 ] 

Hari Krishna Dara commented on PHOENIX-4761:


I looked at the server configuration and it seems that Disruptor is not part of 
the server dependencies in 0.98 (yes, we are still using this version):

https://github.com/apache/hbase/blob/0.98/hbase-server/pom.xml

Whereas the newer versions have it, e.g., see:

https://github.com/apache/hbase/blob/branch-1.2/hbase-server/pom.xml#L557



> java.lang.NoClassDefFoundError: com/lmax/disruptor/EventFactory
> ---
>
> Key: PHOENIX-4761
> URL: https://issues.apache.org/jira/browse/PHOENIX-4761
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Hari Krishna Dara
>Priority: Major
> Attachments: PHOENIX-4761.patch
>
>
> A dependency on this third-party library was added recently, but it is not 
> made available at runtime via the assembly, so I am seeing the exception 
> below:
> {noformat}
> Caused by: java.lang.NoClassDefFoundError: com/lmax/disruptor/EventFactory
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.<init>(ConnectionQueryServicesImpl.java:414)
>   at org.apache.phoenix.jdbc.PhoenixDriver$3.call(PhoenixDriver.java:248)
>   at org.apache.phoenix.jdbc.PhoenixDriver$3.call(PhoenixDriver.java:241)
>   at 
> com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4796)
>   at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3589)
>   at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2374)
>   at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2337)
>   at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2252)
>   at com.google.common.cache.LocalCache.get(LocalCache.java:3990)
>   at 
> com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4793)
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:241)
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
>   at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
>   at java.sql.DriverManager.getConnection(DriverManager.java:664)
>   at java.sql.DriverManager.getConnection(DriverManager.java:270)
> {noformat}
> The



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4761) java.lang.NoClassDefFoundError: com/lmax/disruptor/EventFactory

2018-05-31 Thread Hari Krishna Dara (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16497631#comment-16497631
 ] 

Hari Krishna Dara commented on PHOENIX-4761:


Looks like I can't change the status or resolution, but this can be marked 
invalid or the equivalent.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4761) java.lang.NoClassDefFoundError: com/lmax/disruptor/EventFactory

2018-05-31 Thread Hari Krishna Dara (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16497630#comment-16497630
 ] 

Hari Krishna Dara commented on PHOENIX-4761:


Thanks [~elserj] and [~an...@apache.org] for your replies. Our clients are 
using phoenix-core instead of phoenix-client, so that explains the client-side 
issue here. Our server-side programs do have the HBase classpath along with the 
server jar, so I am not sure why it didn't find the Disruptor classes. It looks 
more like an issue with our configuration than a Phoenix issue, so I will close 
this Jira.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4761) java.lang.NoClassDefFoundError: com/lmax/disruptor/EventFactory

2018-05-31 Thread Hari Krishna Dara (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16496552#comment-16496552
 ] 

Hari Krishna Dara commented on PHOENIX-4761:


[~elserj] You are right, it is a client-side exception; I am not sure why I 
updated the server jar. I think I misunderstood which jar the tool was loading. 
It was running on the server and must have had both in the classpath, as the 
fix worked (it loaded the classes from the server jar instead of the client 
jar). I just noticed that another one of the tests that runs on the client 
failed for the same classes, so yes, the fix was not right. I will come back 
with another.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4761) java.lang.NoClassDefFoundError: com/lmax/disruptor/EventFactory

2018-05-31 Thread Hari Krishna Dara (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Krishna Dara updated PHOENIX-4761:
---
Attachment: PHOENIX-4761.patch




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4761) java.lang.NoClassDefFoundError: com/lmax/disruptor/EventFactory

2018-05-30 Thread Hari Krishna Dara (JIRA)
Hari Krishna Dara created PHOENIX-4761:
--

 Summary: java.lang.NoClassDefFoundError: 
com/lmax/disruptor/EventFactory
 Key: PHOENIX-4761
 URL: https://issues.apache.org/jira/browse/PHOENIX-4761
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.0
Reporter: Hari Krishna Dara


There was a recent additional dependency on this 3rd party library, but it is 
not made available at runtime via the assembly, so I am seeing the below 
exception:

{noformat}
Caused by: java.lang.NoClassDefFoundError: com/lmax/disruptor/EventFactory
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.(ConnectionQueryServicesImpl.java:414)
at org.apache.phoenix.jdbc.PhoenixDriver$3.call(PhoenixDriver.java:248)
at org.apache.phoenix.jdbc.PhoenixDriver$3.call(PhoenixDriver.java:241)
at 
com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4796)
at 
com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3589)
at 
com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2374)
at 
com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2337)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2252)
at com.google.common.cache.LocalCache.get(LocalCache.java:3990)
at 
com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4793)
at 
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:241)
at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:270)
{noformat}

The




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4040) Protect against NPEs in org.apache.phoenix.compile.DeleteCompiler.deleteRows

2017-07-18 Thread Hari Krishna Dara (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Krishna Dara updated PHOENIX-4040:
---
Description: 
We are occasionally seeing the below NPE coming from Phoenix code. We don't 
currently have a repro case for this, but since it is an NPE, Phoenix code 
should protect against it.

{noformat}
org.apache.phoenix.exception.PhoenixIOException: java.lang.NullPointerException
at 
org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:113)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:854)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:798)
at 
org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
at 
org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
at 
org.apache.phoenix.compile.DeleteCompiler$3.execute(DeleteCompiler.java:668)
at 
org.apache.phoenix.compile.DeleteCompiler$MultiDeleteMutationPlan.execute(DeleteCompiler.java:284)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:355)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:338)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:337)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:251)
at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
at 
phoenix.connection.ProtectedPhoenixPreparedStatement.execute(ProtectedPhoenixPreparedStatement.java:74)
...
Caused by: java.util.concurrent.ExecutionException: 
java.lang.NullPointerException
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:849)
... 40 more
Caused by: java.lang.NullPointerException
at 
org.apache.phoenix.compile.DeleteCompiler.deleteRows(DeleteCompiler.java:105)
at 
org.apache.phoenix.compile.DeleteCompiler.access$000(DeleteCompiler.java:93)
at 
org.apache.phoenix.compile.DeleteCompiler$DeletingParallelIteratorFactory.mutate(DeleteCompiler.java:219)
at 
org.apache.phoenix.compile.MutatingParallelIteratorFactory.newIterator(MutatingParallelIteratorFactory.java:59)
at 
org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:114)
at 
org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:106)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
... 3 more
{noformat}
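
The defensive fix being requested is essentially a null guard at the deleteRows entry point. A minimal sketch of the pattern (illustrative only; the class and method names do not match the actual DeleteCompiler internals):

```java
import java.util.Collections;
import java.util.List;

// Illustrative null-guard pattern; not the actual Phoenix DeleteCompiler code.
public class DeleteRowsGuard {
    // Treat a null row list as "nothing to delete" instead of risking an NPE.
    static int deleteRows(List<byte[]> rowKeys) {
        List<byte[]> safe = (rowKeys == null)
                ? Collections.<byte[]>emptyList()
                : rowKeys;
        return safe.size(); // stand-in for issuing the delete mutations
    }

    public static void main(String[] args) {
        System.out.println(deleteRows(null)); // no NPE: prints 0
    }
}
```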


  was:
We are occasionally seeing the below NPE coming from Phoenix code. We don't 
currently have a repro case for this, but since it is an NPE, Phoenix code 
should protect against it.

{{org.apache.phoenix.exception.PhoenixIOException: 
java.lang.NullPointerException
at 
org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:113)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:854)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:798)
at 
org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
at 
org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
at 
org.apache.phoenix.compile.DeleteCompiler$3.execute(DeleteCompiler.java:668)
at 
org.apache.phoenix.compile.DeleteCompiler$MultiDeleteMutationPlan.execute(DeleteCompiler.java:284)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:355)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:338)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:337)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:251)
at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
at 
phoenix.connection.ProtectedPhoenixPreparedStatement.execute(ProtectedPhoenixPreparedStatement.java:74)
...
Caused by: java.util.concurrent.ExecutionException: 
java.lang.NullPointerException
   

[jira] [Created] (PHOENIX-4040) Protect against NPEs in org.apache.phoenix.compile.DeleteCompiler.deleteRows

2017-07-18 Thread Hari Krishna Dara (JIRA)
Hari Krishna Dara created PHOENIX-4040:
--

 Summary: Protect against NPEs in 
org.apache.phoenix.compile.DeleteCompiler.deleteRows
 Key: PHOENIX-4040
 URL: https://issues.apache.org/jira/browse/PHOENIX-4040
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.10.0
Reporter: Hari Krishna Dara
Priority: Minor


We are occasionally seeing the below NPE coming from Phoenix code. We don't 
currently have a repro case for this, but since it is an NPE, Phoenix code 
should protect against it.

{{org.apache.phoenix.exception.PhoenixIOException: 
java.lang.NullPointerException
at 
org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:113)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:854)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:798)
at 
org.apache.phoenix.iterate.RoundRobinResultIterator.getIterators(RoundRobinResultIterator.java:176)
at 
org.apache.phoenix.iterate.RoundRobinResultIterator.next(RoundRobinResultIterator.java:91)
at 
org.apache.phoenix.compile.DeleteCompiler$3.execute(DeleteCompiler.java:668)
at 
org.apache.phoenix.compile.DeleteCompiler$MultiDeleteMutationPlan.execute(DeleteCompiler.java:284)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:355)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:338)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:337)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:251)
at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
at 
org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
at 
phoenix.connection.ProtectedPhoenixPreparedStatement.execute(ProtectedPhoenixPreparedStatement.java:74)
...
Caused by: java.util.concurrent.ExecutionException: 
java.lang.NullPointerException
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:849)
... 40 more
Caused by: java.lang.NullPointerException
at 
org.apache.phoenix.compile.DeleteCompiler.deleteRows(DeleteCompiler.java:105)
at 
org.apache.phoenix.compile.DeleteCompiler.access$000(DeleteCompiler.java:93)
at 
org.apache.phoenix.compile.DeleteCompiler$DeletingParallelIteratorFactory.mutate(DeleteCompiler.java:219)
at 
org.apache.phoenix.compile.MutatingParallelIteratorFactory.newIterator(MutatingParallelIteratorFactory.java:59)
at 
org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:114)
at 
org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:106)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
... 3 more}}




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-1363) java.lang.ArrayIndexOutOfBoundsException with min/max query on CHAR column with '0' prefixed values

2014-10-20 Thread Hari Krishna Dara (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14176752#comment-14176752
 ] 

Hari Krishna Dara commented on PHOENIX-1363:


I looked at the branches and found nothing newer than 4.1.0, and that is what I 
was using. I tried using master (5.0-SNAPSHOT), but it seemed to be 
incompatible with my cluster. What specific branch are you referring to?

> java.lang.ArrayIndexOutOfBoundsException with min/max query on CHAR column 
> with '0' prefixed values
> ---
>
> Key: PHOENIX-1363
> URL: https://issues.apache.org/jira/browse/PHOENIX-1363
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.1
> Environment: HBase 0.98.4
> RHEL 6.5
>Reporter: Hari Krishna Dara
>  Labels: aggregate, char
>
> While playing with the queries to reproduce PHOENIX-1362, I got the below 
> exception (take the same schema and data as in PHOENIX-1362):
> {noformat}
> 0: jdbc:phoenix:isthbase01-mnds2-1-crd> select min(VAL2), min(VAL3) from TT;
> +++
> | MIN(VAL2)  | MIN(VAL3)  |
> +++
> java.lang.ArrayIndexOutOfBoundsException
> at java.lang.System.arraycopy(Native Method)
> at 
> org.apache.phoenix.schema.KeyValueSchema.writeVarLengthField(KeyValueSchema.java:150)
> at 
> org.apache.phoenix.schema.KeyValueSchema.toBytes(KeyValueSchema.java:116)
> at 
> org.apache.phoenix.schema.KeyValueSchema.toBytes(KeyValueSchema.java:91)
> at 
> org.apache.phoenix.expression.aggregator.Aggregators.toBytes(Aggregators.java:109)
> at 
> org.apache.phoenix.iterate.GroupedAggregatingResultIterator.next(GroupedAggregatingResultIterator.java:83)
> at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
> at 
> org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:732)
> at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2429)
> at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2074)
> at sqlline.SqlLine.print(SqlLine.java:1735)
> at sqlline.SqlLine$Commands.execute(SqlLine.java:3683)
> at sqlline.SqlLine$Commands.sql(SqlLine.java:3584)
> at sqlline.SqlLine.dispatch(SqlLine.java:821)
> at sqlline.SqlLine.begin(SqlLine.java:699)
> at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
> at sqlline.SqlLine.main(SqlLine.java:424)
> 0: jdbc:phoenix:isthbase01-mnds2-1-crd> select min(VAL1), min(VAL2) from TT;
> +++
> | MIN(VAL1)  | MIN(VAL2)  |
> +++
> | 0  | null   |
> +++
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-1362) Min/max query on CHAR columns containing values with '0' as prefix always returns null

2014-10-17 Thread Hari Krishna Dara (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-1362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14174801#comment-14174801
 ] 

Hari Krishna Dara commented on PHOENIX-1362:


I just found a workaround; feel free to lower the priority:

{noformat}
0: jdbc:phoenix:isthbase01-mnds2-1-crd> select min(cast(VAL2 as VARCHAR)), 
max(cast(VAL2 as VARCHAR)) from TT;
+---+---+
| MIN(TO_VARCHAR(VAL2)) | MAX(TO_VARCHAR(VAL2)) |
+---+---+
| 00| 02|
+---+---+
{noformat}
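
For clients that build these aggregates programmatically, the workaround amounts to wrapping the CHAR column in a cast before sending the query. A hypothetical helper showing the rewrite (the class and method names are illustrative, not a Phoenix API):

```java
// Hypothetical query rewriter applying the CAST(... AS VARCHAR) workaround
// for min/max aggregates on CHAR columns.
public class CharMinMaxWorkaround {
    static String minMaxQuery(String column, String table) {
        return "SELECT MIN(CAST(" + column + " AS VARCHAR)), "
                + "MAX(CAST(" + column + " AS VARCHAR)) FROM " + table;
    }

    public static void main(String[] args) {
        // Prints: SELECT MIN(CAST(VAL2 AS VARCHAR)), MAX(CAST(VAL2 AS VARCHAR)) FROM TT
        System.out.println(minMaxQuery("VAL2", "TT"));
    }
}
```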

> Min/max query on CHAR columns containing values with '0' as prefix always 
> returns null
> --
>
> Key: PHOENIX-1362
> URL: https://issues.apache.org/jira/browse/PHOENIX-1362
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.1
> Environment: HBase 0.98.4
> RHEL 6.5
>Reporter: Hari Krishna Dara
>  Labels: aggregate, char
>
> - Create a table with CHAR type and insert a few strings that start with 0.
> - Select min()/max() on the column, you always get null value.
> {noformat}
> 0: jdbc:phoenix:isthbase01-mnds2-1-crd> create table TT(VAL1 integer not 
> null, VAL2 char(2), val3 varchar, VAL4 varchar constraint PK primary key 
> (VAL1));
> 0: jdbc:phoenix:isthbase01-mnds2-1-crd> upsert into TT values (0, '00', '00', 
> '0');
> 0: jdbc:phoenix:isthbase01-mnds2-1-crd> upsert into TT values (1, '01', '01', 
> '1');
> 0: jdbc:phoenix:isthbase01-mnds2-1-crd> upsert into TT values (2, '02', '02', 
> '2');
> 0: jdbc:phoenix:isthbase01-mnds2-1-crd> select * from TT;
> ++--+++
> |VAL1| VAL2 |VAL3|VAL4|
> ++--+++
> | 0  | 00   | 00 | 0  |
> | 1  | 01   | 01 | 1  |
> | 2  | 02   | 02 | 2  |
> ++--+++
> 0: jdbc:phoenix:isthbase01-mnds2-1-crd> select min(VAL1), max(VAL1) from TT;
> +++
> | MIN(VAL1)  | MAX(VAL1)  |
> +++
> | 0  | 2  |
> +++
> 0: jdbc:phoenix:isthbase01-mnds2-1-crd> select min(VAL2), max(VAL2) from TT;
> +++
> | MIN(VAL2)  | MAX(VAL2)  |
> +++
> | null   | null   |
> +++
> 0: jdbc:phoenix:isthbase01-mnds2-1-crd> select min(VAL3), max(VAL3) from TT;
> +++
> | MIN(VAL3)  | MAX(VAL3)  |
> +++
> | 00 | 02 |
> +++
> 0: jdbc:phoenix:isthbase01-mnds2-1-crd> select min(VAL4), max(VAL4) from TT;
> +++
> | MIN(VAL4)  | MAX(VAL4)  |
> +++
> | 0  | 2  |
> +++
> {noformat}
> As you can see, the query on VAL2 which is of type CHAR(2) returns null, 
> while the same exact values on VAL3 which is of type VARCHAR work as expected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-1363) java.lang.ArrayIndexOutOfBoundsException with min/max query on CHAR column with '0' prefixed values

2014-10-17 Thread Hari Krishna Dara (JIRA)
Hari Krishna Dara created PHOENIX-1363:
--

 Summary: java.lang.ArrayIndexOutOfBoundsException with min/max 
query on CHAR column with '0' prefixed values
 Key: PHOENIX-1363
 URL: https://issues.apache.org/jira/browse/PHOENIX-1363
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.1
 Environment: HBase 0.98.4
RHEL 6.5
Reporter: Hari Krishna Dara


While playing with the queries to reproduce PHOENIX-1362, I got the below 
exception (take the same schema and data as in PHOENIX-1362):

{noformat}
0: jdbc:phoenix:isthbase01-mnds2-1-crd> select min(VAL2), min(VAL3) from TT;
+++
| MIN(VAL2)  | MIN(VAL3)  |
+++
java.lang.ArrayIndexOutOfBoundsException
at java.lang.System.arraycopy(Native Method)
at 
org.apache.phoenix.schema.KeyValueSchema.writeVarLengthField(KeyValueSchema.java:150)
at 
org.apache.phoenix.schema.KeyValueSchema.toBytes(KeyValueSchema.java:116)
at 
org.apache.phoenix.schema.KeyValueSchema.toBytes(KeyValueSchema.java:91)
at 
org.apache.phoenix.expression.aggregator.Aggregators.toBytes(Aggregators.java:109)
at 
org.apache.phoenix.iterate.GroupedAggregatingResultIterator.next(GroupedAggregatingResultIterator.java:83)
at 
org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
at 
org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:732)
at sqlline.SqlLine$IncrementalRows.hasNext(SqlLine.java:2429)
at sqlline.SqlLine$TableOutputFormat.print(SqlLine.java:2074)
at sqlline.SqlLine.print(SqlLine.java:1735)
at sqlline.SqlLine$Commands.execute(SqlLine.java:3683)
at sqlline.SqlLine$Commands.sql(SqlLine.java:3584)
at sqlline.SqlLine.dispatch(SqlLine.java:821)
at sqlline.SqlLine.begin(SqlLine.java:699)
at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:441)
at sqlline.SqlLine.main(SqlLine.java:424)
0: jdbc:phoenix:isthbase01-mnds2-1-crd> select min(VAL1), min(VAL2) from TT;
+++
| MIN(VAL1)  | MIN(VAL2)  |
+++
| 0  | null   |
+++
{noformat}




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-1362) Min/max query on CHAR columns containing values with '0' as prefix always returns null

2014-10-17 Thread Hari Krishna Dara (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-1362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Krishna Dara updated PHOENIX-1362:
---
Description: 
- Create a table with CHAR type and insert a few strings that start with 0.
- Select min()/max() on the column, you always get null value.

{noformat}
0: jdbc:phoenix:isthbase01-mnds2-1-crd> create table TT(VAL1 integer not null, 
VAL2 char(2), val3 varchar, VAL4 varchar constraint PK primary key (VAL1));
0: jdbc:phoenix:isthbase01-mnds2-1-crd> upsert into TT values (0, '00', '00', 
'0');
0: jdbc:phoenix:isthbase01-mnds2-1-crd> upsert into TT values (1, '01', '01', 
'1');
0: jdbc:phoenix:isthbase01-mnds2-1-crd> upsert into TT values (2, '02', '02', 
'2');
0: jdbc:phoenix:isthbase01-mnds2-1-crd> select * from TT;
++--+++
|VAL1| VAL2 |VAL3|VAL4|
++--+++
| 0  | 00   | 00 | 0  |
| 1  | 01   | 01 | 1  |
| 2  | 02   | 02 | 2  |
++--+++
0: jdbc:phoenix:isthbase01-mnds2-1-crd> select min(VAL1), max(VAL1) from TT;
+++
| MIN(VAL1)  | MAX(VAL1)  |
+++
| 0  | 2  |
+++
0: jdbc:phoenix:isthbase01-mnds2-1-crd> select min(VAL2), max(VAL2) from TT;
+++
| MIN(VAL2)  | MAX(VAL2)  |
+++
| null   | null   |
+++
0: jdbc:phoenix:isthbase01-mnds2-1-crd> select min(VAL3), max(VAL3) from TT;
+++
| MIN(VAL3)  | MAX(VAL3)  |
+++
| 00 | 02 |
+++
0: jdbc:phoenix:isthbase01-mnds2-1-crd> select min(VAL4), max(VAL4) from TT;
+++
| MIN(VAL4)  | MAX(VAL4)  |
+++
| 0  | 2  |
+++
{noformat}

As you can see, the query on VAL2 which is of type CHAR(2) returns null, while 
the same exact values on VAL3 which is of type VARCHAR work as expected.

  was:
- Create a table with CHAR type and insert a few strings that start with 0.
- Select min()/max() on the column, you always get null value.

0: jdbc:phoenix:isthbase01-mnds2-1-crd> create table TT(VAL1 integer not null, 
VAL2 char(2), val3 varchar, VAL4 varchar constraint PK primary key (VAL1));
0: jdbc:phoenix:isthbase01-mnds2-1-crd> upsert into TT values (0, '00', '00', 
'0');
0: jdbc:phoenix:isthbase01-mnds2-1-crd> upsert into TT values (1, '01', '01', 
'1');
0: jdbc:phoenix:isthbase01-mnds2-1-crd> upsert into TT values (2, '02', '02', 
'2');
0: jdbc:phoenix:isthbase01-mnds2-1-crd> select * from TT;
++--+++
|VAL1| VAL2 |VAL3|VAL4|
++--+++
| 0  | 00   | 00 | 0  |
| 1  | 01   | 01 | 1  |
| 2  | 02   | 02 | 2  |
++--+++
0: jdbc:phoenix:isthbase01-mnds2-1-crd> select min(VAL1), max(VAL1) from TT;
+++
| MIN(VAL1)  | MAX(VAL1)  |
+++
| 0  | 2  |
+++
0: jdbc:phoenix:isthbase01-mnds2-1-crd> select min(VAL2), max(VAL2) from TT;
+++
| MIN(VAL2)  | MAX(VAL2)  |
+++
| null   | null   |
+++
0: jdbc:phoenix:isthbase01-mnds2-1-crd> select min(VAL3), max(VAL3) from TT;
+++
| MIN(VAL3)  | MAX(VAL3)  |
+++
| 00 | 02 |
+++
0: jdbc:phoenix:isthbase01-mnds2-1-crd> select min(VAL4), max(VAL4) from TT;
+++
| MIN(VAL4)  | MAX(VAL4)  |
+++
| 0  | 2  |
+++

As you can see, the query on VAL2 which is of type CHAR(2) returns null, while 
the same exact values on VAL3 which is of type VARCHAR work as expected.


> Min/max query on CHAR columns containing values with '0' as prefix always 
> returns null
> --
>
> Key: PHOENIX-1362
> URL: https://issues.apache.org/jira/browse/PHOENIX-1362
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.1
> Environment: HBase 0.98.4
> RHEL 6.5
>Reporter: Hari Krishna Dara
>  Labels: aggregate, char
>
> - Create a table with CHAR type and insert a few strings that start with 0.
> - Select min()/max() on the column, you always get null value.
> {noformat}
> 0: jdbc:phoenix:isthbase01-mnds2-1-crd> create table TT(VAL1 integer not 
> null, VAL2 char(2), val3 varchar, VAL4 varchar constraint PK pri

[jira] [Created] (PHOENIX-1362) Min/max query on CHAR columns containing values with '0' as prefix always returns null

2014-10-16 Thread Hari Krishna Dara (JIRA)
Hari Krishna Dara created PHOENIX-1362:
--

 Summary: Min/max query on CHAR columns containing values with '0' 
as prefix always returns null
 Key: PHOENIX-1362
 URL: https://issues.apache.org/jira/browse/PHOENIX-1362
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.1
 Environment: HBase 0.98.4
RHEL 6.5
Reporter: Hari Krishna Dara


- Create a table with a CHAR column and insert a few strings that start with '0'.
- Select min()/max() on the column; the result is always null.

0: jdbc:phoenix:isthbase01-mnds2-1-crd> create table TT(VAL1 integer not null, 
VAL2 char(2), val3 varchar, VAL4 varchar constraint PK primary key (VAL1));
0: jdbc:phoenix:isthbase01-mnds2-1-crd> upsert into TT values (0, '00', '00', 
'0');
0: jdbc:phoenix:isthbase01-mnds2-1-crd> upsert into TT values (1, '01', '01', 
'1');
0: jdbc:phoenix:isthbase01-mnds2-1-crd> upsert into TT values (2, '02', '02', 
'2');
0: jdbc:phoenix:isthbase01-mnds2-1-crd> select * from TT;
+-------+-------+-------+-------+
| VAL1  | VAL2  | VAL3  | VAL4  |
+-------+-------+-------+-------+
| 0     | 00    | 00    | 0     |
| 1     | 01    | 01    | 1     |
| 2     | 02    | 02    | 2     |
+-------+-------+-------+-------+
0: jdbc:phoenix:isthbase01-mnds2-1-crd> select min(VAL1), max(VAL1) from TT;
+------------+------------+
| MIN(VAL1)  | MAX(VAL1)  |
+------------+------------+
| 0          | 2          |
+------------+------------+
0: jdbc:phoenix:isthbase01-mnds2-1-crd> select min(VAL2), max(VAL2) from TT;
+------------+------------+
| MIN(VAL2)  | MAX(VAL2)  |
+------------+------------+
| null       | null       |
+------------+------------+
0: jdbc:phoenix:isthbase01-mnds2-1-crd> select min(VAL3), max(VAL3) from TT;
+------------+------------+
| MIN(VAL3)  | MAX(VAL3)  |
+------------+------------+
| 00         | 02         |
+------------+------------+
0: jdbc:phoenix:isthbase01-mnds2-1-crd> select min(VAL4), max(VAL4) from TT;
+------------+------------+
| MIN(VAL4)  | MAX(VAL4)  |
+------------+------------+
| 0          | 2          |
+------------+------------+

As you can see, the query on VAL2, which is of type CHAR(2), returns null, while 
the exact same values on VAL3, which is of type VARCHAR, work as expected.
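Since the identical values aggregate correctly when stored as VARCHAR (VAL3 
above), one possible workaround until the CHAR aggregation bug is fixed is to 
cast the CHAR column to VARCHAR before applying MIN/MAX. This is an untested 
sketch that assumes Phoenix's standard CAST syntax; whether it actually avoids 
the bug depends on where in the aggregation path the problem lies:

```sql
-- Hypothetical workaround (untested): aggregate over a VARCHAR cast of the
-- CHAR(2) column instead of the CHAR column itself.
SELECT MIN(CAST(VAL2 AS VARCHAR)) AS MIN_VAL2,
       MAX(CAST(VAL2 AS VARCHAR)) AS MAX_VAL2
FROM TT;
```

If the problem is specific to how MIN/MAX handle the fixed-width CHAR 
representation, this query should return '00' and '02', matching the results 
observed for the VARCHAR column VAL3.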



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)