[jira] [Resolved] (PHOENIX-7339) HBase flushes with custom clock needs to disable remote procedure delay

2024-06-24 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved PHOENIX-7339.
---
Resolution: Fixed

> HBase flushes with custom clock needs to disable remote procedure delay
> ---
>
> Key: PHOENIX-7339
> URL: https://issues.apache.org/jira/browse/PHOENIX-7339
> Project: Phoenix
>  Issue Type: Test
>Reporter: Istvan Toth
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 5.2.1, 5.3.0
>
>
> The Job takes ~3 hours with HBase 2.4, ~3.5 hours with 2.5, and is interrupted 
> after 5 hours with 2.6.
> While I did not see OOM errors, this could still be GC thrashing, as newer 
> HBase / Hadoop versions use more heap.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7339) HBase flushes with custom clock needs to disable remote procedure delay

2024-06-24 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7339:
--
Fix Version/s: 5.2.1
   5.3.0

> HBase flushes with custom clock needs to disable remote procedure delay
> ---
>
> Key: PHOENIX-7339
> URL: https://issues.apache.org/jira/browse/PHOENIX-7339
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Istvan Toth
>Priority: Major
> Fix For: 5.2.1, 5.3.0
>
>
> The Job takes ~3 hours with HBase 2.4, ~3.5 hours with 2.5, and is interrupted 
> after 5 hours with 2.6.
> While I did not see OOM errors, this could still be GC thrashing, as newer 
> HBase / Hadoop versions use more heap.





[jira] [Resolved] (PHOENIX-7318) Support JSON_MODIFY in Upserts

2024-06-24 Thread Ranganath Govardhanagiri (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranganath Govardhanagiri resolved PHOENIX-7318.
---
Resolution: Fixed

> Support JSON_MODIFY in Upserts
> --
>
> Key: PHOENIX-7318
> URL: https://issues.apache.org/jira/browse/PHOENIX-7318
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ranganath Govardhanagiri
>Assignee: Ranganath Govardhanagiri
>Priority: Major
> Fix For: 5.3.0
>
>
> The JSON_MODIFY implementation delivered as part of PHOENIX-7072 only supports 
> Atomic Upserts. This Jira is to support it in plain Upsert statements. An 
> initial POC had issues with auto commit, so this is a separate work item to 
> rethink the implementation.
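For context, a hedged sketch of the two forms (table, column, and path names are hypothetical, and the exact JSON_MODIFY signature is assumed from the PHOENIX-7072 work, not confirmed here):

```sql
-- Hypothetical schema; JSON_MODIFY(column, path, newValue) signature assumed.
-- Atomic upsert form already supported by PHOENIX-7072:
UPSERT INTO ORDERS (ID, DETAILS) VALUES (1, '{"qty": 1}')
    ON DUPLICATE KEY UPDATE DETAILS = JSON_MODIFY(DETAILS, '$.qty', '2');

-- Plain UPSERT form this Jira is about:
UPSERT INTO ORDERS (ID, DETAILS)
    SELECT ID, JSON_MODIFY(DETAILS, '$.qty', '2') FROM ORDERS WHERE ID = 1;
```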





[jira] [Resolved] (PHOENIX-7015) Extend UncoveredGlobalIndexRegionScanner for CDC region scanner usecase

2024-06-24 Thread Hari Krishna Dara (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Krishna Dara resolved PHOENIX-7015.

Resolution: Fixed

> Extend UncoveredGlobalIndexRegionScanner for CDC region scanner usecase
> ---
>
> Key: PHOENIX-7015
> URL: https://issues.apache.org/jira/browse/PHOENIX-7015
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Viraj Jasani
>Priority: Major
>
> For the CDC region scanner usecase, extend UncoveredGlobalIndexRegionScanner to 
> CDCUncoveredGlobalIndexRegionScanner. The new region scanner for CDC performs a 
> raw scan on the index table and retrieves data table rows from the index rows.
> Using the time range, it can form a JSON blob representing changes to the row, 
> including pre and/or post row images.





[jira] [Created] (PHOENIX-7345) Support for alternative indexing scheme for CDC

2024-06-24 Thread Hari Krishna Dara (Jira)
Hari Krishna Dara created PHOENIX-7345:
--

 Summary: Support for alternative indexing scheme for CDC
 Key: PHOENIX-7345
 URL: https://issues.apache.org/jira/browse/PHOENIX-7345
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Hari Krishna Dara


When a CDC table is created, an index is created on PHOENIX_ROW_TIMESTAMP(), 
which makes it possible to run range scans efficiently on the change timestamp. 
Since indexes always include the PK columns of the data table, additional 
filtering on the data table PK columns can also be done efficiently. However, a 
use case may require filtering based on a specific order of columns that 
includes both data and PK columns, so support for customizing the PK of the CDC 
index would be beneficial.
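As a hedged illustration (object names are hypothetical, and the DDL is sketched from the Phoenix CDC work; this issue proposes making the index PK customizable beyond the default):

```sql
-- Hypothetical names. Today the CDC index key is effectively
-- PHOENIX_ROW_TIMESTAMP() followed by the data table PK columns:
CREATE CDC EVENTS_CDC ON EVENTS;

-- Efficient today: range scan on the change timestamp,
-- optionally narrowed by data-table PK columns.
SELECT * FROM EVENTS_CDC
    WHERE PHOENIX_ROW_TIMESTAMP() > TIMESTAMP '2024-06-01 00:00:00';
```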





[jira] [Created] (PHOENIX-7344) Support for Dynamic Columns

2024-06-24 Thread Hari Krishna Dara (Jira)
Hari Krishna Dara created PHOENIX-7344:
--

 Summary: Support for Dynamic Columns
 Key: PHOENIX-7344
 URL: https://issues.apache.org/jira/browse/PHOENIX-7344
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Hari Krishna Dara


CDC recognizes changes only for columns with static metadata, which means 
Dynamic Columns are completely ignored. We need to extend the functionality so 
that SELECT queries on CDC objects also support Dynamic Columns.





[jira] [Created] (PHOENIX-7343) Support for complex types in CDC

2024-06-24 Thread Hari Krishna Dara (Jira)
Hari Krishna Dara created PHOENIX-7343:
--

 Summary: Support for complex types in CDC
 Key: PHOENIX-7343
 URL: https://issues.apache.org/jira/browse/PHOENIX-7343
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Hari Krishna Dara


Support for the two complex types, viz. ARRAY and JSON, needs to be added to 
CDC.





[jira] [Updated] (PHOENIX-7342) Optimize data table scan range based on the startRow/endRow from Scan

2024-06-24 Thread Hari Krishna Dara (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Krishna Dara updated PHOENIX-7342:
---
Description: When a time range is specified in a SELECT query on CDC, it is 
possible to optimize the scan on the data table by setting the time range.  (was: 
Currently CDC can be created to use an UNCOVERED global index, but it should be 
possible to make use of a LOCAL index as well. )
Summary: Optimize data table scan range based on the startRow/endRow 
from Scan  (was: Support for using a local index type)

> Optimize data table scan range based on the startRow/endRow from Scan
> -
>
> Key: PHOENIX-7342
> URL: https://issues.apache.org/jira/browse/PHOENIX-7342
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Hari Krishna Dara
>Priority: Minor
>
> When a time range is specified in a SELECT query on CDC, it is possible to 
> optimize the scan on the data table by setting the time range.





[jira] [Created] (PHOENIX-7342) Support for using a local index type

2024-06-24 Thread Hari Krishna Dara (Jira)
Hari Krishna Dara created PHOENIX-7342:
--

 Summary: Support for using a local index type
 Key: PHOENIX-7342
 URL: https://issues.apache.org/jira/browse/PHOENIX-7342
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Hari Krishna Dara


Currently CDC can be created to use an UNCOVERED global index, but it should be 
possible to make use of a LOCAL index as well. 





[jira] [Updated] (PHOENIX-7339) HBase flushes with custom clock needs to disable remote procedure delay

2024-06-21 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7339:
--
Summary: HBase flushes with custom clock needs to disable remote procedure 
delay  (was: Multibranch Jenkins Job takes more than 5 hours with Hbase 2.6)

> HBase flushes with custom clock needs to disable remote procedure delay
> ---
>
> Key: PHOENIX-7339
> URL: https://issues.apache.org/jira/browse/PHOENIX-7339
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Istvan Toth
>Priority: Major
>
> The Job takes ~3 hours with HBase 2.4, ~3.5 hours with 2.5, and is interrupted 
> after 5 hours with 2.6.
> While I did not see OOM errors, this could still be GC thrashing, as newer 
> HBase / Hadoop versions use more heap.





[jira] [Created] (PHOENIX-7341) Create test source artifacts in connectors

2024-06-21 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-7341:


 Summary: Create test source artifacts in connectors
 Key: PHOENIX-7341
 URL: https://issues.apache.org/jira/browse/PHOENIX-7341
 Project: Phoenix
  Issue Type: Improvement
  Components: connectors
Reporter: Istvan Toth
Assignee: Istvan Toth


For completeness, we should also package the test sources.
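One plausible way to do this, sketched as a hedged example (it assumes the connectors modules already configure maven-source-plugin; the execution id is arbitrary):

```xml
<!-- Attach test sources in addition to the main source jar.
     test-jar-no-fork avoids re-running the lifecycle. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-source-plugin</artifactId>
  <executions>
    <execution>
      <id>attach-test-sources</id>
      <goals>
        <goal>test-jar-no-fork</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```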





[jira] [Resolved] (PHOENIX-7325) Connectors does not create source jars

2024-06-21 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth resolved PHOENIX-7325.
--
Fix Version/s: connectors-6.0.0
 Assignee: Istvan Toth
   Resolution: Fixed

This is not an issue anymore in the current code.

> Connectors does not create source jars
> --
>
> Key: PHOENIX-7325
> URL: https://issues.apache.org/jira/browse/PHOENIX-7325
> Project: Phoenix
>  Issue Type: Improvement
>  Components: connectors
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: connectors-6.0.0
>
>
> When connectors is built, source jars are not built for at least some of the 
> packages.
> Make sure that all packages which have Java code generate a source jar.





[jira] [Created] (PHOENIX-7340) Support non-root context in REST RemoteHTable and RemoteAdmin

2024-06-21 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-7340:


 Summary: Support non-root context in REST RemoteHTable and 
RemoteAdmin
 Key: PHOENIX-7340
 URL: https://issues.apache.org/jira/browse/PHOENIX-7340
 Project: Phoenix
  Issue Type: Improvement
Reporter: Istvan Toth


RemoteHTable and RemoteAdmin expect the REST server to be deployed at the root 
context.

This is not the case when the REST server is accessed via a reverse proxy such 
as Apache Knox.

Make it possible to set the context path for those classes.





[jira] [Updated] (PHOENIX-7321) Rename PhoenixIndexBuilderHelper to AtomicUpsertHelper

2024-06-20 Thread Ranganath Govardhanagiri (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranganath Govardhanagiri updated PHOENIX-7321:
--
Description: While looking at 
[PhoenixIndexBuilderHelper|[phoenix/phoenix-core-client/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilderHelper.java
 at master · apache/phoenix 
(github.com)|https://github.com/apache/phoenix/blob/master/phoenix-core-client/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilderHelper.java]]
 I see the class only deals with On Duplicate Key related functionality. Maybe 
we need to rename it to AtomicUpsertHelper or something of that sort, as it 
currently doesn't provide any index-related help.  (was: While looking at 
[PhoenixIndexBuilderHelper|[phoenix/phoenix-core-client/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilderHelper.java
 at master · apache/phoenix 
(github.com)|https://github.com/apache/phoenix/blob/master/phoenix-core-client/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilderHelper.java]]
 I see the class only deals with On Duplicate Key related functionality. May be 
we need to rename this to OnDuplicateKeyHelper or something of that sort as it 
currently doesn't deal with any Index help.)

> Rename PhoenixIndexBuilderHelper to AtomicUpsertHelper
> --
>
> Key: PHOENIX-7321
> URL: https://issues.apache.org/jira/browse/PHOENIX-7321
> Project: Phoenix
>  Issue Type: Task
>  Components: core
>Reporter: Ranganath Govardhanagiri
>Assignee: Ranganath Govardhanagiri
>Priority: Minor
> Fix For: 5.3.0
>
>
> While looking at 
> [PhoenixIndexBuilderHelper|[phoenix/phoenix-core-client/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilderHelper.java
>  at master · apache/phoenix 
> (github.com)|https://github.com/apache/phoenix/blob/master/phoenix-core-client/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilderHelper.java]]
>  I see the class only deals with On Duplicate Key related functionality. 
> Maybe we need to rename it to AtomicUpsertHelper or something of that sort, 
> as it currently doesn't provide any index-related help.





[jira] [Updated] (PHOENIX-7321) Rename PhoenixIndexBuilderHelper to AtomicUpsertHelper

2024-06-20 Thread Ranganath Govardhanagiri (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranganath Govardhanagiri updated PHOENIX-7321:
--
Summary: Rename PhoenixIndexBuilderHelper to AtomicUpsertHelper  (was: 
Rename PhoenixIndexBuilderHelper to something different as it seems to handle 
only onDupKey logic)

> Rename PhoenixIndexBuilderHelper to AtomicUpsertHelper
> --
>
> Key: PHOENIX-7321
> URL: https://issues.apache.org/jira/browse/PHOENIX-7321
> Project: Phoenix
>  Issue Type: Task
>  Components: core
>Reporter: Ranganath Govardhanagiri
>Assignee: Ranganath Govardhanagiri
>Priority: Minor
> Fix For: 5.3.0
>
>
> While looking at 
> [PhoenixIndexBuilderHelper|[phoenix/phoenix-core-client/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilderHelper.java
>  at master · apache/phoenix 
> (github.com)|https://github.com/apache/phoenix/blob/master/phoenix-core-client/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilderHelper.java]]
>  I see the class only deals with On Duplicate Key related functionality. 
> Maybe we need to rename it to OnDuplicateKeyHelper or something of that sort, 
> as it currently doesn't provide any index-related help.





[jira] [Created] (PHOENIX-7339) Multibranch Jenkins Job takes more than 5 hours with Hbase 2.6

2024-06-20 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-7339:


 Summary: Multibranch Jenkins Job takes more than 5 hours with 
Hbase 2.6
 Key: PHOENIX-7339
 URL: https://issues.apache.org/jira/browse/PHOENIX-7339
 Project: Phoenix
  Issue Type: Bug
Reporter: Istvan Toth


The Job takes ~3 hours with HBase 2.4, ~3.5 hours with 2.5, and is interrupted 
after 5 hours with 2.6.

While I did not see OOM errors, this could still be GC thrashing, as newer 
HBase / Hadoop versions use more heap.





[jira] [Updated] (PHOENIX-7098) Document JSON functionality on the Phoenix Site

2024-06-19 Thread Ranganath Govardhanagiri (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranganath Govardhanagiri updated PHOENIX-7098:
--
Attachment: (was: JSON Support Documentation.jpeg)

> Document JSON functionality on the Phoenix Site
> ---
>
> Key: PHOENIX-7098
> URL: https://issues.apache.org/jira/browse/PHOENIX-7098
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ranganath Govardhanagiri
>Assignee: Ranganath Govardhanagiri
>Priority: Major
> Attachments: Adding_JSON_Support_documentation.patch, 
> Adding_JSON_Support_documentation_2.patch, 
> Adding_JSON_Support_documentation_3.patch, 
> Adding_JSON_Support_documentation_4.patch, JSON Support in Features dropdown 
> menu.png, Json Datatype info.png, Json Datatype.png, Json Functions 
> Index.png, Json Functions description.png, Json Support Documentation.png, 
> Json Support at the bottom.png
>
>
> Adding documentation for the JSON Support feature





[jira] [Updated] (PHOENIX-7098) Document JSON functionality on the Phoenix Site

2024-06-19 Thread Ranganath Govardhanagiri (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranganath Govardhanagiri updated PHOENIX-7098:
--
Attachment: Json Support Documentation.png

> Document JSON functionality on the Phoenix Site
> ---
>
> Key: PHOENIX-7098
> URL: https://issues.apache.org/jira/browse/PHOENIX-7098
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ranganath Govardhanagiri
>Assignee: Ranganath Govardhanagiri
>Priority: Major
> Attachments: Adding_JSON_Support_documentation.patch, 
> Adding_JSON_Support_documentation_2.patch, 
> Adding_JSON_Support_documentation_3.patch, 
> Adding_JSON_Support_documentation_4.patch, JSON Support in Features dropdown 
> menu.png, Json Datatype info.png, Json Datatype.png, Json Functions 
> Index.png, Json Functions description.png, Json Support Documentation.png, 
> Json Support at the bottom.png
>
>
> Adding documentation for the JSON Support feature





[jira] [Updated] (PHOENIX-7098) Document JSON functionality on the Phoenix Site

2024-06-19 Thread Ranganath Govardhanagiri (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranganath Govardhanagiri updated PHOENIX-7098:
--
Attachment: Json Functions Index.png

> Document JSON functionality on the Phoenix Site
> ---
>
> Key: PHOENIX-7098
> URL: https://issues.apache.org/jira/browse/PHOENIX-7098
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ranganath Govardhanagiri
>Assignee: Ranganath Govardhanagiri
>Priority: Major
> Attachments: Adding_JSON_Support_documentation.patch, 
> Adding_JSON_Support_documentation_2.patch, 
> Adding_JSON_Support_documentation_3.patch, 
> Adding_JSON_Support_documentation_4.patch, JSON Support Documentation.jpeg, 
> JSON Support in Features dropdown menu.png, Json Datatype info.png, Json 
> Datatype.png, Json Functions Index.png, Json Functions description.png, Json 
> Support at the bottom.png
>
>
> Adding documentation for the JSON Support feature





[jira] [Updated] (PHOENIX-7098) Document JSON functionality on the Phoenix Site

2024-06-19 Thread Ranganath Govardhanagiri (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranganath Govardhanagiri updated PHOENIX-7098:
--
Attachment: (was: JSON Functions Index.png)

> Document JSON functionality on the Phoenix Site
> ---
>
> Key: PHOENIX-7098
> URL: https://issues.apache.org/jira/browse/PHOENIX-7098
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ranganath Govardhanagiri
>Assignee: Ranganath Govardhanagiri
>Priority: Major
> Attachments: Adding_JSON_Support_documentation.patch, 
> Adding_JSON_Support_documentation_2.patch, 
> Adding_JSON_Support_documentation_3.patch, 
> Adding_JSON_Support_documentation_4.patch, JSON Support Documentation.jpeg, 
> JSON Support in Features dropdown menu.png, Json Datatype info.png, Json 
> Datatype.png, Json Functions Index.png, Json Functions description.png, Json 
> Support at the bottom.png
>
>
> Adding documentation for the JSON Support feature





[jira] [Updated] (PHOENIX-7098) Document JSON functionality on the Phoenix Site

2024-06-19 Thread Ranganath Govardhanagiri (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranganath Govardhanagiri updated PHOENIX-7098:
--
Attachment: Json Functions description.png

> Document JSON functionality on the Phoenix Site
> ---
>
> Key: PHOENIX-7098
> URL: https://issues.apache.org/jira/browse/PHOENIX-7098
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ranganath Govardhanagiri
>Assignee: Ranganath Govardhanagiri
>Priority: Major
> Attachments: Adding_JSON_Support_documentation.patch, 
> Adding_JSON_Support_documentation_2.patch, 
> Adding_JSON_Support_documentation_3.patch, 
> Adding_JSON_Support_documentation_4.patch, JSON Functions Index.png, JSON 
> Support Documentation.jpeg, JSON Support in Features dropdown menu.png, Json 
> Datatype info.png, Json Datatype.png, Json Functions description.png, Json 
> Support at the bottom.png
>
>
> Adding documentation for the JSON Support feature





[jira] [Updated] (PHOENIX-7098) Document JSON functionality on the Phoenix Site

2024-06-19 Thread Ranganath Govardhanagiri (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranganath Govardhanagiri updated PHOENIX-7098:
--
Attachment: (was: JSON Functions description.png)

> Document JSON functionality on the Phoenix Site
> ---
>
> Key: PHOENIX-7098
> URL: https://issues.apache.org/jira/browse/PHOENIX-7098
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ranganath Govardhanagiri
>Assignee: Ranganath Govardhanagiri
>Priority: Major
> Attachments: Adding_JSON_Support_documentation.patch, 
> Adding_JSON_Support_documentation_2.patch, 
> Adding_JSON_Support_documentation_3.patch, 
> Adding_JSON_Support_documentation_4.patch, JSON Functions Index.png, JSON 
> Support Documentation.jpeg, JSON Support in Features dropdown menu.png, Json 
> Datatype info.png, Json Datatype.png, Json Functions description.png, Json 
> Support at the bottom.png
>
>
> Adding documentation for the JSON Support feature





[jira] [Updated] (PHOENIX-7098) Document JSON functionality on the Phoenix Site

2024-06-19 Thread Ranganath Govardhanagiri (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranganath Govardhanagiri updated PHOENIX-7098:
--
Attachment: Adding_JSON_Support_documentation_4.patch

> Document JSON functionality on the Phoenix Site
> ---
>
> Key: PHOENIX-7098
> URL: https://issues.apache.org/jira/browse/PHOENIX-7098
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ranganath Govardhanagiri
>Assignee: Ranganath Govardhanagiri
>Priority: Major
> Attachments: Adding_JSON_Support_documentation.patch, 
> Adding_JSON_Support_documentation_2.patch, 
> Adding_JSON_Support_documentation_3.patch, 
> Adding_JSON_Support_documentation_4.patch, JSON Functions Index.png, JSON 
> Functions description.png, JSON Support Documentation.jpeg, JSON Support in 
> Features dropdown menu.png, Json Datatype info.png, Json Datatype.png, Json 
> Support at the bottom.png
>
>
> Adding documentation for the JSON Support feature





[jira] [Resolved] (PHOENIX-7303) fix CVE-2024-29025 in netty package

2024-06-19 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth resolved PHOENIX-7303.
--
Fix Version/s: 5.2.1
   5.3.0
   5.1.4
   Resolution: Fixed

Merged to all active branches.
Thank you [~nikitapande].

> fix CVE-2024-29025 in netty package
> ---
>
> Key: PHOENIX-7303
> URL: https://issues.apache.org/jira/browse/PHOENIX-7303
> Project: Phoenix
>  Issue Type: Improvement
>  Components: phoenix
>Reporter: Nikita Pande
>Assignee: Nikita Pande
>Priority: Major
> Fix For: 5.2.1, 5.3.0, 5.1.4
>
>
> [CVE-2024-29025|https://github.com/advisories/GHSA-5jpm-x58v-624v] is the CVE 
> for all netty-codec-http versions below 4.1.108.Final.
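A typical remediation, sketched here as a hedged example (the artifact coordinates and fixed version come from the advisory; where the pin lives in the Phoenix build is an assumption):

```xml
<!-- Pin netty-codec-http at the first release not affected by
     CVE-2024-29025 (4.1.108.Final or later). -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.netty</groupId>
      <artifactId>netty-codec-http</artifactId>
      <version>4.1.108.Final</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```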





[jira] [Resolved] (PHOENIX-7338) Phoenix CSV BulkloadTool fails with "Global tag is not allowed" error on transactional table

2024-06-19 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth resolved PHOENIX-7338.
--
Fix Version/s: 5.1.4
   Resolution: Fixed

Merged to 5.1.
Thanks for the patch [~nikitapande].

> Phoenix CSV BulkloadTool fails with "Global tag is not allowed" error on 
> transactional table
> 
>
> Key: PHOENIX-7338
> URL: https://issues.apache.org/jira/browse/PHOENIX-7338
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Nikita Pande
>Assignee: Nikita Pande
>Priority: Major
> Fix For: 5.1.4
>
>
> I get the following error when I invoke the CSV BulkloadTool for a 
> transactional table from the CLI:
> hbase org.apache.phoenix.mapreduce.CsvBulkLoadTool --table   table> --input 
> {code:java}
> org.apache.phoenix.mapreduce.CsvBulkLoadTool.main(CsvBulkLoadTool.java:117)
> Exception in thread "main" Global tag is not allowed: 
> tag:yaml.org,2002:org.apache.omid.tso.client.OmidClientConfiguration
>  in 'string', line 5, column 26:
>     omidClientConfiguration: !!org.apache.omid.tso.client.Omi ... 
>                              ^
>   at 
> org.yaml.snakeyaml.composer.Composer.composeSequenceNode(Composer.java:259)
>   at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:207)
>   at 
> org.yaml.snakeyaml.composer.Composer.composeValueNode(Composer.java:369)
>   at 
> org.yaml.snakeyaml.composer.Composer.composeMappingChildren(Composer.java:348)
>   at 
> org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:323)
>   at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:209)
>   at org.yaml.snakeyaml.composer.Composer.getNode(Composer.java:131)
>   at org.yaml.snakeyaml.composer.Composer.getSingleNode(Composer.java:157)
>   at 
> org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:178)
>   at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:493)
>   at org.yaml.snakeyaml.Yaml.loadAs(Yaml.java:473)
>   at 
> org.apache.phoenix.shaded.org.apache.omid.YAMLUtils.loadStringAsMap(YAMLUtils.java:87)
>   at 
> org.apache.phoenix.shaded.org.apache.omid.YAMLUtils.loadAsMap(YAMLUtils.java:75)
>   at 
> org.apache.phoenix.shaded.org.apache.omid.YAMLUtils.loadSettings(YAMLUtils.java:62)
>   at 
> org.apache.phoenix.shaded.org.apache.omid.YAMLUtils.loadSettings(YAMLUtils.java:45)
>   at 
> org.apache.phoenix.shaded.org.apache.omid.transaction.HBaseOmidClientConfiguration.(HBaseOmidClientConfiguration.java:71)
>   at 
> org.apache.phoenix.shaded.org.apache.omid.transaction.HBaseOmidClientConfiguration.(HBaseOmidClientConfiguration.java:58)
>   at 
> org.apache.phoenix.transaction.OmidTransactionProvider.getTransactionClient(OmidTransactionProvider.java:72)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.initTransactionClient(ConnectionQueryServicesImpl.java:5907)
>   at 
> org.apache.phoenix.transaction.OmidTransactionContext.(OmidTransactionContext.java:60)
>   at 
> org.apache.phoenix.transaction.OmidTransactionProvider.getTransactionContext(OmidTransactionProvider.java:65)
>   at 
> org.apache.phoenix.execute.MutationState.startTransaction(MutationState.java:390)
>   at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:613)
>   at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:547)
>   at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:777)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.(FromCompiler.java:447)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:232)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:210)
>   at org.apache.phoenix.util.ParseNodeUtil.rewrite(ParseNodeUtil.java:177)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:537)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:510)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:314)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:303)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery

[jira] [Assigned] (PHOENIX-7338) Phoenix CSV BulkloadTool fails with "Global tag is not allowed" error on transactional table

2024-06-19 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth reassigned PHOENIX-7338:


Assignee: Nikita Pande

> Phoenix CSV BulkloadTool fails with "Global tag is not allowed" error on 
> transactional table
> 
>
> Key: PHOENIX-7338
> URL: https://issues.apache.org/jira/browse/PHOENIX-7338
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Nikita Pande
>Assignee: Nikita Pande
>Priority: Major
>
> I get the following error when I invoke the CSV BulkloadTool for a 
> transactional table from the CLI:
> hbase org.apache.phoenix.mapreduce.CsvBulkLoadTool --table   table> --input 
> {code:java}
> org.apache.phoenix.mapreduce.CsvBulkLoadTool.main(CsvBulkLoadTool.java:117)
> Exception in thread "main" Global tag is not allowed: 
> tag:yaml.org,2002:org.apache.omid.tso.client.OmidClientConfiguration
>  in 'string', line 5, column 26:
>     omidClientConfiguration: !!org.apache.omid.tso.client.Omi ... 
>                              ^
>   at 
> org.yaml.snakeyaml.composer.Composer.composeSequenceNode(Composer.java:259)
>   at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:207)
>   at 
> org.yaml.snakeyaml.composer.Composer.composeValueNode(Composer.java:369)
>   at 
> org.yaml.snakeyaml.composer.Composer.composeMappingChildren(Composer.java:348)
>   at 
> org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:323)
>   at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:209)
>   at org.yaml.snakeyaml.composer.Composer.getNode(Composer.java:131)
>   at org.yaml.snakeyaml.composer.Composer.getSingleNode(Composer.java:157)
>   at 
> org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:178)
>   at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:493)
>   at org.yaml.snakeyaml.Yaml.loadAs(Yaml.java:473)
>   at 
> org.apache.phoenix.shaded.org.apache.omid.YAMLUtils.loadStringAsMap(YAMLUtils.java:87)
>   at 
> org.apache.phoenix.shaded.org.apache.omid.YAMLUtils.loadAsMap(YAMLUtils.java:75)
>   at 
> org.apache.phoenix.shaded.org.apache.omid.YAMLUtils.loadSettings(YAMLUtils.java:62)
>   at 
> org.apache.phoenix.shaded.org.apache.omid.YAMLUtils.loadSettings(YAMLUtils.java:45)
>   at 
> org.apache.phoenix.shaded.org.apache.omid.transaction.HBaseOmidClientConfiguration.(HBaseOmidClientConfiguration.java:71)
>   at 
> org.apache.phoenix.shaded.org.apache.omid.transaction.HBaseOmidClientConfiguration.(HBaseOmidClientConfiguration.java:58)
>   at 
> org.apache.phoenix.transaction.OmidTransactionProvider.getTransactionClient(OmidTransactionProvider.java:72)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.initTransactionClient(ConnectionQueryServicesImpl.java:5907)
>   at 
> org.apache.phoenix.transaction.OmidTransactionContext.(OmidTransactionContext.java:60)
>   at 
> org.apache.phoenix.transaction.OmidTransactionProvider.getTransactionContext(OmidTransactionProvider.java:65)
>   at 
> org.apache.phoenix.execute.MutationState.startTransaction(MutationState.java:390)
>   at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:613)
>   at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:547)
>   at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:777)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.(FromCompiler.java:447)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:232)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:210)
>   at org.apache.phoenix.util.ParseNodeUtil.rewrite(ParseNodeUtil.java:177)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:537)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:510)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:314)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:303)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:302)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(Pho

[jira] [Resolved] (PHOENIX-7336) Upgrade org.iq80.snappy:snappy version to 0.5

2024-06-19 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth resolved PHOENIX-7336.
--
Fix Version/s: 5.2.1
   5.3.0
   5.1.4
   Resolution: Fixed

Committed to all active branches.
Thanks for the review [~vjasani].

> Upgrade org.iq80.snappy:snappy version to 0.5
> -
>
> Key: PHOENIX-7336
> URL: https://issues.apache.org/jira/browse/PHOENIX-7336
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.2.1, 5.3.0, 5.1.4
>
>
> Based on dependabot.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7336) Upgrade org.iq80.snappy:snappy version to 0.5

2024-06-19 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-7336:
-
Description: Based on dependabot.

> Upgrade org.iq80.snappy:snappy version to 0.5
> -
>
> Key: PHOENIX-7336
> URL: https://issues.apache.org/jira/browse/PHOENIX-7336
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> Based on dependabot.





[jira] [Moved] (PHOENIX-7338) Phoenix CSV BulkloadTool fails with "Global tag is not allowed" error on transactional table

2024-06-19 Thread Rajeshbabu Chintaguntla (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla moved OMID-292 to PHOENIX-7338:
---

 Key: PHOENIX-7338  (was: OMID-292)
Workflow: no-reopen-closed, patch-avail  (was: jira)
 Project: Phoenix  (was: Phoenix Omid)

> Phoenix CSV BulkloadTool fails with "Global tag is not allowed" error on 
> transactional table
> 
>
> Key: PHOENIX-7338
> URL: https://issues.apache.org/jira/browse/PHOENIX-7338
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Nikita Pande
>Priority: Major
>
> I get the following error when I invoke CSV Bulkload for transactional table 
> CLI:
> hbase org.apache.phoenix.mapreduce.CsvBulkLoadTool --table <transactional table> --input 
> {code:java}
> org.apache.phoenix.mapreduce.CsvBulkLoadTool.main(CsvBulkLoadTool.java:117)
> Exception in thread "main" Global tag is not allowed: 
> tag:yaml.org,2002:org.apache.omid.tso.client.OmidClientConfiguration
>  in 'string', line 5, column 26:
>     omidClientConfiguration: !!org.apache.omid.tso.client.Omi ... 
>                              ^
>   at 
> org.yaml.snakeyaml.composer.Composer.composeSequenceNode(Composer.java:259)
>   at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:207)
>   at 
> org.yaml.snakeyaml.composer.Composer.composeValueNode(Composer.java:369)
>   at 
> org.yaml.snakeyaml.composer.Composer.composeMappingChildren(Composer.java:348)
>   at 
> org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:323)
>   at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:209)
>   at org.yaml.snakeyaml.composer.Composer.getNode(Composer.java:131)
>   at org.yaml.snakeyaml.composer.Composer.getSingleNode(Composer.java:157)
>   at 
> org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:178)
>   at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:493)
>   at org.yaml.snakeyaml.Yaml.loadAs(Yaml.java:473)
>   at 
> org.apache.phoenix.shaded.org.apache.omid.YAMLUtils.loadStringAsMap(YAMLUtils.java:87)
>   at 
> org.apache.phoenix.shaded.org.apache.omid.YAMLUtils.loadAsMap(YAMLUtils.java:75)
>   at 
> org.apache.phoenix.shaded.org.apache.omid.YAMLUtils.loadSettings(YAMLUtils.java:62)
>   at 
> org.apache.phoenix.shaded.org.apache.omid.YAMLUtils.loadSettings(YAMLUtils.java:45)
>   at 
> org.apache.phoenix.shaded.org.apache.omid.transaction.HBaseOmidClientConfiguration.<init>(HBaseOmidClientConfiguration.java:71)
>   at 
> org.apache.phoenix.shaded.org.apache.omid.transaction.HBaseOmidClientConfiguration.<init>(HBaseOmidClientConfiguration.java:58)
>   at 
> org.apache.phoenix.transaction.OmidTransactionProvider.getTransactionClient(OmidTransactionProvider.java:72)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.initTransactionClient(ConnectionQueryServicesImpl.java:5907)
>   at 
> org.apache.phoenix.transaction.OmidTransactionContext.<init>(OmidTransactionContext.java:60)
>   at 
> org.apache.phoenix.transaction.OmidTransactionProvider.getTransactionContext(OmidTransactionProvider.java:65)
>   at 
> org.apache.phoenix.execute.MutationState.startTransaction(MutationState.java:390)
>   at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:613)
>   at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:547)
>   at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:777)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:447)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:232)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:210)
>   at org.apache.phoenix.util.ParseNodeUtil.rewrite(ParseNodeUtil.java:177)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:537)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:510)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:314)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:303)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixS

[jira] [Updated] (PHOENIX-7333) Add HBase 2.6 profile to multibranch Jenkins job

2024-06-19 Thread Jira


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richárd Antal updated PHOENIX-7333:
---
Fix Version/s: 5.2.1

> Add HBase 2.6 profile to multibranch Jenkins job
> 
>
> Key: PHOENIX-7333
> URL: https://issues.apache.org/jira/browse/PHOENIX-7333
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Reporter: Istvan Toth
>Assignee: Richárd Antal
>Priority: Minor
>  Labels: test
> Fix For: 5.2.1, 5.3.0
>
>






[jira] [Updated] (PHOENIX-7172) Support HBase 2.6

2024-06-19 Thread Jira


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richárd Antal updated PHOENIX-7172:
---
Fix Version/s: 5.2.1

> Support HBase 2.6
> -
>
> Key: PHOENIX-7172
> URL: https://issues.apache.org/jira/browse/PHOENIX-7172
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.2.0, 5.1.3
>Reporter: Istvan Toth
>Assignee: Richárd Antal
>Priority: Major
> Fix For: 5.2.1, 5.3.0
>
>
> HBase 2.6.0 release work is ongoing.
> Make sure Phoenix works with it.





[jira] [Resolved] (PHOENIX-7335) Bump Phoenix version to 5.2.1-SNAPSHOT

2024-06-19 Thread Jira


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richárd Antal resolved PHOENIX-7335.

Fix Version/s: 5.2.1
 Assignee: Richárd Antal
   Resolution: Fixed

>  Bump Phoenix version to 5.2.1-SNAPSHOT
> ---
>
> Key: PHOENIX-7335
> URL: https://issues.apache.org/jira/browse/PHOENIX-7335
> Project: Phoenix
>  Issue Type: Task
>Reporter: Richárd Antal
>Assignee: Richárd Antal
>Priority: Major
> Fix For: 5.2.1
>
>






[jira] [Commented] (OMID-292) Phoenix CSV BulkloadTool fails with "Global tag is not allowed" error on transactional table

2024-06-19 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/OMID-292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17856263#comment-17856263
 ] 

Istvan Toth commented on OMID-292:
--

Please move this ticket to the PHOENIX project, and rename patch description in 
the PR, [~nikitapande].

> Phoenix CSV BulkloadTool fails with "Global tag is not allowed" error on 
> transactional table
> 
>
> Key: OMID-292
> URL: https://issues.apache.org/jira/browse/OMID-292
> Project: Phoenix Omid
>  Issue Type: Bug
>Reporter: Nikita Pande
>Priority: Major
>
> I get the following error when I invoke CSV Bulkload for transactional table 
> CLI:
> hbase org.apache.phoenix.mapreduce.CsvBulkLoadTool --table <transactional table> --input 
> {code:java}
> org.apache.phoenix.mapreduce.CsvBulkLoadTool.main(CsvBulkLoadTool.java:117)
> Exception in thread "main" Global tag is not allowed: 
> tag:yaml.org,2002:org.apache.omid.tso.client.OmidClientConfiguration
>  in 'string', line 5, column 26:
>     omidClientConfiguration: !!org.apache.omid.tso.client.Omi ... 
>                              ^
>   at 
> org.yaml.snakeyaml.composer.Composer.composeSequenceNode(Composer.java:259)
>   at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:207)
>   at 
> org.yaml.snakeyaml.composer.Composer.composeValueNode(Composer.java:369)
>   at 
> org.yaml.snakeyaml.composer.Composer.composeMappingChildren(Composer.java:348)
>   at 
> org.yaml.snakeyaml.composer.Composer.composeMappingNode(Composer.java:323)
>   at org.yaml.snakeyaml.composer.Composer.composeNode(Composer.java:209)
>   at org.yaml.snakeyaml.composer.Composer.getNode(Composer.java:131)
>   at org.yaml.snakeyaml.composer.Composer.getSingleNode(Composer.java:157)
>   at 
> org.yaml.snakeyaml.constructor.BaseConstructor.getSingleData(BaseConstructor.java:178)
>   at org.yaml.snakeyaml.Yaml.loadFromReader(Yaml.java:493)
>   at org.yaml.snakeyaml.Yaml.loadAs(Yaml.java:473)
>   at 
> org.apache.phoenix.shaded.org.apache.omid.YAMLUtils.loadStringAsMap(YAMLUtils.java:87)
>   at 
> org.apache.phoenix.shaded.org.apache.omid.YAMLUtils.loadAsMap(YAMLUtils.java:75)
>   at 
> org.apache.phoenix.shaded.org.apache.omid.YAMLUtils.loadSettings(YAMLUtils.java:62)
>   at 
> org.apache.phoenix.shaded.org.apache.omid.YAMLUtils.loadSettings(YAMLUtils.java:45)
>   at 
> org.apache.phoenix.shaded.org.apache.omid.transaction.HBaseOmidClientConfiguration.<init>(HBaseOmidClientConfiguration.java:71)
>   at 
> org.apache.phoenix.shaded.org.apache.omid.transaction.HBaseOmidClientConfiguration.<init>(HBaseOmidClientConfiguration.java:58)
>   at 
> org.apache.phoenix.transaction.OmidTransactionProvider.getTransactionClient(OmidTransactionProvider.java:72)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.initTransactionClient(ConnectionQueryServicesImpl.java:5907)
>   at 
> org.apache.phoenix.transaction.OmidTransactionContext.<init>(OmidTransactionContext.java:60)
>   at 
> org.apache.phoenix.transaction.OmidTransactionProvider.getTransactionContext(OmidTransactionProvider.java:65)
>   at 
> org.apache.phoenix.execute.MutationState.startTransaction(MutationState.java:390)
>   at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:613)
>   at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:547)
>   at 
> org.apache.phoenix.compile.FromCompiler$BaseColumnResolver.createTableRef(FromCompiler.java:777)
>   at 
> org.apache.phoenix.compile.FromCompiler$SingleTableColumnResolver.<init>(FromCompiler.java:447)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:232)
>   at 
> org.apache.phoenix.compile.FromCompiler.getResolverForQuery(FromCompiler.java:210)
>   at org.apache.phoenix.util.ParseNodeUtil.rewrite(ParseNodeUtil.java:177)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:537)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableSelectStatement.compilePlan(PhoenixStatement.java:510)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:314)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:303)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:302)
>   at 
>

[jira] [Assigned] (PHOENIX-7337) Centralize and upgrade com.jayway.jsonpath:json-path version from 2.6.0 to 2.9.0

2024-06-19 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth reassigned PHOENIX-7337:


Assignee: Istvan Toth

> Centralize and upgrade com.jayway.jsonpath:json-path version from 2.6.0 to 
> 2.9.0
> 
>
> Key: PHOENIX-7337
> URL: https://issues.apache.org/jira/browse/PHOENIX-7337
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> Based on dependabot.





[jira] [Updated] (PHOENIX-7337) Centralize and upgrade com.jayway.jsonpath:json-path version from 2.6.0 to 2.9.0

2024-06-19 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-7337:
-
Description: 
Based on dependabot.


> Centralize and upgrade com.jayway.jsonpath:json-path version from 2.6.0 to 
> 2.9.0
> 
>
> Key: PHOENIX-7337
> URL: https://issues.apache.org/jira/browse/PHOENIX-7337
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Reporter: Istvan Toth
>Priority: Major
>
> Based on dependabot.





[jira] [Created] (PHOENIX-7337) Centralize and upgrade com.jayway.jsonpath:json-path version from 2.6.0 to 2.9.0

2024-06-19 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-7337:


 Summary: Centralize and upgrade com.jayway.jsonpath:json-path 
version from 2.6.0 to 2.9.0
 Key: PHOENIX-7337
 URL: https://issues.apache.org/jira/browse/PHOENIX-7337
 Project: Phoenix
  Issue Type: Bug
  Components: core
Reporter: Istvan Toth








[jira] [Updated] (PHOENIX-7130) Support skipping of shade sources jar creation

2024-06-19 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-7130:
-
Fix Version/s: 5.1.4

> Support skipping of shade sources jar creation
> --
>
> Key: PHOENIX-7130
> URL: https://issues.apache.org/jira/browse/PHOENIX-7130
> Project: Phoenix
>  Issue Type: Improvement
>  Components: phoenix
>Affects Versions: 5.2.1, 5.3.0
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Minor
>  Labels: build
> Fix For: 5.2.1, 5.3.0, 5.1.4
>
>
> Shade sources jar creation takes a lot of time, and we do not want to do this 
> for every dev build (in our internal phoenix jenkins). Hence, with this Jira, we 
> will add a profile to optionally disable shade sources jar creation by running 
> with '-PskipShadeSources'.





[jira] [Updated] (PHOENIX-7130) Support skipping of shade sources jar creation

2024-06-19 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-7130:
-
Affects Version/s: 5.2.1
   5.3.0

> Support skipping of shade sources jar creation
> --
>
> Key: PHOENIX-7130
> URL: https://issues.apache.org/jira/browse/PHOENIX-7130
> Project: Phoenix
>  Issue Type: Improvement
>  Components: phoenix
>Affects Versions: 5.2.1, 5.3.0
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Minor
>  Labels: build
>
> Shade sources jar creation takes a lot of time, and we do not want to do this 
> for every dev build (in our internal phoenix jenkins). Hence, with this Jira, we 
> will add a profile to optionally disable shade sources jar creation by running 
> with '-PskipShadeSources'.





[jira] [Created] (PHOENIX-7336) Upgrade org.iq80.snappy:snappy version to 0.5

2024-06-19 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-7336:


 Summary: Upgrade org.iq80.snappy:snappy version to 0.5
 Key: PHOENIX-7336
 URL: https://issues.apache.org/jira/browse/PHOENIX-7336
 Project: Phoenix
  Issue Type: Bug
Reporter: Istvan Toth
Assignee: Istvan Toth








[jira] [Assigned] (PHOENIX-7332) Upsert Select with Array Projections with Autocommit set to true fails

2024-06-19 Thread Ranganath Govardhanagiri (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranganath Govardhanagiri reassigned PHOENIX-7332:
-

Assignee: (was: Ranganath Govardhanagiri)

> Upsert Select with Array Projections with Autocommit set to true fails
> --
>
> Key: PHOENIX-7332
> URL: https://issues.apache.org/jira/browse/PHOENIX-7332
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ranganath Govardhanagiri
>Priority: Major
> Fix For: 5.3.0
>
>
> {code:java}
> @Test
> public void testServerArrayElementProjectionFailure() throws SQLException {
> Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
> Connection conn = DriverManager.getConnection(getUrl(), props);
> conn.setAutoCommit(true);
> String table = generateUniqueName();
> String ddl = "CREATE TABLE   " + table + "  (p INTEGER PRIMARY KEY, col1 
> INTEGER, arr1 INTEGER ARRAY, arr2 VARCHAR ARRAY)";
> conn.createStatement().execute(ddl);
> conn.close();
> conn = DriverManager.getConnection(getUrl(), props);
> PreparedStatement stmt = conn.prepareStatement("UPSERT INTO   " + table + "  
> VALUES (1,0, ARRAY[1, 2], ARRAY['a', 'b'])");
> stmt.execute();
> conn.commit();
> conn.close();
> conn = DriverManager.getConnection(getUrl(), props);
> conn.setAutoCommit(true);
> ResultSet rs;
> stmt = conn.prepareStatement("UPSERT INTO   " + table + "(p,col1) SELECT  p, 
> arr1[1] FROM   " + table);
> stmt.execute(); 
> }{code}
> Executing the above IT will fail with the below error.
> {code:java}
> java.lang.IllegalArgumentException: No ExpressionType for class 
> org.apache.phoenix.compile.ProjectionCompiler$ArrayIndexExpression
>   at 
> org.apache.phoenix.expression.ExpressionType.valueOf(ExpressionType.java:226) 
>at 
> org.apache.phoenix.coprocessorclient.UngroupedAggregateRegionObserverHelper.serialize(UngroupedAggregateRegionObserverHelper.java:45)
> at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:805) 
> {code}
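The "No ExpressionType for class" failure above comes from Phoenix's expression serialization: before an expression is shipped to the server, it is mapped to a fixed ExpressionType code, and ProjectionCompiler$ArrayIndexExpression has no such mapping. A minimal stand-alone sketch of that registry pattern (ExprRegistry and the toy expression classes are hypothetical stand-ins, not Phoenix classes):

```java
import java.util.Map;

// Sketch of an enum/registry-based serializer: only classes present in the
// registry can be encoded, so an unregistered expression class fails exactly
// like ExpressionType.valueOf does in the stack trace above.
final class ExprRegistry {
    static class LiteralExpr {}
    static class ArrayIndexExpr {}   // imagine this one was never registered

    private static final Map<Class<?>, Integer> CODES = Map.of(LiteralExpr.class, 1);

    static int codeOf(Object expr) {
        Integer code = CODES.get(expr.getClass());
        if (code == null) {
            throw new IllegalArgumentException(
                "No ExpressionType for class " + expr.getClass().getName());
        }
        return code;
    }

    public static void main(String[] args) {
        System.out.println(codeOf(new LiteralExpr()));   // prints 1
        try {
            codeOf(new ArrayIndexExpr());                // not registered
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

This is only an illustration of why the error is raised; the actual fix would register the array-index expression (or rewrite it to a registered form) before server-side serialization.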





[jira] [Assigned] (PHOENIX-7331) Fix incompatibilities with HBASE-28644

2024-06-19 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth reassigned PHOENIX-7331:


Assignee: Istvan Toth

> Fix incompatibilities with HBASE-28644
> --
>
> Key: PHOENIX-7331
> URL: https://issues.apache.org/jira/browse/PHOENIX-7331
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Critical
>
> These are the errors:
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.11.0:compile 
> (default-compile) on project phoenix-core-client: Compilation failure: 
> Compilation failure:
> [ERROR] 
> /home/stoty/workspaces/apache-phoenix/phoenix/phoenix-core-client/src/main/java/org/apache/phoenix/util/PhoenixKeyValueUtil.java:[262,93]
>  incompatible types: java.util.List<Cell> cannot be 
> converted to java.util.List<ExtendedCell>
> [ERROR] 
> /home/stoty/workspaces/apache-phoenix/phoenix/phoenix-core-client/src/main/java/org/apache/phoenix/hbase/index/util/IndexManagementUtil.java:[248,69]
>  incompatible types: java.util.List<Cell> cannot be 
> converted to java.util.List<ExtendedCell>
> In IndexManagementUtil we can simply change the signature to Cell.
> In PhoenixKeyValueUtil, we need to check for ExtendedCell, and clone the cell 
> if it is not one.
> I'm pretty sure that there is already a utility method somewhere for this.
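The check-and-clone approach described above can be sketched as follows (the Cell and ExtendedCell interfaces here are minimal hypothetical stand-ins for the real ones in org.apache.hadoop.hbase, so the shapes and method names are assumptions):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-ins for the HBase interfaces involved (hypothetical).
interface Cell { byte[] getValueArray(); }
interface ExtendedCell extends Cell { }

final class CellUtilSketch {
    // A simple ExtendedCell backed by a copied value array.
    static final class SimpleExtendedCell implements ExtendedCell {
        private final byte[] value;
        SimpleExtendedCell(byte[] value) { this.value = value.clone(); }
        public byte[] getValueArray() { return value; }
    }

    // The pattern from the ticket: pass through cells that already implement
    // ExtendedCell, and clone the rest into an implementation that does.
    static ExtendedCell ensureExtended(Cell cell) {
        if (cell instanceof ExtendedCell) {
            return (ExtendedCell) cell;
        }
        return new SimpleExtendedCell(cell.getValueArray());
    }

    static List<ExtendedCell> ensureExtended(List<Cell> cells) {
        List<ExtendedCell> out = new ArrayList<>(cells.size());
        for (Cell c : cells) {
            out.add(ensureExtended(c));
        }
        return out;
    }

    public static void main(String[] args) {
        Cell plain = () -> new byte[] {1, 2, 3};
        ExtendedCell extended = ensureExtended(plain);
        // An ExtendedCell input is returned unchanged; a plain Cell is wrapped.
        System.out.println(ensureExtended((Cell) extended) == extended); // true
    }
}
```

The real code would presumably reuse an existing HBase utility for the clone instead of a hand-rolled wrapper class.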





[jira] [Assigned] (PHOENIX-7333) Add HBase 2.6 profile to multibranch Jenkins job

2024-06-19 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth reassigned PHOENIX-7333:


Assignee: Richárd Antal  (was: Istvan Toth)

> Add HBase 2.6 profile to multibranch Jenkins job
> 
>
> Key: PHOENIX-7333
> URL: https://issues.apache.org/jira/browse/PHOENIX-7333
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Reporter: Istvan Toth
>Assignee: Richárd Antal
>Priority: Minor
>  Labels: test
> Fix For: 5.3.0
>
>






[jira] [Updated] (PHOENIX-7333) Add HBase 2.6 profile to multibranch Jenkins job

2024-06-19 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-7333:
-
Fix Version/s: 5.3.0

> Add HBase 2.6 profile to multibranch Jenkins job
> 
>
> Key: PHOENIX-7333
> URL: https://issues.apache.org/jira/browse/PHOENIX-7333
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Minor
>  Labels: test
> Fix For: 5.3.0
>
>






[jira] [Assigned] (PHOENIX-7334) Do not rebuild HBase in Jenkins for 2.5+ profile

2024-06-19 Thread Jira


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richárd Antal reassigned PHOENIX-7334:
--

Assignee: Istvan Toth

> Do not rebuild HBase in Jenkins for 2.5+ profile
> 
>
> Key: PHOENIX-7334
> URL: https://issues.apache.org/jira/browse/PHOENIX-7334
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Minor
>
> We waste resources rebuilding HBase 2.5+ when it's neither needed nor used.
> We could handle this either in the Jenkins jobs or the rebuild script.





[jira] [Resolved] (PHOENIX-7334) Do not rebuild HBase in Jenkins for 2.5+ profile

2024-06-19 Thread Jira


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richárd Antal resolved PHOENIX-7334.

Resolution: Fixed

> Do not rebuild HBase in Jenkins for 2.5+ profile
> 
>
> Key: PHOENIX-7334
> URL: https://issues.apache.org/jira/browse/PHOENIX-7334
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Reporter: Istvan Toth
>Priority: Minor
>
> We waste resources rebuilding HBase 2.5+ when it's neither needed nor used.
> We could handle this either in the Jenkins jobs or the rebuild script.





[jira] [Assigned] (PHOENIX-7332) Upsert Select with Array Projections with Autocommit set to true fails

2024-06-19 Thread Ranganath Govardhanagiri (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranganath Govardhanagiri reassigned PHOENIX-7332:
-

Assignee: Ranganath Govardhanagiri

> Upsert Select with Array Projections with Autocommit set to true fails
> --
>
> Key: PHOENIX-7332
> URL: https://issues.apache.org/jira/browse/PHOENIX-7332
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ranganath Govardhanagiri
>Assignee: Ranganath Govardhanagiri
>Priority: Major
> Fix For: 5.3.0
>
>
> {code:java}
> @Test
> public void testServerArrayElementProjectionFailure() throws SQLException {
> Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
> Connection conn = DriverManager.getConnection(getUrl(), props);
> conn.setAutoCommit(true);
> String table = generateUniqueName();
> String ddl = "CREATE TABLE   " + table + "  (p INTEGER PRIMARY KEY, col1 
> INTEGER, arr1 INTEGER ARRAY, arr2 VARCHAR ARRAY)";
> conn.createStatement().execute(ddl);
> conn.close();
> conn = DriverManager.getConnection(getUrl(), props);
> PreparedStatement stmt = conn.prepareStatement("UPSERT INTO   " + table + "  
> VALUES (1,0, ARRAY[1, 2], ARRAY['a', 'b'])");
> stmt.execute();
> conn.commit();
> conn.close();
> conn = DriverManager.getConnection(getUrl(), props);
> conn.setAutoCommit(true);
> ResultSet rs;
> stmt = conn.prepareStatement("UPSERT INTO   " + table + "(p,col1) SELECT  p, 
> arr1[1] FROM   " + table);
> stmt.execute(); 
> }{code}
> Executing the above IT will fail with the below error.
> {code:java}
> java.lang.IllegalArgumentException: No ExpressionType for class 
> org.apache.phoenix.compile.ProjectionCompiler$ArrayIndexExpression
>   at 
> org.apache.phoenix.expression.ExpressionType.valueOf(ExpressionType.java:226) 
>at 
> org.apache.phoenix.coprocessorclient.UngroupedAggregateRegionObserverHelper.serialize(UngroupedAggregateRegionObserverHelper.java:45)
> at 
> org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:805) 
> {code}





[jira] [Created] (PHOENIX-7335) Bump Phoenix version to 5.2.1-SNAPSHOT

2024-06-19 Thread Jira
Richárd Antal created PHOENIX-7335:
--

 Summary:  Bump Phoenix version to 5.2.1-SNAPSHOT
 Key: PHOENIX-7335
 URL: https://issues.apache.org/jira/browse/PHOENIX-7335
 Project: Phoenix
  Issue Type: Task
Reporter: Richárd Antal








[jira] [Created] (PHOENIX-7334) Do not rebuild HBase in Jenkins for 2.5+ profile

2024-06-19 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-7334:


 Summary: Do not rebuild HBase in Jenkins for 2.5+ profile
 Key: PHOENIX-7334
 URL: https://issues.apache.org/jira/browse/PHOENIX-7334
 Project: Phoenix
  Issue Type: Improvement
  Components: core
Reporter: Istvan Toth


We waste resources rebuilding HBase 2.5+ when it's neither needed nor used.

We could handle this either in the Jenkins jobs or the rebuild script.





[jira] [Assigned] (PHOENIX-7333) Add HBase 2.6 profile to multibranch Jenkins job

2024-06-19 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth reassigned PHOENIX-7333:


Assignee: Istvan Toth

> Add HBase 2.6 profile to multibranch Jenkins job
> 
>
> Key: PHOENIX-7333
> URL: https://issues.apache.org/jira/browse/PHOENIX-7333
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Minor
>  Labels: test
>






[jira] [Updated] (PHOENIX-7333) Add HBase 2.6 profile to multibranch Jenkins job

2024-06-19 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-7333:
-
Labels: test  (was: )

> Add HBase 2.6 profile to multibranch Jenkins job
> 
>
> Key: PHOENIX-7333
> URL: https://issues.apache.org/jira/browse/PHOENIX-7333
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Reporter: Istvan Toth
>Priority: Minor
>  Labels: test
>






[jira] [Updated] (PHOENIX-7333) Add HBase 2.6 profile to multibranch Jenkins job

2024-06-19 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-7333:
-
Priority: Minor  (was: Major)

> Add HBase 2.6 profile to multibranch Jenkins job
> 
>
> Key: PHOENIX-7333
> URL: https://issues.apache.org/jira/browse/PHOENIX-7333
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Reporter: Istvan Toth
>Priority: Minor
>






[jira] [Created] (PHOENIX-7333) Add HBase 2.6 profile to multibranch Jenkins job

2024-06-19 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-7333:


 Summary: Add HBase 2.6 profile to multibranch Jenkins job
 Key: PHOENIX-7333
 URL: https://issues.apache.org/jira/browse/PHOENIX-7333
 Project: Phoenix
  Issue Type: Improvement
  Components: core
Reporter: Istvan Toth








[jira] [Created] (PHOENIX-7332) Upsert Select with Array Projections with Autocommit set to true fails

2024-06-18 Thread Ranganath Govardhanagiri (Jira)
Ranganath Govardhanagiri created PHOENIX-7332:
-

 Summary: Upsert Select with Array Projections with Autocommit set 
to true fails
 Key: PHOENIX-7332
 URL: https://issues.apache.org/jira/browse/PHOENIX-7332
 Project: Phoenix
  Issue Type: Bug
Reporter: Ranganath Govardhanagiri
 Fix For: 5.3.0


{code:java}
@Test
public void testServerArrayElementProjectionFailure() throws SQLException {
Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
Connection conn = DriverManager.getConnection(getUrl(), props);
conn.setAutoCommit(true);
String table = generateUniqueName();
String ddl = "CREATE TABLE   " + table + "  (p INTEGER PRIMARY KEY, col1 
INTEGER, arr1 INTEGER ARRAY, arr2 VARCHAR ARRAY)";
conn.createStatement().execute(ddl);
conn.close();

conn = DriverManager.getConnection(getUrl(), props);
PreparedStatement stmt = conn.prepareStatement("UPSERT INTO   " + table + "  
VALUES (1,0, ARRAY[1, 2], ARRAY['a', 'b'])");
stmt.execute();
conn.commit();
conn.close();

conn = DriverManager.getConnection(getUrl(), props);
conn.setAutoCommit(true);
ResultSet rs;
stmt = conn.prepareStatement("UPSERT INTO   " + table + "(p,col1) SELECT  p, 
arr1[1] FROM   " + table);
stmt.execute(); 
}{code}
Executing the above IT will fail with the below error.
{code:java}
java.lang.IllegalArgumentException: No ExpressionType for class 
org.apache.phoenix.compile.ProjectionCompiler$ArrayIndexExpression
at 
org.apache.phoenix.expression.ExpressionType.valueOf(ExpressionType.java:226)   
 at 
org.apache.phoenix.coprocessorclient.UngroupedAggregateRegionObserverHelper.serialize(UngroupedAggregateRegionObserverHelper.java:45)
at 
org.apache.phoenix.compile.UpsertCompiler.compile(UpsertCompiler.java:805) 
{code}





[jira] [Updated] (PHOENIX-7172) Support HBase 2.6

2024-06-18 Thread Jira


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richárd Antal updated PHOENIX-7172:
---
Fix Version/s: 5.3.0

> Support HBase 2.6
> -
>
> Key: PHOENIX-7172
> URL: https://issues.apache.org/jira/browse/PHOENIX-7172
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.2.0, 5.1.3
>Reporter: Istvan Toth
>Assignee: Richárd Antal
>Priority: Major
> Fix For: 5.3.0
>
>
> HBase 2.6.0 release work is ongoing.
> Make sure Phoenix works with it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-7172) Support HBase 2.6

2024-06-18 Thread Jira


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richárd Antal resolved PHOENIX-7172.

Resolution: Fixed

> Support HBase 2.6
> -
>
> Key: PHOENIX-7172
> URL: https://issues.apache.org/jira/browse/PHOENIX-7172
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.2.0, 5.1.3
>Reporter: Istvan Toth
>Assignee: Richárd Antal
>Priority: Major
> Fix For: 5.3.0
>
>
> HBase 2.6.0 release work is ongoing.
> Make sure Phoenix works with it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-7001) Change Data Capture leveraging Max Lookback and Uncovered Indexes

2024-06-18 Thread Hari Krishna Dara (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Krishna Dara resolved PHOENIX-7001.

Release Note: 
Change Data Capture (CDC) is a feature designed to capture changes to tables or 
updatable views in near real-time. This new functionality supports various use 
cases, including:
* Real-Time Change Retrieval: Capture and retrieve changes as they happen or 
with minimal delay.
* Flexible Time Range Queries: Perform queries based on specific time ranges, 
typically short periods such as the last few minutes, hours, or the last few 
days.
* Comprehensive Change Tracking: Track all types of changes including 
insertions, updates, and deletions. Note that CDC does not differentiate 
between inserts and updates due to Phoenix’s handling of new versus existing 
rows.

Key features of the CDC include:
* Ordered Change Delivery: Changes are delivered in the order they arrive, 
ensuring the sequence of events is maintained.
* Streamlined Integration: Changes can be visualized and delivered to 
applications similarly to how Phoenix query results are retrieved, but with 
enhancements to support multiple results for each row and inclusion of deleted 
rows.
* Detailed Change Information: Optionally capture pre and post-change images of 
rows to provide a complete picture of modifications.

This enhancement empowers applications to maintain an accurate and timely 
reflection of database changes, supporting a wide array of real-time data 
processing and monitoring scenarios.
  Resolution: Fixed

> Change Data Capture leveraging Max Lookback and Uncovered Indexes
> -
>
> Key: PHOENIX-7001
> URL: https://issues.apache.org/jira/browse/PHOENIX-7001
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Kadir Ozdemir
>Priority: Major
>
> The use cases for a Change Data Capture (CDC) feature are centered around 
> capturing changes to a given table (or updatable view) as these changes 
> happen in near real-time. A CDC application can retrieve changes in real-time 
> or with some delay, or even retrieve the same set of changes multiple times. 
> This means the CDC use case can be generalized as time range queries where 
> the time range is typically short such as last x minutes or hours or 
> expressed as a specific time range in the last n days where n is typically 
> less than 7.
> A change is an update in a row. That is, a change is either updating one or 
> more columns of a table for a given row or deleting a row. It is desirable to 
> provide these changes in the order of their arrival. One can visualize the 
> delivery of these changes through a stream from a Phoenix table to the 
> application that is initiated by the application similar to the delivery of 
> any other Phoenix query results. The difference is that a regular query 
> result includes at most one result row for each row satisfying the query and 
> the deleted rows are not visible to the query result while the CDC 
> stream/result can include multiple result rows for each row and the result 
> includes deleted rows. Some use cases need to also get the pre and/or post 
> image of the row along with a change on the row. 
> The design proposed here leverages Phoenix Max Lookback and Uncovered Global 
> Indexes. The max lookback feature retains recent changes to a table, that is, 
> the changes that have been done in the last x days typically. This means that 
> the max lookback feature already captures the changes to a given table. 
> Currently, the max lookback age is configurable at the cluster level. We need 
> to extend this capability to be able to configure the max lookback age at the 
> table level so that each table can have a different max lookback age based on 
> its CDC application requirements.
> To deliver the changes in the order of their arrival, we need a time based 
> index. This index should be uncovered as the changes are already retained in 
> the table by the max lookback feature. The arrival time will be defined as 
> the mutation timestamp generated by the server. An uncovered index would 
> allow us to efficiently and orderly access to the changes. Changes to an 
> index table are also preserved by the max lookback feature.
> A CDC feature can be composed of the following components:
>  * {*}CDCUncoveredIndexRegionScanner{*}: This is a server side scanner on an 
> uncovered index used for CDC. This can inherit UncoveredIndexRegionScanner. 
> It goes through index table rows using a raw scan to identify data table rows 
> and retrieves these rows using a raw scan. Using the time range, it forms a 
> JSON blob to re

[jira] [Assigned] (PHOENIX-6066) MetaDataEndpointImpl.doGetTable should acquire a readLock instead of an exclusive writeLock on the table header row

2024-06-17 Thread Palash Chauhan (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Palash Chauhan reassigned PHOENIX-6066:
---

Assignee: Palash Chauhan  (was: Lokesh Khurana)

> MetaDataEndpointImpl.doGetTable should acquire a readLock instead of an 
> exclusive writeLock on the table header row
> ---
>
> Key: PHOENIX-6066
> URL: https://issues.apache.org/jira/browse/PHOENIX-6066
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Palash Chauhan
>Priority: Major
>  Labels: quality-improvement
>
> Throughout MetaDataEndpointImpl, wherever we need to acquire a row lock we 
> call 
> [MetaDataEndpointImpl.acquireLock|https://github.com/apache/phoenix/blob/bba7d59f81f2b91342fa5a7ee213170739573d6a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2377-L2386]
>  which gets an exclusive writeLock on the specified row [by 
> default|https://github.com/apache/phoenix/blob/bba7d59f81f2b91342fa5a7ee213170739573d6a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2378].
> Thus, even operations like doGetTable/getSchema/getFunctions which are not 
> modifying the row will acquire a writeLock on these metadata rows when a 
> readLock should be sufficient (see [doGetTable 
> locking|https://github.com/apache/phoenix/blob/bba7d59f81f2b91342fa5a7ee213170739573d6a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2932]
>  as an example). The problem with this is, even a simple UPSERT/DELETE or 
> SELECT query triggers a doGetTable (if the schema is not cached) and can 
> potentially block other DDLs and more importantly other queries since these 
> queries will wait until they can get a rowLock for the table header row. Even 
> seemingly unrelated operations like a CREATE VIEW AS SELECT * FROM T can 
> block a SELECT/UPSERT/DELETE on table T since the create view code needs to 
> fetch the schema of the parent table.
> Note that this is exacerbated in cases where we do server-server RPCs while 
> holding rowLocks for example 
> ([this|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2459-L2461]
>  and 
> [this|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2479-L2484])
>  which is another issue altogether.
> This Jira is to discuss the possibility of acquiring a readLock in these 
> "read metadata" paths to avoid blocking other "read metadata" requests 
> stemming from concurrent queries. The current behavior is potentially a perf 
> issue for clients that disable update-cache-frequency.
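The read-vs-write semantics proposed above can be sketched with java.util.concurrent primitives. This is a minimal illustration of why a shared read lock helps (concurrent doGetTable-style readers no longer serialize on the header row), not Phoenix's actual MetaDataEndpointImpl code or HBase's row-lock API; the class and method names are hypothetical:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Minimal illustration: many metadata readers can hold the shared lock at
// once, while a DDL writer takes the exclusive lock and blocks everyone.
public class MetadataRowLock {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Read path (e.g. doGetTable/getSchema): shared lock, readers do not block each other.
    public String readMetadata(String cached) {
        lock.readLock().lock();
        try {
            return cached;
        } finally {
            lock.readLock().unlock();
        }
    }

    // Write path (e.g. CREATE/ALTER): exclusive lock, blocks readers and writers.
    public String writeMetadata(String newValue) {
        lock.writeLock().lock();
        try {
            return newValue;
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

With the current write-lock-everywhere behavior, every uncached SELECT/UPSERT/DELETE effectively takes the exclusive path; the sketch's read path is what this Jira proposes for the "read metadata" operations.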



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-7331) Fix incompatibilities with HBASE-28644

2024-06-16 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-7331:


 Summary: Fix incompatibilities with HBASE-28644
 Key: PHOENIX-7331
 URL: https://issues.apache.org/jira/browse/PHOENIX-7331
 Project: Phoenix
  Issue Type: Bug
  Components: core
Reporter: Istvan Toth


These are the errors (the generic type arguments below were stripped by the mail rendering; restored from the fix description that follows):

[ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.11.0:compile (default-compile) on project phoenix-core-client: Compilation failure: Compilation failure:
[ERROR] /home/stoty/workspaces/apache-phoenix/phoenix/phoenix-core-client/src/main/java/org/apache/phoenix/util/PhoenixKeyValueUtil.java:[262,93] incompatible types: java.util.List<Cell> cannot be converted to java.util.List<ExtendedCell>
[ERROR] /home/stoty/workspaces/apache-phoenix/phoenix/phoenix-core-client/src/main/java/org/apache/phoenix/hbase/index/util/IndexManagementUtil.java:[248,69] incompatible types: java.util.List<Cell> cannot be converted to java.util.List<ExtendedCell>

In IndexManagementUtil we can simply change the signature to Cell.

In PhoenixKeyValueUtil, we need to check for ExtendedCell, and clone the cell if it is not one.
I'm pretty sure that there is already a utility method somewhere for this.
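One possible shape for the PhoenixKeyValueUtil side of the fix, sketched with stand-in Cell/ExtendedCell interfaces (these are NOT the real org.apache.hadoop.hbase types, and the existing HBase utility method alluded to above is not shown; every name here is illustrative only):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: walk a List<Cell> and ensure every element is an ExtendedCell,
// cloning the ones that are not. Stand-in types, not HBase's interfaces.
public class EnsureExtendedCells {
    interface Cell { byte[] value(); }
    interface ExtendedCell extends Cell { }

    // A plain Cell wrapped/cloned into an ExtendedCell.
    static final class ClonedCell implements ExtendedCell {
        private final byte[] v;
        ClonedCell(Cell c) { this.v = c.value().clone(); }
        public byte[] value() { return v; }
    }

    // Returns a list whose elements are all ExtendedCell; plain Cells are cloned.
    static List<ExtendedCell> ensureExtended(List<Cell> cells) {
        List<ExtendedCell> out = new ArrayList<>(cells.size());
        for (Cell c : cells) {
            out.add(c instanceof ExtendedCell ? (ExtendedCell) c : new ClonedCell(c));
        }
        return out;
    }
}
```

The instanceof check keeps the common path allocation-free: cells that already implement the extended interface pass through unchanged, and only foreign Cell implementations pay for a copy.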




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7329) Change TTL column type to VARCHAR in syscat

2024-06-14 Thread Stephen Yuan Jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated PHOENIX-7329:

Affects Version/s: 5.3.0

> Change TTL column type to VARCHAR in syscat
> ---
>
> Key: PHOENIX-7329
> URL: https://issues.apache.org/jira/browse/PHOENIX-7329
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.3.0
>Reporter: Tanuj Khurana
>Assignee: Tanuj Khurana
>Priority: Major
>
> PHOENIX-7170 proposes expressing TTL as conditional expressions which will be 
> evaluated on the rows. Changing the data type of TTL column in syscat will 
> allow us to store those expressions or the absolute TTL value in the same 
> column.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-7328) Fix flapping ConcurrentMutationsExtendedIT#testConcurrentUpserts

2024-06-14 Thread Kadir Ozdemir (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kadir Ozdemir resolved PHOENIX-7328.

Fix Version/s: 5.2.1
   5.3.0
   Resolution: Fixed

> Fix flapping ConcurrentMutationsExtendedIT#testConcurrentUpserts
> 
>
> Key: PHOENIX-7328
> URL: https://issues.apache.org/jira/browse/PHOENIX-7328
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Kadir Ozdemir
>Assignee: Kadir Ozdemir
>Priority: Major
> Fix For: 5.2.1, 5.3.0
>
>
> ConcurrentMutationsExtendedIT#testConcurrentUpserts has been failing from 
> time to time. This Jira is to find the root cause and fix it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-2341) Rename in ALTER statement

2024-06-13 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-2341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth resolved PHOENIX-2341.
--
Resolution: Duplicate

> Rename in ALTER statement
> -
>
> Key: PHOENIX-2341
> URL: https://issues.apache.org/jira/browse/PHOENIX-2341
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: alex kamil
>Priority: Minor
>  Labels: newbie
>
> Add RENAME functionality in ALTER statement (e.g. similar to PostgreSQL): 
> ALTER TABLE name 
> RENAME  column TO new_column
> ALTER TABLE name
> RENAME TO new_name
> ALTER TABLE name
> SET SCHEMA new_schema
> Reference: http://www.postgresql.org/docs/9.1/static/sql-altertable.html
> Related: PHOENIX-152, PHOENIX-1598 , PHOENIX-1940



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-7072) Implement json_modify function on the json object as Atomic Upserts

2024-06-12 Thread Ranganath Govardhanagiri (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranganath Govardhanagiri resolved PHOENIX-7072.
---
Resolution: Fixed

> Implement json_modify function on the json object as Atomic Upserts
> ---
>
> Key: PHOENIX-7072
> URL: https://issues.apache.org/jira/browse/PHOENIX-7072
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ranganath Govardhanagiri
>Assignee: Ranganath Govardhanagiri
>Priority: Major
> Fix For: 5.3.0
>
>
> Implement JSON_MODIFY as per the attached design spec.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7072) Implement json_modify function on the json object as Atomic Upserts

2024-06-12 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7072:
--
Fix Version/s: 5.3.0

> Implement json_modify function on the json object as Atomic Upserts
> ---
>
> Key: PHOENIX-7072
> URL: https://issues.apache.org/jira/browse/PHOENIX-7072
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ranganath Govardhanagiri
>Assignee: Ranganath Govardhanagiri
>Priority: Major
> Fix For: 5.3.0
>
>
> Implement JSON_MODIFY as per the attached design spec.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7330) Introducing Binary JSON (BSON) with Complex Document structures in Phoenix

2024-06-12 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7330:
--
Description: 
The purpose of this Jira is to introduce a new data type, Binary JSON (BSON), to 
manage more complex document data structures in Phoenix.

BSON, or Binary JSON, is a binary-encoded serialization of JSON-like documents. 
The BSON data type lets users store, update, and query part or all of a 
BsonDocument in the most performant way, without having to serialize/deserialize 
the whole document to/from its binary format. BSON allows deserializing only 
part of a nested document, so querying or indexing any attribute within the 
nested structure is more efficient because deserialization happens at runtime. 
Any other document representation would require deserializing the whole binary 
into a document before performing the query.

BSONSpec: [https://bsonspec.org/]

JSON and BSON are closely related by design. BSON serves as a binary 
representation of JSON data, tailored with specialized extensions for wider 
application scenarios, and finely tuned for efficient data storage and 
traversal. Similar to JSON, BSON facilitates the embedding of objects and 
arrays.

One particular way in which BSON differs from JSON is in its support for some 
more advanced data types. For instance, JSON does not differentiate between 
integers (round numbers), and floating-point numbers (with decimal precision). 
BSON does distinguish between the two and store them in the corresponding BSON 
data type (e.g. BsonInt32 vs BsonDouble). Many server-side programming 
languages offer advanced numeric data types (standards include integer, regular 
precision floating point number i.e. “float”, double-precision floating point 
i.e. “double”, and boolean values), each with its own optimal usage for 
efficient mathematical operations.

Another key distinction between BSON and JSON is that BSON documents have the 
capability to include Date or Binary objects, which cannot be directly 
represented in pure JSON format. BSON also provides the ability to store and 
retrieve user defined Binary objects. Likewise, by integrating advanced data 
structures like Sets into BSON documents, we can significantly enhance the 
capabilities of Phoenix for storing, retrieving, and updating Binary, Sets, 
Lists, and Documents as nested or complex data types.

Moreover, JSON format is human as well as machine readable, whereas BSON format 
is only machine readable. Hence, as part of introducing BSON data type, we also 
need to provide a user interface such that users can provide human readable 
JSON as input for BSON datatype.

This Jira also introduces access and update functions for BSON documents.

BSON_CONDITION_EXPRESSION can evaluate condition expression on the document 
fields, similar to how WHERE clause evaluates condition expression on various 
columns of the given row(s) for the relational tables.

BSON_UPDATE_EXPRESSION can perform one or more document field updates similar 
to how UPSERT statements can perform update to one or more columns of the given 
row(s) for the relational tables.

 

Phoenix can introduce more complex data structures like sets of scalar types, 
in addition to the nested documents and nested arrays provided by BSON.

Overall, by combining various functionalities available in Phoenix like 
secondary indexes, conditional updates, high throughput read/write with BSON, 
we can evolve Phoenix into highly scalable Document Database.

  was:
The purpose of this Jira is to introduce new data type in Phoenix: Binary JSON 
(BSON) to manage more complex document data structures in Phoenix.

BSON or Binary JSON is a Binary-Encoded serialization of JSON-like documents. 
BSON data type is specifically used for users to store, update and query part 
or whole of the BsonDocument in the most performant way without having to 
serialize/deserialize the document to/from binary format. Bson allows 
deserializing only part of the nested documents such that querying or indexing 
any attributes within the nested structure becomes more efficient and 
performant as the deserialization happens at runtime. Any other document 
structure would require deserializing the binary into the document, and then 
perform the query.

BSONSpec: [https://bsonspec.org/]

JSON and BSON are closely related by design. BSON serves as a binary 
representation of JSON data, tailored with specialized extensions for wider 
application scenarios, and finely tuned for efficient data storage and 
traversal. Similar to JSON, BSON facilitates the embedding of objects and 
arrays.

 

One particular way in which BSON differs from JSON is in its support for some 
more advanced data types. For instance, JSON does not differentiate between 
integers (round numbers), and floating-point numbers (with decimal precision). 
BSON does

[jira] [Assigned] (PHOENIX-7330) Introducing Binary JSON (BSON) with Complex Document structures in Phoenix

2024-06-12 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned PHOENIX-7330:
-

Assignee: Viraj Jasani

> Introducing Binary JSON (BSON) with Complex Document structures in Phoenix
> --
>
> Key: PHOENIX-7330
> URL: https://issues.apache.org/jira/browse/PHOENIX-7330
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>
> The purpose of this Jira is to introduce new data type in Phoenix: Binary 
> JSON (BSON) to manage more complex document data structures in Phoenix.
> BSON or Binary JSON is a Binary-Encoded serialization of JSON-like documents. 
> BSON data type is specifically used for users to store, update and query part 
> or whole of the BsonDocument in the most performant way without having to 
> serialize/deserialize the document to/from binary format. Bson allows 
> deserializing only part of the nested documents such that querying or 
> indexing any attributes within the nested structure becomes more efficient 
> and performant as the deserialization happens at runtime. Any other document 
> structure would require deserializing the binary into the document, and then 
> perform the query.
> BSONSpec: [https://bsonspec.org/]
> JSON and BSON are closely related by design. BSON serves as a binary 
> representation of JSON data, tailored with specialized extensions for wider 
> application scenarios, and finely tuned for efficient data storage and 
> traversal. Similar to JSON, BSON facilitates the embedding of objects and 
> arrays.
>  
> One particular way in which BSON differs from JSON is in its support for some 
> more advanced data types. For instance, JSON does not differentiate between 
> integers (round numbers), and floating-point numbers (with decimal 
> precision). BSON does distinguish between the two and store them in the 
> corresponding BSON data type (e.g. BsonInt32 vs BsonDouble). Many server-side 
> programming languages offer advanced numeric data types (standards include 
> integer, regular precision floating point number i.e. “float”, 
> double-precision floating point i.e. “double”, and boolean values), each with 
> its own optimal usage for efficient mathematical operations.
> Another key distinction between BSON and JSON is that BSON documents have the 
> capability to include Date or Binary objects, which cannot be directly 
> represented in pure JSON format. BSON also provides the ability to store and 
> retrieve user defined Binary objects. Likewise, by integrating advanced data 
> structures like Sets into BSON documents, we can significantly enhance the 
> capabilities of Phoenix for storing, retrieving, and updating Binary, Sets, 
> Lists, and Documents as nested or complex data types.
> Moreover, JSON format is human as well as machine readable, whereas BSON 
> format is only machine readable. Hence, as part of introducing BSON data 
> type, we also need to provide a user interface such that users can provide 
> human readable JSON as input for BSON datatype.
> This Jira also introduces access and update functions for BSON documents.
> BSON_CONDITION_EXPRESSION can evaluate condition expression on the document 
> fields, similar to how WHERE clause evaluates condition expression on various 
> columns of the given row(s) for the relational tables.
> BSON_UPDATE_EXPRESSION can perform one or more document field updates similar 
> to how UPSERT statements can perform update to one or more columns of the 
> given row(s) for the relational tables.
> Overall, by combining various functionalities available in Phoenix like 
> secondary indexes, conditional updates, high throughput read/write with BSON, 
> we can evolve Phoenix into highly scalable Document Database.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-7330) Introducing Binary JSON (BSON) with Complex Document structures in Phoenix

2024-06-12 Thread Viraj Jasani (Jira)
Viraj Jasani created PHOENIX-7330:
-

 Summary: Introducing Binary JSON (BSON) with Complex Document 
structures in Phoenix
 Key: PHOENIX-7330
 URL: https://issues.apache.org/jira/browse/PHOENIX-7330
 Project: Phoenix
  Issue Type: New Feature
Reporter: Viraj Jasani


The purpose of this Jira is to introduce a new data type, Binary JSON (BSON), to 
manage more complex document data structures in Phoenix.

BSON or Binary JSON is a Binary-Encoded serialization of JSON-like documents. 
BSON data type is specifically used for users to store, update and query part 
or whole of the BsonDocument in the most performant way without having to 
serialize/deserialize the document to/from binary format. Bson allows 
deserializing only part of the nested documents such that querying or indexing 
any attributes within the nested structure becomes more efficient and 
performant as the deserialization happens at runtime. Any other document 
structure would require deserializing the binary into the document, and then 
perform the query.

BSONSpec: [https://bsonspec.org/]

JSON and BSON are closely related by design. BSON serves as a binary 
representation of JSON data, tailored with specialized extensions for wider 
application scenarios, and finely tuned for efficient data storage and 
traversal. Similar to JSON, BSON facilitates the embedding of objects and 
arrays.

 

One particular way in which BSON differs from JSON is in its support for some 
more advanced data types. For instance, JSON does not differentiate between 
integers (round numbers), and floating-point numbers (with decimal precision). 
BSON does distinguish between the two and store them in the corresponding BSON 
data type (e.g. BsonInt32 vs BsonDouble). Many server-side programming 
languages offer advanced numeric data types (standards include integer, regular 
precision floating point number i.e. “float”, double-precision floating point 
i.e. “double”, and boolean values), each with its own optimal usage for 
efficient mathematical operations.

Another key distinction between BSON and JSON is that BSON documents have the 
capability to include Date or Binary objects, which cannot be directly 
represented in pure JSON format. BSON also provides the ability to store and 
retrieve user defined Binary objects. Likewise, by integrating advanced data 
structures like Sets into BSON documents, we can significantly enhance the 
capabilities of Phoenix for storing, retrieving, and updating Binary, Sets, 
Lists, and Documents as nested or complex data types.

Moreover, JSON format is human as well as machine readable, whereas BSON format 
is only machine readable. Hence, as part of introducing BSON data type, we also 
need to provide a user interface such that users can provide human readable 
JSON as input for BSON datatype.

This Jira also introduces access and update functions for BSON documents.

BSON_CONDITION_EXPRESSION can evaluate condition expression on the document 
fields, similar to how WHERE clause evaluates condition expression on various 
columns of the given row(s) for the relational tables.

BSON_UPDATE_EXPRESSION can perform one or more document field updates similar 
to how UPSERT statements can perform update to one or more columns of the given 
row(s) for the relational tables.

Overall, by combining various functionalities available in Phoenix like 
secondary indexes, conditional updates, high throughput read/write with BSON, 
we can evolve Phoenix into highly scalable Document Database.
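The int32-vs-double distinction described above is visible directly in BSON's wire format. The sketch below hand-encodes the two element kinds per the layout at bsonspec.org (type tag, cstring name, little-endian value); it is an illustration of the spec only, not Phoenix code or a BSON library API:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;

// BSON keeps int32 (type tag 0x10) and double (type tag 0x01) as distinct
// element types, a distinction plain JSON text cannot express.
public class BsonTypeTags {
    static byte[] int32Element(String name, int value) {
        return element((byte) 0x10, name, ByteBuffer.allocate(4)
                .order(ByteOrder.LITTLE_ENDIAN).putInt(value).array());
    }

    static byte[] doubleElement(String name, double value) {
        return element((byte) 0x01, name, ByteBuffer.allocate(8)
                .order(ByteOrder.LITTLE_ENDIAN).putDouble(value).array());
    }

    // element ::= type-tag, cstring name, little-endian value bytes
    private static byte[] element(byte typeTag, String name, byte[] payload) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(typeTag);
        byte[] nameBytes = name.getBytes(StandardCharsets.US_ASCII);
        out.write(nameBytes, 0, nameBytes.length);
        out.write(0); // cstring NUL terminator
        out.write(payload, 0, payload.length);
        return out.toByteArray();
    }
}
```

Because the type tag travels with the value, a reader can skip straight to one attribute of a nested document without parsing the rest, which is the property the description above relies on for efficient partial deserialization.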



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-7329) Change TTL column type to VARCHAR in syscat

2024-06-12 Thread Tanuj Khurana (Jira)
Tanuj Khurana created PHOENIX-7329:
--

 Summary: Change TTL column type to VARCHAR in syscat
 Key: PHOENIX-7329
 URL: https://issues.apache.org/jira/browse/PHOENIX-7329
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Tanuj Khurana
Assignee: Tanuj Khurana


PHOENIX-7170 proposes expressing TTL as conditional expressions which will be 
evaluated on the rows. Changing the data type of TTL column in syscat will 
allow us to store those expressions or the absolute TTL value in the same 
column.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-7328) Fix flapping ConcurrentMutationsExtendedIT#testConcurrentUpserts

2024-06-10 Thread Kadir Ozdemir (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kadir Ozdemir reassigned PHOENIX-7328:
--

Assignee: Kadir Ozdemir

> Fix flapping ConcurrentMutationsExtendedIT#testConcurrentUpserts
> 
>
> Key: PHOENIX-7328
> URL: https://issues.apache.org/jira/browse/PHOENIX-7328
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Kadir Ozdemir
>Assignee: Kadir Ozdemir
>Priority: Major
>
> ConcurrentMutationsExtendedIT#testConcurrentUpserts has been failing from 
> time to time. This Jira is to find the root cause and fix it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-7328) Fix flapping ConcurrentMutationsExtendedIT#testConcurrentUpserts

2024-06-10 Thread Kadir Ozdemir (Jira)
Kadir Ozdemir created PHOENIX-7328:
--

 Summary: Fix flapping 
ConcurrentMutationsExtendedIT#testConcurrentUpserts
 Key: PHOENIX-7328
 URL: https://issues.apache.org/jira/browse/PHOENIX-7328
 Project: Phoenix
  Issue Type: Bug
Reporter: Kadir Ozdemir


ConcurrentMutationsExtendedIT#testConcurrentUpserts has been failing from time 
to time. This Jira is to find the root cause and fix it.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-7327) Bump phoenixdb version to 1.2.3.dev0 after release

2024-06-10 Thread Jira


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richárd Antal resolved PHOENIX-7327.

Resolution: Fixed

> Bump phoenixdb version to 1.2.3.dev0 after release
> --
>
> Key: PHOENIX-7327
> URL: https://issues.apache.org/jira/browse/PHOENIX-7327
> Project: Phoenix
>  Issue Type: Task
>Reporter: Richárd Antal
>Assignee: Richárd Antal
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-7326) Simplify LockManager and make it more efficient

2024-06-09 Thread Kadir Ozdemir (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kadir Ozdemir updated PHOENIX-7326:
---
Fix Version/s: 5.2.1
   5.3.0

> Simplify LockManager and make it more efficient
> ---
>
> Key: PHOENIX-7326
> URL: https://issues.apache.org/jira/browse/PHOENIX-7326
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Kadir Ozdemir
>Assignee: Kadir Ozdemir
>Priority: Major
> Fix For: 5.2.1, 5.3.0
>
>
> Phoenix needs to manage its own row locking for secondary indexes. 
> LockManager provides this locking. The implementation of row locking was 
> originally copied for the most part from HRegion.getRowLockInternal 
> implementation. However, the current implementation is complicated. The 
> implementation can be simplified and its efficiency can be improved. Also the 
> correctness of LockManager will be easier to ensure with the simplified 
> implementation.
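As a rough illustration of the simplified shape such a row-lock manager can take (this is a hypothetical sketch, not Phoenix's actual LockManager or the HRegion code it was copied from), a per-row ReentrantLock can be created on demand in a ConcurrentHashMap:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Minimal per-row lock manager sketch: one ReentrantLock per row key,
// created lazily and atomically via computeIfAbsent.
public class SimpleRowLockManager {
    private final ConcurrentHashMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    // Acquire the lock for a row, waiting up to timeoutMs; true on success.
    public boolean lockRow(String rowKey, long timeoutMs) {
        ReentrantLock lock = locks.computeIfAbsent(rowKey, k -> new ReentrantLock());
        try {
            return lock.tryLock(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public void unlockRow(String rowKey) {
        ReentrantLock lock = locks.get(rowKey);
        if (lock != null && lock.isHeldByCurrentThread()) {
            lock.unlock();
        }
    }
}
```

Note this sketch never evicts map entries; reclaiming unused locks without racing a concurrent acquirer is exactly where the complexity the Jira wants to reduce tends to creep back in.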



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-7326) Simplify LockManager and make it more efficient

2024-06-09 Thread Kadir Ozdemir (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kadir Ozdemir resolved PHOENIX-7326.

Resolution: Fixed

> Simplify LockManager and make it more efficient
> ---
>
> Key: PHOENIX-7326
> URL: https://issues.apache.org/jira/browse/PHOENIX-7326
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Kadir Ozdemir
>Assignee: Kadir Ozdemir
>Priority: Major
>
> Phoenix needs to manage its own row locking for secondary indexes. 
> LockManager provides this locking. The implementation of row locking was 
> originally copied for the most part from HRegion.getRowLockInternal 
> implementation. However, the current implementation is complicated. The 
> implementation can be simplified and its efficiency can be improved. Also the 
> correctness of LockManager will be easier to ensure with the simplified 
> implementation.





[jira] [Resolved] (PHOENIX-7314) Enable CompactionScanner for flushes and minor compaction

2024-06-07 Thread Kadir Ozdemir (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kadir Ozdemir resolved PHOENIX-7314.

Fix Version/s: 5.2.1
   5.3.0
   Resolution: Fixed

> Enable CompactionScanner for flushes and minor compaction
> -
>
> Key: PHOENIX-7314
> URL: https://issues.apache.org/jira/browse/PHOENIX-7314
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0
>Reporter: Kadir Ozdemir
>Assignee: Kadir Ozdemir
>Priority: Major
> Fix For: 5.2.1, 5.3.0
>
>
> Phoenix TTL is currently used for major compaction only. When max lookback is 
> enabled on a table, PhoenixTTL leads to retaining all cell versions until the 
> next major compaction. This improvement is for enabling Phoenix TTL, more 
> specifically CompactionScanner, for flushes and minor compaction to remove 
> the live or deleted cell versions beyond the max lookback window during 
> flushes and minor compactions.
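The retention rule can be sketched roughly as follows (a simplified model only; the `MAX_LOOKBACK_MS` value and the `(column, timestamp, value)` cell layout are assumptions, and Phoenix's actual CompactionScanner handles many more cases):

```python
import time

MAX_LOOKBACK_MS = 60_000  # hypothetical max lookback window

def prune_versions(cells, now_ms=None):
    """Sketch of the retention idea: every version inside the max lookback
    window is kept so point-in-time queries still work; outside the window
    only the newest version of each column survives a flush or minor
    compaction. Cells are assumed newest-first per column."""
    now_ms = now_ms or int(time.time() * 1000)
    boundary = now_ms - MAX_LOOKBACK_MS
    kept, newest_outside = [], {}
    for column, ts, value in cells:
        if ts >= boundary:
            kept.append((column, ts, value))       # inside lookback: keep all
        elif column not in newest_outside:
            newest_outside[column] = (column, ts, value)  # keep newest only
    return kept + list(newest_outside.values())
```

The point of running this during flushes and minor compactions, rather than only at major compaction, is that older versions are dropped incrementally instead of accumulating until the next major compaction.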





[jira] [Resolved] (PHOENIX-7312) Release PhoenixDB 1.2.2

2024-06-07 Thread Jira


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richárd Antal resolved PHOENIX-7312.

Resolution: Fixed

phoenixdb 1.2.2 has been released; I sent out the announcement too.

> Release PhoenixDB 1.2.2
> ---
>
> Key: PHOENIX-7312
> URL: https://issues.apache.org/jira/browse/PHOENIX-7312
> Project: Phoenix
>  Issue Type: Task
>  Components: python, queryserver
>Reporter: Istvan Toth
>Assignee: Richárd Antal
>Priority: Major
>
> The last release was in 2022, and there are several important unreleased 
> fixes.





[jira] [Created] (PHOENIX-7327) Bump phoenixdb version to 1.2.3.dev0 after release

2024-06-07 Thread Jira
Richárd Antal created PHOENIX-7327:
--

 Summary: Bump phoenixdb version to 1.2.3.dev0 after release
 Key: PHOENIX-7327
 URL: https://issues.apache.org/jira/browse/PHOENIX-7327
 Project: Phoenix
  Issue Type: Task
Reporter: Richárd Antal
Assignee: Richárd Antal








[jira] [Resolved] (PHOENIX-7192) IDE shows errors on JSON comment

2024-06-06 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved PHOENIX-7192.
---
Resolution: Fixed

> IDE shows errors on JSON comment
> 
>
> Key: PHOENIX-7192
> URL: https://issues.apache.org/jira/browse/PHOENIX-7192
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Reporter: Istvan Toth
>Assignee: Ranganath Govardhanagiri
>Priority: Minor
> Fix For: 5.3.0
>
>
> We have a few JSON files for tests, which include the ASF header.
> JSON does not allow comments, and my Eclipse sometimes flags this as an error.
> Remove the ASF header.





[jira] [Updated] (PHOENIX-7192) IDE shows errors on JSON comment

2024-06-06 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-7192:
--
Fix Version/s: 5.3.0

> IDE shows errors on JSON comment
> 
>
> Key: PHOENIX-7192
> URL: https://issues.apache.org/jira/browse/PHOENIX-7192
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Reporter: Istvan Toth
>Assignee: Ranganath Govardhanagiri
>Priority: Minor
> Fix For: 5.3.0
>
>
> We have a few JSON files for tests, which include the ASF header.
> JSON does not allow comments, and my Eclipse sometimes flags this as an error.
> Remove the ASF header.





[jira] [Assigned] (PHOENIX-7326) Simplify LockManager and make it more efficient

2024-06-06 Thread Kadir Ozdemir (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kadir Ozdemir reassigned PHOENIX-7326:
--

Assignee: Kadir Ozdemir

> Simplify LockManager and make it more efficient
> ---
>
> Key: PHOENIX-7326
> URL: https://issues.apache.org/jira/browse/PHOENIX-7326
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Kadir Ozdemir
>Assignee: Kadir Ozdemir
>Priority: Major
>
> Phoenix needs to manage its own row locking for secondary indexes. 
> LockManager provides this locking. The implementation of row locking was 
> originally copied for the most part from HRegion.getRowLockInternal 
> implementation. However, the current implementation is complicated. The 
> implementation can be simplified and its efficiency can be improved. Also the 
> correctness of LockManager will be easier to ensure with the simplified 
> implementation.





[jira] [Created] (PHOENIX-7326) Simplify LockManager and make it more efficient

2024-06-06 Thread Kadir Ozdemir (Jira)
Kadir Ozdemir created PHOENIX-7326:
--

 Summary: Simplify LockManager and make it more efficient
 Key: PHOENIX-7326
 URL: https://issues.apache.org/jira/browse/PHOENIX-7326
 Project: Phoenix
  Issue Type: Improvement
Reporter: Kadir Ozdemir


Phoenix needs to manage its own row locking for secondary indexes. LockManager 
provides this locking. The implementation of row locking was originally copied 
for the most part from HRegion.getRowLockInternal implementation. However, the 
current implementation is complicated. The implementation can be simplified and 
its efficiency can be improved. Also the correctness of LockManager will be 
easier to ensure with the simplified implementation.
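One common simplification is the reference-counted lock-table pattern: a single map guarded by one mutex, with per-row locks created on demand and removed when the last holder releases them. The sketch below is a minimal illustration (`RowLockManager` and its methods are hypothetical, not Phoenix's actual LockManager API):

```python
import threading

class RowLockManager:
    """Minimal reference-counted per-row lock table. Entries are created
    on demand and removed once no caller holds or waits on them, so the
    table cannot leak locks for rows that are no longer contended."""

    def __init__(self):
        self._guard = threading.Lock()   # protects the lock table itself
        self._locks = {}                 # row key -> [lock, refcount]

    def acquire(self, row):
        with self._guard:
            entry = self._locks.setdefault(row, [threading.Lock(), 0])
            entry[1] += 1                # register interest before blocking
        entry[0].acquire()               # block outside the table guard

    def release(self, row):
        with self._guard:
            entry = self._locks[row]
            entry[0].release()
            entry[1] -= 1
            if entry[1] == 0:            # last holder/waiter: drop the entry
                del self._locks[row]
```

Because a waiter increments the refcount under `_guard` before blocking, `release` can safely delete the entry only when the count reaches zero, which is the main correctness property a simplified implementation needs to preserve.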





[jira] [Created] (PHOENIX-7325) Connectors does not create source jars

2024-06-05 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-7325:


 Summary: Connectors does not create source jars
 Key: PHOENIX-7325
 URL: https://issues.apache.org/jira/browse/PHOENIX-7325
 Project: Phoenix
  Issue Type: Improvement
  Components: connectors
Reporter: Istvan Toth


When connectors is built, source jars are not built for at least some of the 
packages.
Make sure that all packages which have Java code generate a source jar.





[jira] [Assigned] (PHOENIX-6960) Scan range is wrong when query desc columns

2024-06-04 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned PHOENIX-6960:
-

Assignee: Viraj Jasani  (was: Jing Yu)

> Scan range is wrong when query desc columns
> ---
>
> Key: PHOENIX-6960
> URL: https://issues.apache.org/jira/browse/PHOENIX-6960
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.3
>Reporter: fanartoria
>Assignee: Viraj Jasani
>Priority: Critical
> Fix For: 5.2.0, 5.1.4
>
>
> Ways to reproduce
> {code}
> 0: jdbc:phoenix:> create table sts(id integer primary key, name varchar, type 
> integer, status integer);
> No rows affected (1.259 seconds)
> 0: jdbc:phoenix:> create index sts_name_desc on sts(status, type desc, name 
> desc);
> No rows affected (6.376 seconds)
> 0: jdbc:phoenix:> create index sts_name_asc on sts(type desc, name) include 
> (status);
> No rows affected (6.377 seconds)
> 0: jdbc:phoenix:> upsert into sts values(1, 'test10.txt', 1, 1);
> 1 row affected (0.026 seconds)
> 0: jdbc:phoenix:>
> 0: jdbc:phoenix:>
> 0: jdbc:phoenix:> explain select * from sts where type = 1 and name like 
> 'test10%';
> +--++---+-+
> | PLAN
>  | EST_BYTES_READ | EST_ROWS_READ | EST_INFO_TS |
> +--++---+-+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER STS_NAME_ASC 
> [~1,'test10'] - [~1,'test11'] | null   | null  | null|
> +--++---+-+
> 1 row selected (0.023 seconds)
> 0: jdbc:phoenix:> select * from sts where type = 1 and name like 'test10%';
> +++--++
> | ID |NAME| TYPE | STATUS |
> +++--++
> | 1  | test10.txt | 1| 1  |
> +++--++
> 1 row selected (0.033 seconds)
> 0: jdbc:phoenix:> explain select * from sts where status = 1 and type = 1 and 
> name like 'test10%';
> +-++---+-+
> |PLAN 
> | EST_BYTES_READ | EST_ROWS_READ | 
> EST_INFO_TS |
> +-++---+-+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER STS_NAME_DESC 
> [1,~1,~'test10'] - [1,~1,~'test1/'] | null   | null  | null   
>  |
> | SERVER FILTER BY FIRST KEY ONLY AND "NAME" LIKE 'test10%'   
> | null   | null  | null   
>  |
> +-++---+-+
> 2 rows selected (0.022 seconds)
> 0: jdbc:phoenix:> select * from sts where status = 1 and type = 1 and name 
> like 'test10%';
> ++--+--++
> | ID | NAME | TYPE | STATUS |
> ++--+--++
> ++--+--++
> No rows selected (0.04 seconds)
> {code}
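The scan bounds in the plans above come from combining a LIKE-prefix range with descending key encoding. A simplified model of that interaction (plain byte inversion stands in for Phoenix's actual desc encoding, which also deals with separators and padding, so this is illustrative only):

```python
def desc_encode(b: bytes) -> bytes:
    """Descending sort order is typically implemented by inverting each
    byte, which reverses lexicographic order (a simplified model of the
    ~ notation in the explain plans, not Phoenix's exact byte format)."""
    return bytes(0xFF - x for x in b)

def prefix_range(prefix: bytes):
    """Ascending scan range for LIKE 'prefix%': [prefix, prefix+1)."""
    upper = bytearray(prefix)
    upper[-1] += 1            # b'test10' -> b'test11' (ignoring 0xFF carry)
    return prefix, bytes(upper)

def desc_prefix_range(prefix: bytes):
    """On a DESC column the encoded range flips: the encoded upper bound
    of the ascending range becomes the lower bound after inversion."""
    lo, hi = prefix_range(prefix)
    return desc_encode(hi), desc_encode(lo)
```

Getting either endpoint of the flipped range wrong (as the second explain plan suggests with its `~'test1/'` bound) produces a scan range that silently excludes matching rows, which is why the second query returns nothing.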





[jira] [Updated] (PHOENIX-7323) fix twine check error and add steps to Releasing guide

2024-06-04 Thread Jira


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richárd Antal updated PHOENIX-7323:
---
Summary: fix twine check error and add steps to Releasing guide  (was: 
fix twine check error)

> fix twine check error and add steps to Releasing guide
> -
>
> Key: PHOENIX-7323
> URL: https://issues.apache.org/jira/browse/PHOENIX-7323
> Project: Phoenix
>  Issue Type: Bug
>  Components: python, queryserver
>Reporter: Richárd Antal
>Assignee: Richárd Antal
>Priority: Major
> Fix For: python-phoenixdb-1.2.2
>
>
> twine check fails with 
> {code:java}
> ERROR    `long_description` has syntax errors in markup and would not be 
> rendered on PyPI.
>          line 36: Error: Unexpected indentation.{code}
> Also add steps to RELEASING.rst to upload the package to TestPyPI so these 
> issues can be checked while preparing the RC:
> [https://kynan.github.io/blog/2020/05/23/how-to-upload-your-package-to-the-python-package-index-pypi-test-server]
>  





[jira] [Resolved] (PHOENIX-7323) fix twine check error

2024-06-04 Thread Jira


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richárd Antal resolved PHOENIX-7323.

Fix Version/s: python-phoenixdb-1.2.2
 Assignee: Richárd Antal
   Resolution: Fixed

> fix twine check error
> -
>
> Key: PHOENIX-7323
> URL: https://issues.apache.org/jira/browse/PHOENIX-7323
> Project: Phoenix
>  Issue Type: Bug
>  Components: python, queryserver
>Reporter: Richárd Antal
>Assignee: Richárd Antal
>Priority: Major
> Fix For: python-phoenixdb-1.2.2
>
>
> twine check fails with 
> {code:java}
> ERROR    `long_description` has syntax errors in markup and would not be 
> rendered on PyPI.
>          line 36: Error: Unexpected indentation.{code}
> Also add steps to RELEASING.rst to upload the package to TestPyPI so these 
> issues can be checked while preparing the RC:
> [https://kynan.github.io/blog/2020/05/23/how-to-upload-your-package-to-the-python-package-index-pypi-test-server]
>  





[jira] [Created] (PHOENIX-7324) Run twine from tox for phoenixdb

2024-06-03 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-7324:


 Summary: Run twine from tox for phoenixdb
 Key: PHOENIX-7324
 URL: https://issues.apache.org/jira/browse/PHOENIX-7324
 Project: Phoenix
  Issue Type: Improvement
  Components: python, queryserver
Reporter: Istvan Toth


We should add twine to the automated tests in tox to catch packaging issues.





[jira] [Created] (PHOENIX-7323) fix twine check error

2024-06-03 Thread Jira
Richárd Antal created PHOENIX-7323:
--

 Summary: fix twine check error
 Key: PHOENIX-7323
 URL: https://issues.apache.org/jira/browse/PHOENIX-7323
 Project: Phoenix
  Issue Type: Bug
  Components: python, queryserver
Reporter: Richárd Antal


twine check fails with 
{code:java}
ERROR    `long_description` has syntax errors in markup and would not be 
rendered on PyPI.
         line 36: Error: Unexpected indentation.{code}

Also add steps to RELEASING.rst to upload the package to TestPyPI so these 
issues can be checked while preparing the RC:
[https://kynan.github.io/blog/2020/05/23/how-to-upload-your-package-to-the-python-package-index-pypi-test-server]
 





[jira] [Created] (PHOENIX-7322) Make sure that filterAllRemaining() is not called from filterRowKey()

2024-06-02 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-7322:


 Summary: Make sure that filterAllRemaining() is not called from 
filterRowKey()
 Key: PHOENIX-7322
 URL: https://issues.apache.org/jira/browse/PHOENIX-7322
 Project: Phoenix
  Issue Type: Improvement
Reporter: Istvan Toth


Many of the current filters call filterAllRemaining() from filterRowKey().

This should not be necessary: in the normal (RS) code path, filterRowKey() 
is only called AFTER filterAllRemaining() has returned false.

Well-written filters do cache their filterAllRemaining() status, so this is 
not very expensive, but we could still save a few cycles for each cell.

* Change the filter API definition to state this explicitly.
* Fix the code where this does not hold. At first glance, 
org.apache.hadoop.hbase.mapreduce.Import seems to be one place that does not 
conform to this behaviour.
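The contract described above can be modeled with a tiny scanner loop (a Python sketch; `PrefixLimitFilter` and the method names are illustrative, not the real org.apache.hadoop.hbase.filter API):

```python
class PrefixLimitFilter:
    """Toy model of the filter contract: the scanner queries
    filter_all_remaining() before each row, and only calls
    filter_row_key() after it has returned False. The filter can
    therefore rely on that ordering instead of re-checking its own
    done flag inside filter_row_key()."""

    def __init__(self, limit):
        self.limit = limit
        self.seen = 0
        self.done = False

    def filter_all_remaining(self):
        return self.done           # cheap cached check, queried first

    def filter_row_key(self, row_key):
        # No redundant self.done check here: the scanner guarantees
        # filter_all_remaining() already returned False for this row.
        self.seen += 1
        if self.seen >= self.limit:
            self.done = True
        return False               # False = keep this row
```

A scanner that honors the contract stops calling `filter_row_key()` as soon as `filter_all_remaining()` turns true, which is the ordering the Jira proposes to make explicit in the API definition.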





[jira] [Updated] (PHOENIX-7321) Rename PhoenixIndexBuilderHelper to something different as it seems to handle only onDupKey logic

2024-05-30 Thread Ranganath Govardhanagiri (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranganath Govardhanagiri updated PHOENIX-7321:
--
Summary: Rename PhoenixIndexBuilderHelper to something different as it 
seems to handle only onDupKey logic  (was: Rename PhoenixIndexBuilderHelper to 
something different as it seems handle only onDupKey only)

> Rename PhoenixIndexBuilderHelper to something different as it seems to handle 
> only onDupKey logic
> -
>
> Key: PHOENIX-7321
> URL: https://issues.apache.org/jira/browse/PHOENIX-7321
> Project: Phoenix
>  Issue Type: Task
>  Components: core
>Reporter: Ranganath Govardhanagiri
>Assignee: Ranganath Govardhanagiri
>Priority: Minor
> Fix For: 5.3.0
>
>
> While looking at 
> [PhoenixIndexBuilderHelper|https://github.com/apache/phoenix/blob/master/phoenix-core-client/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilderHelper.java]
> I see that the class only deals with ON DUPLICATE KEY related functionality. 
> Maybe we need to rename it to OnDuplicateKeyHelper or something similar, as 
> it currently doesn't provide any index-building help.





[jira] [Created] (PHOENIX-7321) Rename PhoenixIndexBuilderHelper to something different as it seems handle only onDupKey only

2024-05-30 Thread Ranganath Govardhanagiri (Jira)
Ranganath Govardhanagiri created PHOENIX-7321:
-

 Summary: Rename PhoenixIndexBuilderHelper to something different 
as it seems handle only onDupKey only
 Key: PHOENIX-7321
 URL: https://issues.apache.org/jira/browse/PHOENIX-7321
 Project: Phoenix
  Issue Type: Task
  Components: core
Reporter: Ranganath Govardhanagiri
Assignee: Ranganath Govardhanagiri
 Fix For: 5.3.0


While looking at 
[PhoenixIndexBuilderHelper|https://github.com/apache/phoenix/blob/master/phoenix-core-client/src/main/java/org/apache/phoenix/index/PhoenixIndexBuilderHelper.java]
I see that the class only deals with ON DUPLICATE KEY related functionality. 
Maybe we need to rename it to OnDuplicateKeyHelper or something similar, as it 
currently doesn't provide any index-building help.





[jira] [Assigned] (PHOENIX-7320) Upgrade HBase 2.4 to 2.4.18

2024-05-30 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth reassigned PHOENIX-7320:


Assignee: Istvan Toth

> Upgrade HBase 2.4 to 2.4.18
> ---
>
> Key: PHOENIX-7320
> URL: https://issues.apache.org/jira/browse/PHOENIX-7320
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> 2.4.18 has just been released.
> Update Phoenix to build with it.





[jira] [Updated] (PHOENIX-6066) MetaDataEndpointImpl.doGetTable should acquire a readLock instead of an exclusive writeLock on the table header row

2024-05-30 Thread Stephen Yuan Jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Yuan Jiang updated PHOENIX-6066:

Fix Version/s: (was: 4.17.0)
   (was: 4.16.2)

> MetaDataEndpointImpl.doGetTable should acquire a readLock instead of an 
> exclusive writeLock on the table header row
> ---
>
> Key: PHOENIX-6066
> URL: https://issues.apache.org/jira/browse/PHOENIX-6066
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Lokesh Khurana
>Priority: Major
>  Labels: quality-improvement
>
> Throughout MetaDataEndpointImpl, wherever we need to acquire a row lock we 
> call 
> [MetaDataEndpointImpl.acquireLock|https://github.com/apache/phoenix/blob/bba7d59f81f2b91342fa5a7ee213170739573d6a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2377-L2386]
>  which gets an exclusive writeLock on the specified row [by 
> default|https://github.com/apache/phoenix/blob/bba7d59f81f2b91342fa5a7ee213170739573d6a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2378].
> Thus, even operations like doGetTable/getSchema/getFunctions which are not 
> modifying the row will acquire a writeLock on these metadata rows when a 
> readLock should be sufficient (see [doGetTable 
> locking|https://github.com/apache/phoenix/blob/bba7d59f81f2b91342fa5a7ee213170739573d6a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2932]
>  as an example). The problem with this is, even a simple UPSERT/DELETE or 
> SELECT query triggers a doGetTable (if the schema is not cached) and can 
> potentially block other DDLs and more importantly other queries since these 
> queries will wait until they can get a rowLock for the table header row. Even 
> seemingly unrelated operations like a CREATE VIEW AS SELECT * FROM T can 
> block a SELECT/UPSERT/DELETE on table T since the create view code needs to 
> fetch the schema of the parent table.
> Note that this is exacerbated in cases where we do server-server RPCs while 
> holding rowLocks for example 
> ([this|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2459-L2461]
>  and 
> [this|https://github.com/apache/phoenix/blob/1d844950bb4ec8221873ecd2b094c20f427cd984/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2479-L2484])
>  which is another issue altogether.
> This Jira is to discuss the possibility of acquiring a readLock in these 
> "read metadata" paths to avoid blocking other "read metadata" requests 
> stemming from concurrent queries. The current behavior is potentially a perf 
> issue for clients that disable update-cache-frequency.
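The read-lock proposal amounts to using a standard readers-writer lock on the metadata row: concurrent doGetTable calls share a read lock, while DDL keeps the exclusive write lock. The sketch below illustrates the idea (a toy Python model, not HBase's or Phoenix's locking code):

```python
import threading

class ReadWriteLock:
    """Minimal readers-writer lock: many concurrent readers, one
    exclusive writer. Readers block only while a writer holds the
    lock; a writer waits until all readers have drained."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def acquire_read(self):
        with self._cond:
            while self._writer:
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()   # wake a waiting writer

    def acquire_write(self):
        with self._cond:
            while self._writer or self._readers:
                self._cond.wait()
            self._writer = True

    def release_write(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()
```

With this scheme, two concurrent SELECTs that both miss the schema cache no longer serialize on the table header row; only DDL (or anything that actually mutates the row) takes the exclusive path.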





[jira] [Resolved] (PHOENIX-7319) Leverage Bloom Filters to improve performance on write path

2024-05-30 Thread Tanuj Khurana (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tanuj Khurana resolved PHOENIX-7319.

Resolution: Fixed

> Leverage Bloom Filters to improve performance on write path
> ---
>
> Key: PHOENIX-7319
> URL: https://issues.apache.org/jira/browse/PHOENIX-7319
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0
>Reporter: Tanuj Khurana
>Assignee: Tanuj Khurana
>Priority: Major
> Fix For: 5.2.1, 5.3.0
>
>
> On the write path if the write is an atomic upsert or if the table has one or 
> more indexes Phoenix first does a read. All these reads on the data table are 
> point lookups. Bloom Filters can help optimize the performance of these 
> lookups. 
>  * For new rows (inserts), the point lookup will not return any result. This 
> negative lookup is ideal for bloom filters as our read can return by just 
> checking the bloom filter block.
>  * For updates, new writes accumulate in the memstore and are then flushed 
> into new store files. A region can have multiple store files, and a read may 
> have to check several of them. The bloom filter helps eliminate store files 
> that cannot contain the row.





[jira] [Created] (PHOENIX-7320) Upgrade HBase 2.4 to 2.4.18

2024-05-28 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-7320:


 Summary: Upgrade HBase 2.4 to 2.4.18
 Key: PHOENIX-7320
 URL: https://issues.apache.org/jira/browse/PHOENIX-7320
 Project: Phoenix
  Issue Type: Improvement
  Components: core
Reporter: Istvan Toth


2.4.18 has just been released.
Update Phoenix to build with it.





[jira] [Created] (PHOENIX-7319) Leverage Bloom Filters to improve performance on write path

2024-05-27 Thread Tanuj Khurana (Jira)
Tanuj Khurana created PHOENIX-7319:
--

 Summary: Leverage Bloom Filters to improve performance on write 
path
 Key: PHOENIX-7319
 URL: https://issues.apache.org/jira/browse/PHOENIX-7319
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 5.2.0
Reporter: Tanuj Khurana
Assignee: Tanuj Khurana
 Fix For: 5.2.1, 5.3.0


On the write path if the write is an atomic upsert or if the table has one or 
more indexes Phoenix first does a read. All these reads on the data table are 
point lookups. Bloom Filters can help optimize the performance of these 
lookups. 
 * For new rows (inserts), the point lookup will not return any result. This 
negative lookup is ideal for bloom filters as our read can return by just 
checking the bloom filter block.
 * For updates, new writes accumulate in the memstore and are then flushed 
into new store files. A region can have multiple store files, and a read may 
have to check several of them. The bloom filter helps eliminate store files 
that cannot contain the row.
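The negative-lookup benefit can be illustrated with a toy Bloom filter (an illustrative sketch only; the bit-array size, hash count, and SHA-256-based hashing are assumptions, and HBase's implementation differs):

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: might_contain() returning False means the key
    is definitely absent, so the corresponding store file (or row) can
    be skipped without a disk read -- the cheap negative lookup that
    helps inserts and index maintenance on the write path."""

    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0                      # bit array packed into an int

    def _positions(self, key: bytes):
        # Derive k independent positions by salting the hash input.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(bytes([i]) + key).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, key: bytes):
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def might_contain(self, key: bytes) -> bool:
        # True may be a false positive; False is always definitive.
        return all(self.bits >> pos & 1 for pos in self._positions(key))
```

For a brand-new row, the point lookup hits the "definitely absent" case in every store file, so the read returns after checking only the bloom filter blocks.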





[jira] [Updated] (PHOENIX-7314) Enable CompactionScanner for flushes and minor compaction

2024-05-26 Thread Kadir Ozdemir (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kadir Ozdemir updated PHOENIX-7314:
---
Description: Phoenix TTL is currently used for major compaction only. When 
max lookback is enabled on a table, PhoenixTTL leads to retaining all cell 
versions until the next major compaction. This improvement is for enabling 
Phoenix TTL, more specifically CompactionScanner, for flushes and minor 
compaction to remove the live or deleted cell versions beyond the max lookback 
window during flushes and minor compactions.  (was: Phoenix TTL is currently 
used for major compaction only. When max lookback is enabled on a table, 
PhoenixTTL leads to retaining all cell versions until the next major 
compaction. This improvement is for enabling Phoenix TTL for flushes and minor 
compaction to remove the live or deleted cell versions beyond the max lookback 
window during flushes and minor compactions.)

> Enable CompactionScanner for flushes and minor compaction
> -
>
> Key: PHOENIX-7314
> URL: https://issues.apache.org/jira/browse/PHOENIX-7314
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0
>Reporter: Kadir Ozdemir
>Assignee: Kadir Ozdemir
>Priority: Major
>
> Phoenix TTL is currently used for major compaction only. When max lookback is 
> enabled on a table, PhoenixTTL leads to retaining all cell versions until the 
> next major compaction. This improvement is for enabling Phoenix TTL, more 
> specifically CompactionScanner, for flushes and minor compaction to remove 
> the live or deleted cell versions beyond the max lookback window during 
> flushes and minor compactions.





[jira] [Updated] (PHOENIX-7314) Enable CompactionScanner for flushes and minor compaction

2024-05-26 Thread Kadir Ozdemir (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kadir Ozdemir updated PHOENIX-7314:
---
Summary: Enable CompactionScanner for flushes and minor compaction  (was: 
Phoenix TTL for flushes and minor compaction)

> Enable CompactionScanner for flushes and minor compaction
> -
>
> Key: PHOENIX-7314
> URL: https://issues.apache.org/jira/browse/PHOENIX-7314
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0
>Reporter: Kadir Ozdemir
>Assignee: Kadir Ozdemir
>Priority: Major
>
> Phoenix TTL is currently used for major compaction only. When max lookback is 
> enabled on a table, PhoenixTTL leads to retaining all cell versions until the 
> next major compaction. This improvement is for enabling Phoenix TTL for 
> flushes and minor compaction to remove the live or deleted cell versions 
> beyond the max lookback window during flushes and minor compactions.





[jira] [Assigned] (PHOENIX-7314) Phoenix TTL for flushes and minor compaction

2024-05-26 Thread Kadir Ozdemir (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kadir Ozdemir reassigned PHOENIX-7314:
--

Assignee: Kadir Ozdemir

> Phoenix TTL for flushes and minor compaction
> 
>
> Key: PHOENIX-7314
> URL: https://issues.apache.org/jira/browse/PHOENIX-7314
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0
>Reporter: Kadir Ozdemir
>Assignee: Kadir Ozdemir
>Priority: Major
>
> Phoenix TTL is currently used for major compaction only. When max lookback is 
> enabled on a table, PhoenixTTL leads to retaining all cell versions until the 
> next major compaction. This improvement is for enabling Phoenix TTL for 
> flushes and minor compaction to remove the live or deleted cell versions 
> beyond the max lookback window during flushes and minor compactions.





[jira] [Updated] (PHOENIX-7314) Phoenix TTL for flushes and minor compaction

2024-05-26 Thread Kadir Ozdemir (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kadir Ozdemir updated PHOENIX-7314:
---
Description: Phoenix TTL is currently used for major compaction only. When 
max lookback is enabled on a table, PhoenixTTL leads to retaining all cell 
versions until the next major compaction. This improvement is for enabling 
Phoenix TTL for flushes and minor compaction to remove the live or deleted cell 
versions beyond the max lookback window during flushes and minor compactions.  
(was: Phoenix TTL currently is used for major compaction only. When max 
lookback is enabled on a table, PhoenixTTL leads to retaining all cell versions 
until the next major compaction. This improvement is for enabling Phoenix TTL 
for minor compaction to remove the cell versions beyond max lookback window 
during minor compactions.)

> Phoenix TTL for flushes and minor compaction
> 
>
> Key: PHOENIX-7314
> URL: https://issues.apache.org/jira/browse/PHOENIX-7314
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0
>Reporter: Kadir Ozdemir
>Priority: Major
>
> Phoenix TTL is currently used for major compaction only. When max lookback is 
> enabled on a table, PhoenixTTL leads to retaining all cell versions until the 
> next major compaction. This improvement is for enabling Phoenix TTL for 
> flushes and minor compaction to remove the live or deleted cell versions 
> beyond the max lookback window during flushes and minor compactions.





[jira] [Updated] (PHOENIX-7314) Phoenix TTL for flushes and minor compaction

2024-05-26 Thread Kadir Ozdemir (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kadir Ozdemir updated PHOENIX-7314:
---
Summary: Phoenix TTL for flushes and minor compaction  (was: Phoenix TTL 
for minor compaction)

> Phoenix TTL for flushes and minor compaction
> 
>
> Key: PHOENIX-7314
> URL: https://issues.apache.org/jira/browse/PHOENIX-7314
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0
>Reporter: Kadir Ozdemir
>Priority: Major
>
> Phoenix TTL currently is used for major compaction only. When max lookback is 
> enabled on a table, PhoenixTTL leads to retaining all cell versions until the 
> next major compaction. This improvement is for enabling Phoenix TTL for minor 
> compaction to remove the cell versions beyond max lookback window during 
> minor compactions.





[jira] [Assigned] (PHOENIX-7192) IDE shows errors on JSON comment

2024-05-23 Thread Ranganath Govardhanagiri (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranganath Govardhanagiri reassigned PHOENIX-7192:
-

Assignee: Ranganath Govardhanagiri

> IDE shows errors on JSON comment
> 
>
> Key: PHOENIX-7192
> URL: https://issues.apache.org/jira/browse/PHOENIX-7192
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Reporter: Istvan Toth
>Assignee: Ranganath Govardhanagiri
>Priority: Minor
>
> We have a few JSON files for tests, which include the ASF header.
> JSON does not allow comments, and my Eclipse sometimes flags this as an error.
> Remove the ASF header.





[jira] [Updated] (PHOENIX-7318) Support JSON_MODIFY in Upserts

2024-05-22 Thread Ranganath Govardhanagiri (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-7318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ranganath Govardhanagiri updated PHOENIX-7318:
--
Summary: Support JSON_MODIFY in Upserts  (was: Support JSON_MODIFY is 
Upserts)

> Support JSON_MODIFY in Upserts
> --
>
> Key: PHOENIX-7318
> URL: https://issues.apache.org/jira/browse/PHOENIX-7318
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ranganath Govardhanagiri
>Assignee: Ranganath Govardhanagiri
>Priority: Major
> Fix For: 5.3.0
>
>
> JSON_MODIFY implementation targeted as part of PHOENIX-7072 only supports 
> Atomic Upserts. This Jira is to support it in regular Upsert statements. A 
> POC initially tried had issues with auto commit, so this is a separate work 
> item to rethink the implementation.




