[jira] [Commented] (PHOENIX-6978) Redesign Phoenix TTL for Views

2024-07-16 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17866411#comment-17866411
 ] 

Istvan Toth commented on PHOENIX-6978:
--

This causes _ViewTTLIT_ and _ViewTTLWithLongViewIndexEnabledIT_ to hang with 
the HBase 2.6 profile, [~jisaac].

If we are lucky, then the fix from PHOENIX-7339 will also work for those tests.

> Redesign Phoenix TTL for Views
> --
>
> Key: PHOENIX-6978
> URL: https://issues.apache.org/jira/browse/PHOENIX-6978
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
>
> With Phoenix TTL for views (PHOENIX-3725), the basic gist was that TTL should 
> be a Phoenix view-level setting instead of a table-level setting as 
> implemented in HBase. More details on the old design are here ([Phoenix TTL 
> old design 
> doc|https://docs.google.com/document/d/1aZWhJQCARBVt9VIXNgINCB8O0fk2GucxXeu7472SVL8/edit#heading=h.kpf13qig3vdl]).
> Both HBase TTL and Phoenix TTL rely on applying expiration logic during the 
> scanning phase when serving query results, and on applying deletion logic 
> when pruning the rows from the store. In HBase, the pruning is achieved 
> during the compaction phase.
> The initial design and implementation of Phoenix TTL for views used the MR 
> framework to run delete jobs to prune away the expired rows. We knew this was 
> a sub-optimal solution, since it required managing and monitoring MR jobs. It 
> would also have introduced additional delete markers, which would have 
> temporarily added more rows (the delete markers themselves) and made scans 
> less performant.
> Using the HBase compaction framework instead to prune away the expired rows 
> would fit nicely into the existing architecture and would be as efficient as 
> pruning rows with HBase TTL.
> This jira proposes a redesign of Phoenix TTL for views using PHOENIX-6888 and 
> PHOENIX-4555.
> [New Design 
> doc|https://docs.google.com/document/d/1D2B0G_sVe9eE66bk-sxUfSgoGtQCvD7xBZRxZz-Q1TM/edit]
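
The scan-time expiration vs. compaction-time pruning split described above can be sketched as follows. This is a minimal illustration with hypothetical names and data structures, not Phoenix's actual implementation:

```python
# Minimal sketch of TTL handling; all names here are hypothetical.
TTL_SECONDS = 3600  # example view-level TTL


def is_expired(cell_ts_ms, now_ms, ttl_seconds=TTL_SECONDS):
    """A cell is expired once its age exceeds the TTL."""
    return now_ms - cell_ts_ms > ttl_seconds * 1000


def scan(cells, now_ms):
    # Expiration logic at scan time: expired cells are masked from
    # query results, but are still physically present in the store.
    return [c for c in cells if not is_expired(c["ts"], now_ms)]


def compact(cells, now_ms):
    # Pruning at compaction time: expired cells are physically dropped,
    # with no extra delete markers (unlike the MR delete-job approach).
    return [c for c in cells if not is_expired(c["ts"], now_ms)]
```

The two functions apply the same predicate; the difference is when they run and whether the removal is logical (scan) or physical (compaction).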



--
This message was sent by Atlassian Jira (v8.20.10#820010)


[jira] [Commented] (PHOENIX-7361) Build PQS with Phoenix 5.2.0

2024-07-15 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17866085#comment-17866085
 ] 

Istvan Toth commented on PHOENIX-7361:
--

Curator version and Curator test problem fixes taken from OMID.

> Build PQS with Phoenix 5.2.0
> 
>
> Key: PHOENIX-7361
> URL: https://issues.apache.org/jira/browse/PHOENIX-7361
> Project: Phoenix
>  Issue Type: Task
>  Components: queryserver
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> Now that 5.2.0 is out, we can update the PQS build to use it.





[jira] [Commented] (PHOENIX-7359) BackwardCompatibilityIT throws NPE with Hbase 2.6 profile

2024-07-15 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17866035#comment-17866035
 ] 

Istvan Toth commented on PHOENIX-7359:
--

This is as simple as adding an empty entry for HBase 2.6 into the config file.

> BackwardCompatibilityIT throws NPE with Hbase 2.6 profile
> -
>
> Key: PHOENIX-7359
> URL: https://issues.apache.org/jira/browse/PHOENIX-7359
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Reporter: Viraj Jasani
>Assignee: Istvan Toth
>Priority: Major
>  Labels: test
>
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.phoenix.end2end.BackwardCompatibilityTestUtil.computeClientVersions(BackwardCompatibilityTestUtil.java:134)
>   at 
> org.apache.phoenix.end2end.BackwardCompatibilityIT.data(BackwardCompatibilityIT.java:110)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> ...
> {noformat}





[jira] [Commented] (PHOENIX-7357) New variable length binary data type: VARBINARY_ENCODED

2024-07-12 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17865440#comment-17865440
 ] 

Istvan Toth commented on PHOENIX-7357:
--

Wouldn't it be easier to keep zero/xff as the separator, and instead escape 
zero bytes?

i.e. 

use x01 as the escape character, and encode x00 as x0101 and x01 as x0102?
I guess that this will result in an unexpected sort order, but I suspect that 
there is no way around that.
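
A sketch of the escaping scheme suggested above (x01 as the escape character, x00 encoded as x01 x01, x01 encoded as x01 x02). This is only an illustration of the idea, not Phoenix code:

```python
SEP = b"\x00"  # the separator byte, kept as-is
ESC = 0x01     # the escape character


def encode(value: bytes) -> bytes:
    """Escape a binary value so it never contains the \x00 separator."""
    out = bytearray()
    for b in value:
        if b == 0x00:
            out += bytes([ESC, 0x01])  # zero byte -> x01 x01
        elif b == ESC:
            out += bytes([ESC, 0x02])  # escape byte itself -> x01 x02
        else:
            out.append(b)
    return bytes(out)


def decode(encoded: bytes) -> bytes:
    """Invert the escaping."""
    out = bytearray()
    it = iter(encoded)
    for b in it:
        if b == ESC:
            nxt = next(it)
            out.append(0x00 if nxt == 0x01 else ESC)
        else:
            out.append(b)
    return bytes(out)
```

Since the encoded form never contains x00, values can be split on the separator unambiguously; as the comment notes, the trade-off is that the escaped bytes may not sort identically to the raw bytes.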


> New variable length binary data type: VARBINARY_ENCODED
> ---
>
> Key: PHOENIX-7357
> URL: https://issues.apache.org/jira/browse/PHOENIX-7357
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 5.3.0
>
>
> As of today, Phoenix provides several variable length as well as fixed length 
> data types. One of the variable length data types is VARBINARY, a variable 
> length binary blob. Using VARBINARY as the only primary key column is 
> effectively the same as using the raw HBase row key.
> HBase provides a single row key. Any client application that requires more 
> than one primary key column must therefore handle combining the column values 
> into a single binary row key itself. Phoenix addresses this with composite 
> primary keys, which can contain any number of primary key columns, and it 
> also allows adding new nullable primary key columns to an existing composite 
> primary key. Since Phoenix uses HBase as its backing store, it internally 
> concatenates the binary encoded value of each primary key column and uses the 
> concatenated binary value as the HBase row key. To concatenate and retrieve 
> individual primary key values efficiently, Phoenix implements two approaches:
>  # For fixed length columns: The length of the given column is determined by 
> the maximum length of the column. As part of the read flow, while iterating 
> through the row key, a fixed number of bytes is retrieved for each column. 
> While writing, if the original encoded value of the given column has fewer 
> bytes, additional null bytes (\x00) are padded until the fixed length is 
> filled up. Hence, for smaller values, we end up wasting some space.
>  # For variable length columns: Since we cannot know the length of a variable 
> length value in advance, a separator (terminator) byte is used. Phoenix uses 
> the null byte (\x00) as the separator. As of today, VARCHAR is the most 
> commonly used variable length data type, and since VARCHAR represents a 
> String, the null byte is not a valid String character. Hence, it can be 
> effectively used to determine where the given VARCHAR value terminates.
>  
> The null byte (\x00) works fine as a separator for VARCHAR. However, it 
> cannot be used as a separator byte for VARBINARY, because VARBINARY can 
> contain any binary blob value. Due to this, Phoenix has restrictions for the 
> VARBINARY type: 
>  
>  # It can only be used as the last part of the composite primary key.
>  # It cannot be used as a DESC order primary key column.
>  
> Using the VARBINARY data type in an earlier portion of the composite primary 
> key is a valid use case, as is using multiple VARBINARY primary key columns. 
> After all, Phoenix lets users define multiple primary key columns.
> Besides, using a secondary index on a data table means that the composite 
> primary key of the secondary index table includes: 
>   …  
>   … 
>  
> As primary key columns are appended to the secondary index columns, one 
> cannot create a secondary index on any VARBINARY column.
> The proposal of this Jira is to introduce a new data type, 
> {*}VARBINARY_ENCODED{*}, which has no restriction against being used as a 
> composite primary key prefix or as a DESC ordered column.
> This means we need to effectively distinguish where the variable length 
> binary data terminates in the absence of fixed length information.
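
The core problem can be demonstrated in a few lines: the \x00-separator scheme round-trips for VARCHAR-like values, but not for arbitrary binary values. This is a sketch of the concept, not Phoenix's actual row key encoder:

```python
SEP = b"\x00"  # the separator byte Phoenix uses for variable length PK columns


def compose_row_key(pk_values):
    """Join variable-length PK values with the null-byte separator."""
    return SEP.join(pk_values)


def split_row_key(row_key):
    """Recover the individual PK values by splitting on the separator."""
    return row_key.split(SEP)


# For VARCHAR-like values this round-trips, since strings never contain \x00:
assert split_row_key(compose_row_key([b"abc", b"de"])) == [b"abc", b"de"]

# For VARBINARY it does not: a \x00 inside a value is indistinguishable from
# the separator, so the key splits in the wrong places.
parts = split_row_key(compose_row_key([b"a\x00b", b"c"]))
assert parts != [b"a\x00b", b"c"]  # the round trip fails
```

This ambiguity is exactly what VARBINARY_ENCODED has to resolve: the encoded form must let the decoder find the end of each binary value without fixed length information.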





[jira] [Commented] (PHOENIX-7353) Disable remote procedure delay in TransformToolIT

2024-07-10 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17864630#comment-17864630
 ] 

Istvan Toth commented on PHOENIX-7353:
--

This doesn't work.
I must have forgotten to enable the 2.6 profile when testing.

> Disable remote procedure delay in TransformToolIT
> -
>
> Key: PHOENIX-7353
> URL: https://issues.apache.org/jira/browse/PHOENIX-7353
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.2.1, 5.3.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>  Labels: test
> Fix For: 5.2.1, 5.3.0
>
>
> Same issue as PHOENIX-7339 , we just need to apply the same fix to this test.





[jira] [Comment Edited] (PHOENIX-6719) Duplicate Salt Columns in Schema Registry after ALTER

2024-07-09 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17864143#comment-17864143
 ] 

Istvan Toth edited comment on PHOENIX-6719 at 7/9/24 11:27 AM:
---

-This bug is present as far back as at least 5.0.-
-We should probably backport the fix to the 5.1 branch.-

On second look, this bug seems to have been present on 5.0, which used 
table.getColumns() to get the columns, but not on 5.1, which uses 
system.catalog directly. 


was (Author: stoty):
This bug is present as far back as at least 5.0.
We should probably backport the fix to the 5.1 branch.


> Duplicate Salt Columns in Schema Registry after ALTER
> -
>
> Key: PHOENIX-6719
> URL: https://issues.apache.org/jira/browse/PHOENIX-6719
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.2.0
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Fix For: 5.2.0
>
>
> When a table or view is change-detection enabled, we have to update the 
> schema registry each time the schema is ALTERed. This is done by calculating 
> the old PTable and applying the changed metadata edits to create a new 
> PTable, which gets exported to the schema registry.
> There's a bug in this calculation logic for salted tables, where the virtual 
> salt column is on the old PTable, but gets added by the Builder logic of the 
> new PTable. The result is an incorrect PTable (and schema) with an extra salt 
> column. 
> I discovered this while testing on a draft of PHOENIX-5517. 





[jira] [Commented] (PHOENIX-6719) Duplicate Salt Columns in Schema Registry after ALTER

2024-07-09 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17864143#comment-17864143
 ] 

Istvan Toth commented on PHOENIX-6719:
--

This bug is present as far back as at least 5.0.
We should probably backport the fix to the 5.1 branch.


> Duplicate Salt Columns in Schema Registry after ALTER
> -
>
> Key: PHOENIX-6719
> URL: https://issues.apache.org/jira/browse/PHOENIX-6719
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.2.0
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Fix For: 5.2.0
>
>
> When a table or view is change-detection enabled, we have to update the 
> schema registry each time the schema is ALTERed. This is done by calculating 
> the old PTable and applying the changed metadata edits to create a new 
> PTable, which gets exported to the schema registry.
> There's a bug in this calculation logic for salted tables, where the virtual 
> salt column is on the old PTable, but gets added by the Builder logic of the 
> new PTable. The result is an incorrect PTable (and schema) with an extra salt 
> column. 
> I discovered this while testing on a draft of PHOENIX-5517. 





[jira] [Commented] (PHOENIX-7264) Admin.flush() hangs in HBase 3 while the clock is stopped

2024-07-08 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17863786#comment-17863786
 ] 

Istvan Toth commented on PHOENIX-7264:
--

[~vjasani]'s fix is expected to work for 3.0 as well.
Will need to test.

> Admin.flush() hangs in HBase 3 while the clock is stopped
> -
>
> Key: PHOENIX-7264
> URL: https://issues.apache.org/jira/browse/PHOENIX-7264
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> Several tests using EnvironmentEdgeManager are hanging and/or failing with 
> HBase 3.
> I don't really know how to fix them, as the tests break if we let the clock 
> run.
> HBase doesn't seem to care about this use case, so we probably just have to 
> disable these tests on HBase 3.





[jira] [Commented] (PHOENIX-7215) Support explicit types for all literals

2024-07-08 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17863696#comment-17863696
 ] 

Istvan Toth commented on PHOENIX-7215:
--

Yes, based on the above it sounds like a change in the grammar is needed.

> Support explicit types for all literals
> ---
>
> Key: PHOENIX-7215
> URL: https://issues.apache.org/jira/browse/PHOENIX-7215
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Istvan Toth
>Assignee: Ranganath Govardhanagiri
>Priority: Minor
>
> Phoenix accepts the standard literal types for date/time types.
> _select TIMESTAMP '2000-01-01';_ works.
> However, according to the standard,
> _select INTEGER 1000;_ and _select INTEGER '1000';_ should also work, but 
> they don't.





[jira] [Comment Edited] (PHOENIX-7339) HBase flushes with custom clock needs to disable remote procedure delay

2024-07-08 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17863687#comment-17863687
 ] 

Istvan Toth edited comment on PHOENIX-7339 at 7/8/24 7:31 AM:
--

It's org.apache.phoenix.end2end.transform.TransformToolIT, [~vjasani].

The rest seem to be OK now (or at least not hanging).


was (Author: stoty):
It's org.apache.phoenix.end2end.transform.TransformToolIT , [~vjasani].

> HBase flushes with custom clock needs to disable remote procedure delay
> ---
>
> Key: PHOENIX-7339
> URL: https://issues.apache.org/jira/browse/PHOENIX-7339
> Project: Phoenix
>  Issue Type: Test
>Reporter: Istvan Toth
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 5.2.1, 5.3.0
>
>
> The Job takes ~3 hours with 2.4 , ~3.5 hours with 2.5 and is interrupted 
> after 5 hours with 2.6.
> While I did not see OOM errors, this could still be GC thrashing, as newer 
> HBase / Hadoop version use more heap.





[jira] [Commented] (PHOENIX-7339) HBase flushes with custom clock needs to disable remote procedure delay

2024-07-08 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17863687#comment-17863687
 ] 

Istvan Toth commented on PHOENIX-7339:
--

It's org.apache.phoenix.end2end.transform.TransformToolIT, [~vjasani].

> HBase flushes with custom clock needs to disable remote procedure delay
> ---
>
> Key: PHOENIX-7339
> URL: https://issues.apache.org/jira/browse/PHOENIX-7339
> Project: Phoenix
>  Issue Type: Test
>Reporter: Istvan Toth
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 5.2.1, 5.3.0
>
>
> The Job takes ~3 hours with 2.4 , ~3.5 hours with 2.5 and is interrupted 
> after 5 hours with 2.6.
> While I did not see OOM errors, this could still be GC thrashing, as newer 
> HBase / Hadoop version use more heap.





[jira] [Commented] (PHOENIX-7339) Multibranch Jenkins Job takes more than 5 hours with Hbase 2.6

2024-06-21 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17856830#comment-17856830
 ] 

Istvan Toth commented on PHOENIX-7339:
--

I disabled at least some of these tests in my HBase 3 WIP patch; I expect that 
those changes will help with 2.6 as well.

https://github.com/apache/phoenix/pull/1815

> Multibranch Jenkins Job takes more than 5 hours with Hbase 2.6
> --
>
> Key: PHOENIX-7339
> URL: https://issues.apache.org/jira/browse/PHOENIX-7339
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Istvan Toth
>Priority: Major
>
> The Job takes ~3 hours with 2.4 , ~3.5 hours with 2.5 and is interrupted 
> after 5 hours with 2.6.
> While I did not see OOM errors, this could still be GC thrashing, as newer 
> HBase / Hadoop version use more heap.





[jira] [Commented] (PHOENIX-7339) Multibranch Jenkins Job takes more than 5 hours with Hbase 2.6

2024-06-21 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17856829#comment-17856829
 ] 

Istvan Toth commented on PHOENIX-7339:
--

FYI [~vjasani].

> Multibranch Jenkins Job takes more than 5 hours with Hbase 2.6
> --
>
> Key: PHOENIX-7339
> URL: https://issues.apache.org/jira/browse/PHOENIX-7339
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Istvan Toth
>Priority: Major
>
> The Job takes ~3 hours with 2.4 , ~3.5 hours with 2.5 and is interrupted 
> after 5 hours with 2.6.
> While I did not see OOM errors, this could still be GC thrashing, as newer 
> HBase / Hadoop version use more heap.





[jira] [Commented] (PHOENIX-7339) Multibranch Jenkins Job takes more than 5 hours with Hbase 2.6

2024-06-21 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17856827#comment-17856827
 ] 

Istvan Toth commented on PHOENIX-7339:
--

I suspect that this is caused by some operations hanging on HBase 2.6 while 
the clock is stopped via EnvironmentEdgeManager.

I saw similar issues when working on HBase 3.0.


> Multibranch Jenkins Job takes more than 5 hours with Hbase 2.6
> --
>
> Key: PHOENIX-7339
> URL: https://issues.apache.org/jira/browse/PHOENIX-7339
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Istvan Toth
>Priority: Major
>
> The Job takes ~3 hours with 2.4 , ~3.5 hours with 2.5 and is interrupted 
> after 5 hours with 2.6.
> While I did not see OOM errors, this could still be GC thrashing, as newer 
> HBase / Hadoop version use more heap.





[jira] [Commented] (PHOENIX-7339) Multibranch Jenkins Job takes more than 5 hours with Hbase 2.6

2024-06-21 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17856828#comment-17856828
 ] 

Istvan Toth commented on PHOENIX-7339:
--

org.apache.phoenix.end2end.MaxLookbackIT should finish in ~60 seconds, so it's 
probably easiest to start the investigation with that one.


> Multibranch Jenkins Job takes more than 5 hours with Hbase 2.6
> --
>
> Key: PHOENIX-7339
> URL: https://issues.apache.org/jira/browse/PHOENIX-7339
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Istvan Toth
>Priority: Major
>
> The Job takes ~3 hours with 2.4 , ~3.5 hours with 2.5 and is interrupted 
> after 5 hours with 2.6.
> While I did not see OOM errors, this could still be GC thrashing, as newer 
> HBase / Hadoop version use more heap.





[jira] [Commented] (PHOENIX-7339) Multibranch Jenkins Job takes more than 5 hours with Hbase 2.6

2024-06-21 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17856825#comment-17856825
 ] 

Istvan Toth commented on PHOENIX-7339:
--

Matches these error messages:


{noformat}
ERROR [ProcedureDispatcherTimeoutThread] 
procedure2.RemoteProcedureDispatcher$TimeoutExecutorThread(323): DelayQueue for 
RemoteProcedureDispatcher is not empty when timed waiting elapsed. If this is 
repeated consistently, it means no element is getting expired from the queue 
and it might freeze the system. Queue: 
[containedObject=stoty-precision-5570,41707,1718986759014, 
timeout=1718986835147, delay=150, operations=[pid=161, ppid=160, 
state=RUNNABLE; org.apache.hadoop.hbase.master.procedure.FlushRegionProcedure]]
{noformat}




{noformat}
stoty@stoty-Precision-5570:~/workspaces/apache-phoenix/phoenix/phoenix-core/target/failsafe-reports (master) $ grep -l "DelayQueue for RemoteProcedureDispatcher is not empty when " *
org.apache.phoenix.end2end.IndexRepairRegionScannerIT-output.txt
org.apache.phoenix.end2end.IndexScrutinyWithMaxLookbackIT-output.txt
org.apache.phoenix.end2end.MaxLookbackExtendedIT-output.txt
org.apache.phoenix.end2end.MaxLookbackIT-output.txt
org.apache.phoenix.end2end.TableTTLIT-output.txt
org.apache.phoenix.end2end.transform.TransformToolIT-output.txt
{noformat}



> Multibranch Jenkins Job takes more than 5 hours with Hbase 2.6
> --
>
> Key: PHOENIX-7339
> URL: https://issues.apache.org/jira/browse/PHOENIX-7339
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Istvan Toth
>Priority: Major
>
> The Job takes ~3 hours with 2.4 , ~3.5 hours with 2.5 and is interrupted 
> after 5 hours with 2.6.
> While I did not see OOM errors, this could still be GC thrashing, as newer 
> HBase / Hadoop version use more heap.





[jira] [Commented] (PHOENIX-7339) Multibranch Jenkins Job takes more than 5 hours with Hbase 2.6

2024-06-21 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17856822#comment-17856822
 ] 

Istvan Toth commented on PHOENIX-7339:
--

FYI [~RichardAntal]


> Multibranch Jenkins Job takes more than 5 hours with Hbase 2.6
> --
>
> Key: PHOENIX-7339
> URL: https://issues.apache.org/jira/browse/PHOENIX-7339
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Istvan Toth
>Priority: Major
>
> The Job takes ~3 hours with 2.4 , ~3.5 hours with 2.5 and is interrupted 
> after 5 hours with 2.6.
> While I did not see OOM errors, this could still be GC thrashing, as newer 
> HBase / Hadoop version use more heap.





[jira] [Comment Edited] (PHOENIX-7339) Multibranch Jenkins Job takes more than 5 hours with Hbase 2.6

2024-06-21 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17856469#comment-17856469
 ] 

Istvan Toth edited comment on PHOENIX-7339 at 6/21/24 4:57 PM:
---

On my Alder Lake i7, the phoenix-core test suite takes exactly 3 hours with 
HBase 2.5.


was (Author: stoty):
On my Alder Lake i7, the phoenix-core test suite takes exactly 3 hours with 
HBase 2.6.

> Multibranch Jenkins Job takes more than 5 hours with Hbase 2.6
> --
>
> Key: PHOENIX-7339
> URL: https://issues.apache.org/jira/browse/PHOENIX-7339
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Istvan Toth
>Priority: Major
>
> The Job takes ~3 hours with 2.4 , ~3.5 hours with 2.5 and is interrupted 
> after 5 hours with 2.6.
> While I did not see OOM errors, this could still be GC thrashing, as newer 
> HBase / Hadoop version use more heap.





[jira] [Commented] (PHOENIX-7339) Multibranch Jenkins Job takes more than 5 hours with Hbase 2.6

2024-06-21 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17856821#comment-17856821
 ] 

Istvan Toth commented on PHOENIX-7339:
--

Looks like several tests are hanging, leaving only one or two actual threads 
for running tests:

These seem to be the hanging tests:

org.apache.phoenix.end2end.IndexRepairRegionScannerIT
org.apache.phoenix.end2end.IndexScrutinyWithMaxLookbackIT
org.apache.phoenix.end2end.MaxLookbackExtendedIT
org.apache.phoenix.end2end.MaxLookbackIT
org.apache.phoenix.end2end.TableTTLIT


> Multibranch Jenkins Job takes more than 5 hours with Hbase 2.6
> --
>
> Key: PHOENIX-7339
> URL: https://issues.apache.org/jira/browse/PHOENIX-7339
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Istvan Toth
>Priority: Major
>
> The Job takes ~3 hours with 2.4 , ~3.5 hours with 2.5 and is interrupted 
> after 5 hours with 2.6.
> While I did not see OOM errors, this could still be GC thrashing, as newer 
> HBase / Hadoop version use more heap.





[jira] [Commented] (PHOENIX-7339) Multibranch Jenkins Job takes more than 5 hours with Hbase 2.6

2024-06-20 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17856469#comment-17856469
 ] 

Istvan Toth commented on PHOENIX-7339:
--

On my Alder Lake i7, the phoenix-core test suite takes exactly 3 hours with 
HBase 2.6.

> Multibranch Jenkins Job takes more than 5 hours with Hbase 2.6
> --
>
> Key: PHOENIX-7339
> URL: https://issues.apache.org/jira/browse/PHOENIX-7339
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Istvan Toth
>Priority: Major
>
> The Job takes ~3 hours with 2.4 , ~3.5 hours with 2.5 and is interrupted 
> after 5 hours with 2.6.
> While I did not see OOM errors, this could still be GC thrashing, as newer 
> HBase / Hadoop version use more heap.





[jira] [Commented] (PHOENIX-7339) Multibranch Jenkins Job takes more than 5 hours with Hbase 2.6

2024-06-20 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17856429#comment-17856429
 ] 

Istvan Toth commented on PHOENIX-7339:
--

Unfortunately, there is no obvious way to test this without committing a change.

Perhaps the easiest would be to add a test branch to the job.

> Multibranch Jenkins Job takes more than 5 hours with Hbase 2.6
> --
>
> Key: PHOENIX-7339
> URL: https://issues.apache.org/jira/browse/PHOENIX-7339
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Istvan Toth
>Priority: Major
>
> The Job takes ~3 hours with 2.4 , ~3.5 hours with 2.5 and is interrupted 
> after 5 hours with 2.6.
> While I did not see OOM errors, this could still be GC thrashing, as newer 
> HBase / Hadoop version use more heap.





[jira] [Commented] (PHOENIX-7130) Support skipping of shade sources jar creation

2024-06-19 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17856250#comment-17856250
 ] 

Istvan Toth commented on PHOENIX-7130:
--

This was also the case on 5.1 already.

> Support skipping of shade sources jar creation
> --
>
> Key: PHOENIX-7130
> URL: https://issues.apache.org/jira/browse/PHOENIX-7130
> Project: Phoenix
>  Issue Type: Improvement
>  Components: phoenix
>Affects Versions: 5.2.1, 5.3.0
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Minor
>  Labels: build
> Fix For: 5.2.1, 5.3.0
>
>
> Shade sources jar creation takes a lot of time, and we do not want to do this 
> for every dev build (in our internal Phoenix Jenkins). Hence, this Jira will 
> add a profile to optionally disable shade sources jar creation by running 
> with '-PskipShadeSources'.





[jira] [Commented] (PHOENIX-7333) Add HBase 2.6 profile to multibranch Jenkins job

2024-06-19 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17856228#comment-17856228
 ] 

Istvan Toth commented on PHOENIX-7333:
--

Committed to master.
Thanks for the review, [~richardantal].

I am assigning this to you, [~richardantal]; please backport it to the 
branches where you backport 2.6 support.

> Add HBase 2.6 profile to multibranch Jenkins job
> 
>
> Key: PHOENIX-7333
> URL: https://issues.apache.org/jira/browse/PHOENIX-7333
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Minor
>  Labels: test
> Fix For: 5.3.0
>
>






[jira] [Commented] (PHOENIX-7172) Support HBase 2.6

2024-06-18 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17855925#comment-17855925
 ] 

Istvan Toth commented on PHOENIX-7172:
--

Thank you [~richardantal].

We definitely want this in 5.2 too. (The original plan was to release 5.2.1 
with 2.6 support.)

We could also backport this to 5.1, but I don't remember whether there was a 
consensus about that.

> Support HBase 2.6
> -
>
> Key: PHOENIX-7172
> URL: https://issues.apache.org/jira/browse/PHOENIX-7172
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.2.0, 5.1.3
>Reporter: Istvan Toth
>Assignee: Richárd Antal
>Priority: Major
> Fix For: 5.3.0
>
>
> HBase 2.6.0 release work is ongoing.
> Make sure Phoenix works with it.





[jira] [Commented] (PHOENIX-7331) Fix incompatibilities with HBASE-28644

2024-06-17 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17855642#comment-17855642
 ] 

Istvan Toth commented on PHOENIX-7331:
--

Yes: org.apache.phoenix.query.DelegateCell, which is only used in tests, and 
org.apache.phoenix.hbase.index.OffsetCell, which is used in local secondary 
indexes.

Both of these are essentially wrappers around existing cells.

> Fix incompatibilities with HBASE-28644
> --
>
> Key: PHOENIX-7331
> URL: https://issues.apache.org/jira/browse/PHOENIX-7331
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Reporter: Istvan Toth
>Priority: Critical
>
> These are the errors:
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.11.0:compile 
> (default-compile) on project phoenix-core-client: Compilation failure: 
> Compilation failure:
> [ERROR] 
> /home/stoty/workspaces/apache-phoenix/phoenix/phoenix-core-client/src/main/java/org/apache/phoenix/util/PhoenixKeyValueUtil.java:[262,93]
>  incompatible types: java.util.List<org.apache.hadoop.hbase.Cell> cannot be 
> converted to java.util.List<org.apache.hadoop.hbase.ExtendedCell>
> [ERROR] 
> /home/stoty/workspaces/apache-phoenix/phoenix/phoenix-core-client/src/main/java/org/apache/phoenix/hbase/index/util/IndexManagementUtil.java:[248,69]
>  incompatible types: java.util.List<org.apache.hadoop.hbase.Cell> cannot be 
> converted to java.util.List<org.apache.hadoop.hbase.ExtendedCell>
> In IndexManagementUtil we can simply change the signature to Cell.
> In PhoenixKeyValueUtil, we need to check for ExtendedCell, and clone the 
> cell if it is not one.
> I'm pretty sure that there is already a utility method somewhere for this.





[jira] [Commented] (PHOENIX-6670) Optimize PhoenixKeyValueUtil#maybeCopyCell

2024-06-16 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17855492#comment-17855492
 ] 

Istvan Toth commented on PHOENIX-6670:
--

Another important difference is that KVs (really ExtendedCells) are modifiable, 
while Cells are effectively final.
Phoenix does use this feature in some places.

> Optimize PhoenixKeyValueUtil#maybeCopyCell
> --
>
> Key: PHOENIX-6670
> URL: https://issues.apache.org/jira/browse/PHOENIX-6670
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.2.0
>Reporter: Istvan Toth
>Priority: Major
>
> PhoenixKeyValueUtil#maybeCopyCell copies every cell that is not a KeyValue to 
> a KeyValue.
> Its point is to copy off-heap cells to the heap, so that the values are kept 
> after the backing ByteBuffer is freed and we avoid use-after-free errors.
> However, checking whether a Cell is a KeyValue instance is a poor indicator of 
> that: there are many Cell types that are not KeyValues but are stored on the 
> heap and do not need to be copied.
> Copying only ByteBufferExtendedCell instances instead would potentially be a 
> significant performance gain.





[jira] [Commented] (PHOENIX-6373) Schema changes that require table re-writes can be supported (Online data format changes)

2024-06-13 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17854660#comment-17854660
 ] 

Istvan Toth commented on PHOENIX-6373:
--

Can we close this ticket, [~giskender] ?

> Schema changes that require table re-writes can be supported (Online data 
> format changes)
> -
>
> Key: PHOENIX-6373
> URL: https://issues.apache.org/jira/browse/PHOENIX-6373
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
>
> Today, using the ALTER TABLE or ALTER INDEX commands, the user can make 
> certain changes to the schema. For example, changing certain table properties 
> like TTL and immutability, adding nullable columns/dropping non-pk columns, 
> and certain index state changes are allowed. None of the allowed changes 
> require the table to be re-written. Most of the changes become available as 
> soon as the ALTER command returns (e.g. index disable, TTL), but some might 
> take time, and the syntax lets you specify async (e.g. rebuild index); 
> depending on the client cache settings, some changes never make it to the 
> client (e.g. a select * run from a client never sees the new column, since 
> its schema cache is not updated).
> If the user wants to change schema properties that require table re-writes, 
> the operation is blocked and ALTER fails. Phoenix lacks the ability to change 
> some table schemas and attributes, such as changing the row key (primary 
> keys), the type of a column, the table storage format, the column encoding, 
> etc. There is no way to make these changes with no or very minimal service 
> interruption.
> Design doc link: 
> [https://docs.google.com/document/d/1D24zRETMEetXvc3MSZj9WKYeUnooQX5BLv6gSDEmwfk/edit?usp=sharing]





[jira] [Commented] (PHOENIX-7325) Connectors does not create source jars

2024-06-11 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17853917#comment-17853917
 ] 

Istvan Toth commented on PHOENIX-7325:
--

This has probably been fixed in the meantime, when we removed Phoenix 4.x 
support.
Still, it won't hurt to check.

> Connectors does not create source jars
> --
>
> Key: PHOENIX-7325
> URL: https://issues.apache.org/jira/browse/PHOENIX-7325
> Project: Phoenix
>  Issue Type: Improvement
>  Components: connectors
>Reporter: Istvan Toth
>Priority: Major
>
> When connectors is built, source jars are not built for at least some of 
> the packages.
> Make sure that all packages which have Java code generate a source jar.





[jira] [Comment Edited] (PHOENIX-7132) HBase cannot load ClientRpcControllerFactory when adding connector with the --jar option to Spark

2024-06-10 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17853889#comment-17853889
 ] 

Istvan Toth edited comment on PHOENIX-7132 at 6/11/24 3:47 AM:
---

Report of similar issue when using --jars:

It is reported fixed when switching to *.extraclasspath.*
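As a sketch of that workaround (the jar path and job name below are assumptions, not taken from the report), switching from --jars to the extraClassPath settings looks like this:

```shell
# Hypothetical example: put the shaded Phoenix connector on the driver and
# executor extraClassPath instead of passing it with --jars, so that classes
# such as org.apache.phoenix.filter.ColumnProjectionFilter are visible when
# HBase deserializes the scan. Paths are illustrative.
spark-submit \
  --conf spark.driver.extraClassPath=/opt/phoenix/phoenix5-spark-shaded.jar \
  --conf spark.executor.extraClassPath=/opt/phoenix/phoenix5-spark-shaded.jar \
  my_phoenix_job.py
```

The difference matters because extraClassPath prepends the jar to the JVM's application classloader, while --jars distributes it through Spark's mutable URL classloader, which HBase's own class-lookup code may not consult.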

{noformat}
(REDACTED executor 2): org.apache.hadoop.hbase.DoNotRetryIOException: 
java.lang.ClassNotFoundException: 
org.apache.phoenix.filter.ColumnProjectionFilter
at 
org.apache.hadoop.hbase.protobuf.ProtobufUtil.toFilter(ProtobufUtil.java:1368)
at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toScan(ProtobufUtil.java:961)
at 
org.apache.phoenix.mapreduce.PhoenixInputSplit.readFields(PhoenixInputSplit.java:91)
at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:285)
at org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:77)
at 
org.apache.spark.SerializableWritable.$anonfun$readObject$1(SerializableWritable.scala:45)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1471)
at 
org.apache.spark.SerializableWritable.readObject(SerializableWritable.scala:41)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1184)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2321)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2212)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1668)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2430)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2354)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2212)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1668)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2430)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2354)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2212)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1668)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:502)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:460)
at 
org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:87)
at 
org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:129)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:510)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.ClassNotFoundException: 
org.apache.phoenix.filter.ColumnProjectionFilter
at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at 
org.apache.hadoop.hbase.util.DynamicClassLoader.loadClass(DynamicClassLoader.java:135)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at 
org.apache.hadoop.hbase.protobuf.ProtobufUtil.toFilter(ProtobufUtil.java:1359)
... 32 more
{noformat}



was (Author: stoty):
Report of similar issue when using --jars:


{noformat}
(REDACTED executor 2): org.apache.hadoop.hbase.DoNotRetryIOException: 
java.lang.ClassNotFoundException: 
org.apache.phoenix.filter.ColumnProjectionFilter
at 
org.apache.hadoop.hbase.protobuf.ProtobufUtil.toFilter(ProtobufUtil.java:1368)
at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toScan(ProtobufUtil.java:961)
at 
org.apache.phoenix.mapreduce.PhoenixInputSplit.readFields(PhoenixInputSplit.java:91)
at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:285)
at org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:77)
at 
org.apache.spark.SerializableWritable.$anonfun$readObject$1(SerializableWritable.scala:45)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1471)
at 
org.apache.spark.SerializableWritable.readObject(SerializableWritable.scala:41)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1184)
at 

[jira] [Commented] (PHOENIX-7132) HBase cannot load ClientRpcControllerFactory when adding connector with the --jar option to Spark

2024-06-10 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17853889#comment-17853889
 ] 

Istvan Toth commented on PHOENIX-7132:
--

Report of similar issue when using --jars:


{noformat}
(REDACTED executor 2): org.apache.hadoop.hbase.DoNotRetryIOException: 
java.lang.ClassNotFoundException: 
org.apache.phoenix.filter.ColumnProjectionFilter
at 
org.apache.hadoop.hbase.protobuf.ProtobufUtil.toFilter(ProtobufUtil.java:1368)
at org.apache.hadoop.hbase.protobuf.ProtobufUtil.toScan(ProtobufUtil.java:961)
at 
org.apache.phoenix.mapreduce.PhoenixInputSplit.readFields(PhoenixInputSplit.java:91)
at org.apache.hadoop.io.ObjectWritable.readObject(ObjectWritable.java:285)
at org.apache.hadoop.io.ObjectWritable.readFields(ObjectWritable.java:77)
at 
org.apache.spark.SerializableWritable.$anonfun$readObject$1(SerializableWritable.scala:45)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1471)
at 
org.apache.spark.SerializableWritable.readObject(SerializableWritable.scala:41)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1184)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2321)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2212)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1668)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2430)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2354)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2212)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1668)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2430)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2354)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2212)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1668)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:502)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:460)
at 
org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:87)
at 
org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:129)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:510)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: java.lang.ClassNotFoundException: 
org.apache.phoenix.filter.ColumnProjectionFilter
at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
at 
org.apache.hadoop.hbase.util.DynamicClassLoader.loadClass(DynamicClassLoader.java:135)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at 
org.apache.hadoop.hbase.protobuf.ProtobufUtil.toFilter(ProtobufUtil.java:1359)
... 32 more
{noformat}


> HBase cannot load ClientRpcControllerFactory when adding connector with the 
> --jar option to Spark
> -
>
> Key: PHOENIX-7132
> URL: https://issues.apache.org/jira/browse/PHOENIX-7132
> Project: Phoenix
>  Issue Type: Bug
>  Components: connectors, core
>Affects Versions: connectors-6.0.0, 5.2.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> I have noticed this today when working with the shaded spark connector jar:
> {noformat}
> 23/12/01 06:01:34 WARN ipc.RpcControllerFactory: Cannot load configured 
> "hbase.rpc.controllerfactory.class" 
> (org.apache.hadoop.hbase.ipc.controller.ClientRpcControllerFactory) from 
> hbase-site.xml, falling back to use default RpcControllerFactory
> {noformat}
> -We should be able to avoid this by not relocating these classes at all.-
> -This is only a problem for shaded artifacts that do not include HBase, like 
> the shaded connectors and the planned phoenix-client-byo-hbase variant.-
> -In the full-fat shaded clients phoenix and HBase has the same shading, and 
> HBase is able to find the shaded class.-





[jira] [Commented] (PHOENIX-7320) Upgrade HBase 2.4 to 2.4.18

2024-05-31 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17851012#comment-17851012
 ] 

Istvan Toth commented on PHOENIX-7320:
--

I see some failures locally.
I need to check whether those correlate with 2.4.17/2.4.18.

mvn clean verify -Dhbase.profile=2.4 -am -pl phoenix-core 
-Dit.test=GlobalImmutableTxIndexIT*,GlobalMutableTxIndexIT*,ConcurrentMutationsExtendedIT*,GlobalImmutableTxIndexWithRegionMovesIT*,LoggingHAConnectionLimiterIT*
 

> Upgrade HBase 2.4 to 2.4.18
> ---
>
> Key: PHOENIX-7320
> URL: https://issues.apache.org/jira/browse/PHOENIX-7320
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> 2.4.18 has just been released.
> Update Phoenix to build with it.





[jira] [Commented] (PHOENIX-7235) Don't directly depend on commons-compress

2024-05-23 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848925#comment-17848925
 ] 

Istvan Toth commented on PHOENIX-7235:
--

I mean that we could replace the existing tar file with a JAR file with the 
same content.

> Don't directly  depend on commons-compress
> --
>
> Key: PHOENIX-7235
> URL: https://issues.apache.org/jira/browse/PHOENIX-7235
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.2.0
>Reporter: Istvan Toth
>Priority: Minor
>  Labels: beginner
>
> commons-compress is used only in a single test to uncompress a tar file.
> We could use a JAR file there instead, which does not have external 
> dependencies.
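A minimal sketch of that idea, using only JDK classes so commons-compress is no longer needed (class and method names here are illustrative, not the actual Phoenix test code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.jar.JarEntry;
import java.util.jar.JarInputStream;

// Unpack a JAR archive using only the JDK's built-in jar support.
public class JarUnpack {
    public static void unpack(Path jar, Path destDir) throws IOException {
        try (JarInputStream in = new JarInputStream(Files.newInputStream(jar))) {
            for (JarEntry e; (e = in.getNextJarEntry()) != null; ) {
                Path out = destDir.resolve(e.getName()).normalize();
                if (!out.startsWith(destDir)) {
                    continue; // skip entries that would escape destDir
                }
                if (e.isDirectory()) {
                    Files.createDirectories(out);
                } else {
                    Files.createDirectories(out.getParent());
                    Files.copy(in, out); // copies just the current entry's bytes
                }
            }
        }
    }
}
```

Since a JAR is a ZIP with a manifest, packing the existing tar's contents into a jar keeps the test fixture intact while dropping the external dependency.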





[jira] [Commented] (PHOENIX-7192) IDE shows errors on JSON comment

2024-05-23 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17848923#comment-17848923
 ] 

Istvan Toth commented on PHOENIX-7192:
--

Yes, that's the one.

> IDE shows errors on JSON comment
> 
>
> Key: PHOENIX-7192
> URL: https://issues.apache.org/jira/browse/PHOENIX-7192
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Reporter: Istvan Toth
>Assignee: Ranganath Govardhanagiri
>Priority: Minor
>
> We have a few JSON files for tests, which include the ASF header.
> JSON does not allow comments, and my Eclipse sometimes flags this as an error.
> Remove the ASF header.





[jira] [Commented] (PHOENIX-7248) Add logging excludes to hadoop-mapreduce-client-app and hadoop-mapreduce-client-jobclient

2024-05-15 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17846578#comment-17846578
 ] 

Istvan Toth commented on PHOENIX-7248:
--

Backported to 5.2.

> Add logging excludes to hadoop-mapreduce-client-app and 
> hadoop-mapreduce-client-jobclient
> -
>
> Key: PHOENIX-7248
> URL: https://issues.apache.org/jira/browse/PHOENIX-7248
> Project: Phoenix
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 5.2.0, 5.3.0, 5.1.4
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.2.1, 5.3.0, 5.1.4
>
>
> Unwanted logging libraries are coming from these.





[jira] [Comment Edited] (PHOENIX-7315) Add --add-host=host.docker.internal:host-gateway to the phoenixdb test docs

2024-05-13 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17845799#comment-17845799
 ] 

Istvan Toth edited comment on PHOENIX-7315 at 5/13/24 8:21 AM:
---

This is already done, I was looking at an old branch.


was (Author: stoty):
This is already done, I was looking at an branch.

> Add --add-host=host.docker.internal:host-gateway to the phoenixdb test docs
> ---
>
> Key: PHOENIX-7315
> URL: https://issues.apache.org/jira/browse/PHOENIX-7315
> Project: Phoenix
>  Issue Type: Improvement
>  Components: phoenix, queryserver
>Reporter: Istvan Toth
>Priority: Major
>
> I was running the phoenixdb tests in docker, and was thwarted again because 
> the docker commands do not work on Linux as written.
> We need to mention that _--add-host=host.docker.internal:host-gateway_ is 
> needed on Linux.





[jira] [Commented] (PHOENIX-6917) Column alias not working properly in Python

2024-05-02 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17842859#comment-17842859
 ] 

Istvan Toth commented on PHOENIX-6917:
--

There is no released version today that includes this fix.


> Column alias not working properly in Python
> ---
>
> Key: PHOENIX-6917
> URL: https://issues.apache.org/jira/browse/PHOENIX-6917
> Project: Phoenix
>  Issue Type: Bug
>  Components: python, queryserver
>Affects Versions: python-phoenixdb-1.2.1
>Reporter: Satya Kommula
>Assignee: Satya Kommula
>Priority: Major
> Fix For: python-phoenixdb-1.2.2
>
>
> Get the columnLabel (the “as name”) rather than the columnName with a cursor.
> {code:java}
> calcite :sql> select c1 as hello, c2 as world from int_tbl;
> +-------------+-------------+
> | c1          | c2          |
> |-------------+-------------|
> | 5           | 0           |
> | -123        | 123         |
> | 1           | 123456      |
> | -123456     | -123456     |
> | 10          | 50          |
> | -2147483648 | 1           |
> | 2147483647  | -2147483648 |
> |             | 10          |
> |             |             |
> | 1           | 1           |
> +-------------+-------------+{code}
>  





[jira] [Commented] (PHOENIX-7311) 5.2 multibranch build is not getting triggered automatically

2024-05-01 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17842840#comment-17842840
 ] 

Istvan Toth commented on PHOENIX-7311:
--

We need to edit the Jenkins job, which can be done from the Jenkins UI.

> 5.2 multibranch build is not getting triggered automatically
> 
>
> Key: PHOENIX-7311
> URL: https://issues.apache.org/jira/browse/PHOENIX-7311
> Project: Phoenix
>  Issue Type: Task
>Reporter: Viraj Jasani
>Priority: Major
>
> Similar to master and 5.1 branches, any commits landing on 5.2 are not 
> triggering multibranch builds on 
> [https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/5.2/]





[jira] [Commented] (PHOENIX-7303) fix CVE-2024-29025 in netty package

2024-04-10 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17835828#comment-17835828
 ] 

Istvan Toth commented on PHOENIX-7303:
--

We track Omid issues in its own JIRA project, [~nikitapande].
Please open an Omid ticket and update the commit message for the Omid patch. 
(You may want to link it to this one)

> fix CVE-2024-29025 in netty package
> ---
>
> Key: PHOENIX-7303
> URL: https://issues.apache.org/jira/browse/PHOENIX-7303
> Project: Phoenix
>  Issue Type: Improvement
>  Components: omid, phoenix
>Reporter: Nikita Pande
>Assignee: Nikita Pande
>Priority: Major
>
> [CVE-2024-29025|https://github.com/advisories/GHSA-5jpm-x58v-624v] is the CVE 
> for all netty-codec-http versions < 4.1.108.Final.





[jira] [Commented] (PHOENIX-7290) Cannot load or instantiate class org.apache.phoenix.query.DefaultGuidePostsCacheFactory from SquirrelSQL

2024-04-04 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17834135#comment-17834135
 ] 

Istvan Toth commented on PHOENIX-7290:
--

Thanks for confirming, [~zidanej].

> Cannot load or instantiate class 
> org.apache.phoenix.query.DefaultGuidePostsCacheFactory from SquirrelSQL
> 
>
> Key: PHOENIX-7290
> URL: https://issues.apache.org/jira/browse/PHOENIX-7290
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Reporter: Jeff
>Assignee: Istvan Toth
>Priority: Major
> Attachments: SQLSquirrel_error_stack.txt, spring-jdbc_error_stack.txt
>
>
> Recently we're trying to update to phoenix 5.1.3 and we're running into an 
> issue.
> {code:java}
> org.apache.phoenix.exception.PhoenixNonRetryableRuntimeException: Could not 
> load/instantiate class org.apache.phoenix.query.DefaultGuidePostsCacheFactory 
> {code}
> I believe there are other areas where this has been brought up such as in SO:
> [https://stackoverflow.com/questions/73194696/phoenixnonretryableruntimeexception-could-not-load-instantiate-class-org-apache]
> The issue seems to have been introduced in 5.1.0, anything in 4.8 or below 
> seems to work fine.  
> Steps to reproduce:
>  # Use Squirrel SQL 4.7.1
>  # Create the drivers and load phoenix 5.1.3 into the class path
>  # Define alias and test connection





[jira] [Comment Edited] (PHOENIX-7290) Cannot load or instantiate class org.apache.phoenix.query.DefaultGuidePostsCacheFactory from SquirrelSQL

2024-04-04 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17833981#comment-17833981
 ] 

Istvan Toth edited comment on PHOENIX-7290 at 4/4/24 3:31 PM:
--

I was able to repro the issue with SquirrelSQL and confirm that the attached 
patch fixes the problem.


was (Author: stoty):
I was able to repro the issues with SquirrelSQL and confirm that the attached 
patch fixes the problem.

> Cannot load or instantiate class 
> org.apache.phoenix.query.DefaultGuidePostsCacheFactory from SquirrelSQL
> 
>
> Key: PHOENIX-7290
> URL: https://issues.apache.org/jira/browse/PHOENIX-7290
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jeff
>Assignee: Istvan Toth
>Priority: Major
> Attachments: SQLSquirrel_error_stack.txt, spring-jdbc_error_stack.txt
>
>
> Recently we're trying to update to phoenix 5.1.3 and we're running into an 
> issue.
> {code:java}
> org.apache.phoenix.exception.PhoenixNonRetryableRuntimeException: Could not 
> load/instantiate class org.apache.phoenix.query.DefaultGuidePostsCacheFactory 
> {code}
> I believe there are other areas where this has been brought up such as in SO:
> [https://stackoverflow.com/questions/73194696/phoenixnonretryableruntimeexception-could-not-load-instantiate-class-org-apache]
> The issue seems to have been introduced in 5.1.0, anything in 4.8 or below 
> seems to work fine.  
> Steps to reproduce:
>  # Use Squirrel SQL 4.7.1
>  # Create the drivers and load phoenix 5.1.3 into the class path
>  # Define alias and test connection





[jira] [Commented] (PHOENIX-7132) HBase cannot load ClientRpcControllerFactory when adding connector with the --jar option to Spark

2024-04-04 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17833984#comment-17833984
 ] 

Istvan Toth commented on PHOENIX-7132:
--

While not the same issue as PHOENIX-7290, it could probably be fixed by using a 
similar approach.

> HBase cannot load ClientRpcControllerFactory when adding connector with the 
> --jar option to Spark
> -
>
> Key: PHOENIX-7132
> URL: https://issues.apache.org/jira/browse/PHOENIX-7132
> Project: Phoenix
>  Issue Type: Bug
>  Components: connectors, core
>Affects Versions: connectors-6.0.0, 5.2.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> I have noticed this today when working with the shaded spark connector jar:
> {noformat}
> 23/12/01 06:01:34 WARN ipc.RpcControllerFactory: Cannot load configured 
> "hbase.rpc.controllerfactory.class" 
> (org.apache.hadoop.hbase.ipc.controller.ClientRpcControllerFactory) from 
> hbase-site.xml, falling back to use default RpcControllerFactory
> {noformat}
> -We should be able to avoid this by not relocating these classes at all.-
> -This is only a problem for shaded artifacts that do not include HBase, like 
> the shaded connectors and the planned phoenix-client-byo-hbase variant.-
> -In the full-fat shaded clients phoenix and HBase has the same shading, and 
> HBase is able to find the shaded class.-





[jira] [Commented] (PHOENIX-7290) Cannot load or instantiate class org.apache.phoenix.query.DefaultGuidePostsCacheFactory from SquirrelSQL

2024-04-04 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17833981#comment-17833981
 ] 

Istvan Toth commented on PHOENIX-7290:
--

I was able to repro the issue with SquirrelSQL and confirm that the attached 
patch fixes the problem.

> Cannot load or instantiate class 
> org.apache.phoenix.query.DefaultGuidePostsCacheFactory from SquirrelSQL
> 
>
> Key: PHOENIX-7290
> URL: https://issues.apache.org/jira/browse/PHOENIX-7290
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jeff
>Assignee: Istvan Toth
>Priority: Major
> Attachments: SQLSquirrel_error_stack.txt, spring-jdbc_error_stack.txt
>
>
> Recently we're trying to update to phoenix 5.1.3 and we're running into an 
> issue.
> {code:java}
> org.apache.phoenix.exception.PhoenixNonRetryableRuntimeException: Could not 
> load/instantiate class org.apache.phoenix.query.DefaultGuidePostsCacheFactory 
> {code}
> I believe there are other areas where this has been brought up such as in SO:
> [https://stackoverflow.com/questions/73194696/phoenixnonretryableruntimeexception-could-not-load-instantiate-class-org-apache]
> The issue seems to have been introduced in 5.1.0, anything in 4.8 or below 
> seems to work fine.  
> Steps to reproduce:
>  # Use Squirrel SQL 4.7.1
>  # Create the drivers and load phoenix 5.1.3 into the class path
>  # Define alias and test connection





[jira] [Comment Edited] (PHOENIX-7290) Cannot load or instantiate class org.apache.phoenix.query.DefaultGuidePostsCacheFactory

2024-03-28 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831651#comment-17831651
 ] 

Istvan Toth edited comment on PHOENIX-7290 at 3/28/24 7:24 AM:
---

Thank you.

Looking at the code, the problem seems to be that we DO use 
PhoenixContextExecutor in CQSI.init(), but we refer to that class in the 
constructor, which is not running inside init().

Looks like this would be fixed by moving the cache initialization into the 
init() function.

i.e.
This initialization:
 
https://github.com/apache/phoenix/blob/afdba89b005cb167fafe8530de750ad297f86d5a/phoenix-core-client/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L474

should be moved somewhere inside this block:
https://github.com/apache/phoenix/blob/afdba89b005cb167fafe8530de750ad297f86d5a/phoenix-core-client/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L3560

I don't have the time to work on this now.
Could you make this change and test to see if it fixes the issue [~zidanej] ?
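The pattern suggested above — deferring classloader-sensitive work from the constructor into init(), where it can run the way PhoenixContextExecutor.call() does — can be sketched with stand-in names (this is not the actual ConnectionQueryServicesImpl code; the class and field names are illustrative):

```java
import java.util.concurrent.Callable;

// Stand-in for a service whose constructor must not load classes by name,
// because the caller's context classloader (e.g. SquirrelSQL's plugin
// loader) may not see Phoenix classes. The class-loading work moves to init().
class LazyInitService {
    Object cache; // built in init(), not in the constructor
    private boolean initialized;

    LazyInitService() {
        // Intentionally no Class.forName() here.
    }

    synchronized void init() throws Exception {
        if (initialized) {
            return;
        }
        // Mirror PhoenixContextExecutor: run the lookup with this class's own
        // classloader installed as the thread context classloader.
        // java.util.HashMap stands in for the factory class looked up by name.
        cache = callWithOwnClassLoader(() ->
                Class.forName("java.util.HashMap")
                        .getDeclaredConstructor().newInstance());
        initialized = true;
    }

    static <T> T callWithOwnClassLoader(Callable<T> task) throws Exception {
        Thread t = Thread.currentThread();
        ClassLoader saved = t.getContextClassLoader();
        t.setContextClassLoader(LazyInitService.class.getClassLoader());
        try {
            return task.call();
        } finally {
            t.setContextClassLoader(saved);
        }
    }
}
```

The key design point is that the constructor stays free of reflective lookups, so a driver instantiated from a foreign classloader only resolves factory classes once init() runs under the driver's own loader.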



was (Author: stoty):
Thank you.

Looking at the code, the problem seems to be that we DO use 
PhoenixContextExecutor in CQSI.init(), but we refer to that class in the 
constructor, which is not running inside init().

Looks like this would be fixed by moving the cache initialization into the 
init() function.

i.e.
This initialization:
 
https://github.com/apache/phoenix/blob/afdba89b005cb167fafe8530de750ad297f86d5a/phoenix-core-client/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L474

should be moved somewhere inside this block:
https://github.com/apache/phoenix/blob/afdba89b005cb167fafe8530de750ad297f86d5a/phoenix-core-client/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L3560

I don't have the time to work on this now.
Could you make this change and test to see if it fixes the isse [~zidanej] ?


> Cannot load or instantiate class 
> org.apache.phoenix.query.DefaultGuidePostsCacheFactory
> ---
>
> Key: PHOENIX-7290
> URL: https://issues.apache.org/jira/browse/PHOENIX-7290
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jeff
>Priority: Major
> Attachments: SQLSquirrel_error_stack.txt, spring-jdbc_error_stack.txt
>
>
> Recently we're trying to update to phoenix 5.1.3 and we're running into an 
> issue.
> {code:java}
> org.apache.phoenix.exception.PhoenixNonRetryableRuntimeException: Could not 
> load/instantiate class org.apache.phoenix.query.DefaultGuidePostsCacheFactory 
> {code}
> I believe there are other areas where this has been brought up such as in SO:
> [https://stackoverflow.com/questions/73194696/phoenixnonretryableruntimeexception-could-not-load-instantiate-class-org-apache]
> The issue seems to have been introduced in 5.1.0, anything in 4.8 or below 
> seems to work fine.  
> Steps to reproduce:
>  # Use Squirrel SQL 4.7.1
>  # Create the drivers and load phoenix 5.1.3 into the class path
>  # Define alias and test connection



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-7290) Cannot load or instantiate class org.apache.phoenix.query.DefaultGuidePostsCacheFactory

2024-03-28 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831651#comment-17831651
 ] 

Istvan Toth commented on PHOENIX-7290:
--

Thank you.

Looking at the code, the problem seems to be that we DO use 
PhoenixContextExecutor in CQSI.init(), but we refer to that class in the 
constructor, which is not running inside init().

Looks like this would be fixed by moving the cache initialization into the 
init() function.

i.e.
This initialization:
 
https://github.com/apache/phoenix/blob/afdba89b005cb167fafe8530de750ad297f86d5a/phoenix-core-client/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L474

should be moved somewhere inside this block:
https://github.com/apache/phoenix/blob/afdba89b005cb167fafe8530de750ad297f86d5a/phoenix-core-client/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L3560

I don't have the time to work on this now.
Could you make this change and test to see if it fixes the issue [~zidanej] ?
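For illustration, the intended effect of that move can be pictured with a minimal stand-alone sketch. It assumes only the general pattern described above: PhoenixContextExecutor.call() effectively installs the driver's classloader as the thread context classloader before running a task, so a class lookup such as the DefaultGuidePostsCacheFactory one resolves against it. ContextLoaderSketch, callWithDriverLoader, and instantiate are hypothetical names for illustration, not Phoenix API:

```java
import java.util.concurrent.Callable;

public class ContextLoaderSketch {

    // Stand-in for what PhoenixContextExecutor.call() does: run the task
    // with the driver's own classloader installed as the thread context
    // classloader, restoring the previous one afterwards.
    static <T> T callWithDriverLoader(Callable<T> task) throws Exception {
        Thread current = Thread.currentThread();
        ClassLoader saved = current.getContextClassLoader();
        current.setContextClassLoader(ContextLoaderSketch.class.getClassLoader());
        try {
            return task.call();
        } finally {
            current.setContextClassLoader(saved);
        }
    }

    // Hypothetical factory lookup, analogous to loading
    // DefaultGuidePostsCacheFactory by name: because it runs inside the
    // wrapper, Class.forName resolves against the driver's classloader.
    static Object instantiate(String className) throws Exception {
        return callWithDriverLoader(() ->
                Class.forName(className, true,
                        Thread.currentThread().getContextClassLoader())
                        .getDeclaredConstructor().newInstance());
    }
}
```

A lookup done in the constructor runs outside any such wrapper, which is consistent with the reported load failure.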


> Cannot load or instantiate class 
> org.apache.phoenix.query.DefaultGuidePostsCacheFactory
> ---
>
> Key: PHOENIX-7290
> URL: https://issues.apache.org/jira/browse/PHOENIX-7290
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jeff
>Priority: Major
> Attachments: SQLSquirrel_error_stack.txt, spring-jdbc_error_stack.txt
>
>
> Recently we're trying to update to phoenix 5.1.3 and we're running into an 
> issue.
> {code:java}
> org.apache.phoenix.exception.PhoenixNonRetryableRuntimeException: Could not 
> load/instantiate class org.apache.phoenix.query.DefaultGuidePostsCacheFactory 
> {code}
> I believe there are other areas where this has been brought up such as in SO:
> [https://stackoverflow.com/questions/73194696/phoenixnonretryableruntimeexception-could-not-load-instantiate-class-org-apache]
> The issue seems to have been introduced in 5.1.0, anything in 4.8 or below 
> seems to work fine.  
> Steps to reproduce:
>  # Use Squirrel SQL 4.7.1
>  # Create the drivers and load phoenix 5.1.3 into the class path
>  # Define alias and test connection





[jira] [Comment Edited] (PHOENIX-7292) Update .asf.yaml based on HBase

2024-03-27 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831508#comment-17831508
 ] 

Istvan Toth edited comment on PHOENIX-7292 at 3/27/24 7:20 PM:
---

Committed to master.
Did not set a version, as it is just GitHub metadata.

Thanks for the reply on the DISCUSS thread and the review [~vjasani].


was (Author: stoty):
Committed to master.
Did not set a version, as it is just GitHub metadata.

> Update .asf.yaml based on HBase
> ---
>
> Key: PHOENIX-7292
> URL: https://issues.apache.org/jira/browse/PHOENIX-7292
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> As discussed on the mailing list, the current settings result in far too 
> noisy JIRA tickets.





[jira] [Commented] (PHOENIX-7292) Update .asf.yaml based on HBase

2024-03-27 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831504#comment-17831504
 ] 

Istvan Toth commented on PHOENIX-7292:
--

Committed to master.

GitHub seems to have applied the changes:


{noformat}
stoty@stoty-Precision-5570:~/workspaces/apache-phoenix/phoenix (master) $ git 
push origin master
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Delta compression using up to 20 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 493 bytes | 493.00 KiB/s, done.
Total 3 (delta 2), reused 0 (delta 0), pack-reused 0
remote: remote: 
remote: remote: GitHub found 11 vulnerabilities on apache/phoenix's default 
branch (4 high, 5 moderate, 2 low). To find out more, visit:
remote: remote:  https://github.com/apache/phoenix/security/dependabot  
  
remote: remote: 
remote: To github:apache/phoenix.git
remote:ff37830378..afdba89b00  afdba89b005cb167fafe8530de750ad297f86d5a -> 
master
remote: Syncing refs/heads/master...
remote: Sending notification emails to: ['"comm...@phoenix.apache.org" 
']
remote: GitHub meta-data changed, updating...
remote: GitHub repository meta-data updated!
remote: Updating notification schemes for repository: 
remote: - updating scheme jira_options: 'link label comment' -> 'link label'
remote: 
To https://gitbox.apache.org/repos/asf/phoenix.git
   ff37830378..afdba89b00  master -> master

{noformat}


> Update .asf.yaml based on HBase
> ---
>
> Key: PHOENIX-7292
> URL: https://issues.apache.org/jira/browse/PHOENIX-7292
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> As discussed on the mailing list, the current settings result in far too 
> noisy JIRA tickets.





[jira] [Commented] (PHOENIX-7292) Update .asf.yaml based on HBase's

2024-03-27 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831376#comment-17831376
 ] 

Istvan Toth commented on PHOENIX-7292:
--

I have also copied the structure from HBase, and added a few labels.

> Update .asf.yaml based on HBase's
> -
>
> Key: PHOENIX-7292
> URL: https://issues.apache.org/jira/browse/PHOENIX-7292
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> As discussed on the mailing list, the current settings result in far too 
> noisy JIRA tickets.





[jira] [Commented] (PHOENIX-7290) Cannot load or instantiate class org.apache.phoenix.query.DefaultGuidePostsCacheFactory

2024-03-27 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17831199#comment-17831199
 ] 

Istvan Toth commented on PHOENIX-7290:
--

I have seen this error message in IDEs, where a full project rebuild usually 
fixed it.

There is some classloader skullduggery in the Phoenix JDBC entry points which 
is probably related.
Maybe we need to do the same when loading that class.

Do you have a full stack trace, [~zidanej] ?

> Cannot load or instantiate class 
> org.apache.phoenix.query.DefaultGuidePostsCacheFactory
> ---
>
> Key: PHOENIX-7290
> URL: https://issues.apache.org/jira/browse/PHOENIX-7290
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jeff
>Priority: Major
>
> Recently we've been trying to update to phoenix 5.1.3 and we're running into an 
> issue.
> {code:java}
> org.apache.phoenix.exception.PhoenixNonRetryableRuntimeException: Could not 
> load/instantiate class org.apache.phoenix.query.DefaultGuidePostsCacheFactory 
> {code}
> I believe there are other areas where this has been brought up such as in SO:
> [https://stackoverflow.com/questions/73194696/phoenixnonretryableruntimeexception-could-not-load-instantiate-class-org-apache]
> The issue seems to have been introduced in 5.1.0, anything in 4.8 or below 
> seems to work fine.  
> Steps to reproduce:
>  # Use Squirrel SQL 4.7.1
>  # Create the drivers and load phoenix 5.1.3 into the class path
>  # Define alias and test connection





[jira] [Commented] (PHOENIX-7248) Add logging excludes to hadoop-mapreduce-client-app and hadoop-mapreduce-client-jobclient

2024-03-24 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17830341#comment-17830341
 ] 

Istvan Toth commented on PHOENIX-7248:
--

And this has the potential to mess up logging.

> Add logging excludes to hadoop-mapreduce-client-app and 
> hadoop-mapreduce-client-jobclient
> -
>
> Key: PHOENIX-7248
> URL: https://issues.apache.org/jira/browse/PHOENIX-7248
> Project: Phoenix
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 5.2.0, 5.3.0, 5.1.4
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.3.0, 5.1.4
>
>
> Unwanted logging libraries are coming from these.





[jira] [Commented] (PHOENIX-7248) Add logging excludes to hadoop-mapreduce-client-app and hadoop-mapreduce-client-jobclient

2024-03-24 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17830339#comment-17830339
 ] 

Istvan Toth commented on PHOENIX-7248:
--

Yes, I think we should.
It's quite straightforward, and I don't see how this could break anything.

> Add logging excludes to hadoop-mapreduce-client-app and 
> hadoop-mapreduce-client-jobclient
> -
>
> Key: PHOENIX-7248
> URL: https://issues.apache.org/jira/browse/PHOENIX-7248
> Project: Phoenix
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 5.2.0, 5.3.0, 5.1.4
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.3.0, 5.1.4
>
>
> Unwanted logging libraries are coming from these.





[jira] [Commented] (PHOENIX-7250) Fix HBase log level in tests

2024-03-21 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829414#comment-17829414
 ] 

Istvan Toth commented on PHOENIX-7250:
--

Committed to master.

Thanks for the review [~RichardAntal] .

You may want to consider backporting this to 5.2, [~vjasani].

> Fix HBase log level in tests
> 
>
> Key: PHOENIX-7250
> URL: https://issues.apache.org/jira/browse/PHOENIX-7250
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.2.0, 5.3.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>  Labels: test
>
> When switching to log4j2, I made a typo in the logging config, which results
> in the HBase log level being set to WARN instead of the DEBUG it was previously.
> This makes debugging some test failures hard to impossible.
> Fix this.





[jira] [Commented] (PHOENIX-7250) Fix HBase log level in tests

2024-03-21 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17829413#comment-17829413
 ] 

Istvan Toth commented on PHOENIX-7250:
--

Does not apply to 5.1, which uses log4j1 / reload4j.

> Fix HBase log level in tests
> 
>
> Key: PHOENIX-7250
> URL: https://issues.apache.org/jira/browse/PHOENIX-7250
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.2.0, 5.3.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>  Labels: test
>
> When switching to log4j2, I made a typo in the logging config, which results
> in the HBase log level being set to WARN instead of the DEBUG it was previously.
> This makes debugging some test failures hard to impossible.
> Fix this.





[jira] [Comment Edited] (PHOENIX-7285) Upgade Zookeeper to 3.8.4

2024-03-20 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828638#comment-17828638
 ] 

Istvan Toth edited comment on PHOENIX-7285 at 3/20/24 3:34 PM:
---

For 5.1, we should only update the profiles where we are already on ZK 3.8.


was (Author: stoty):
For 5.1, we should only update the profiles where we are already on ZK 3.8.

> Upgade Zookeeper to 3.8.4
> -
>
> Key: PHOENIX-7285
> URL: https://issues.apache.org/jira/browse/PHOENIX-7285
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.2.0, 5.3.0, 5.1.4
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>






[jira] [Commented] (PHOENIX-7285) Upgade Zookeeper to 3.8.4

2024-03-20 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17828638#comment-17828638
 ] 

Istvan Toth commented on PHOENIX-7285:
--

For 5.1, we should only update the profiles where we are already on ZK 3.8.

> Upgade Zookeeper to 3.8.4
> -
>
> Key: PHOENIX-7285
> URL: https://issues.apache.org/jira/browse/PHOENIX-7285
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.2.0, 5.3.0, 5.1.4
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>






[jira] [Comment Edited] (PHOENIX-7269) Upgrade fails when HBase table for index is missing

2024-03-13 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825974#comment-17825974
 ] 

Istvan Toth edited comment on PHOENIX-7269 at 3/13/24 9:14 AM:
---

Digging a bit deeper, the REDACTED HBase table exists, but there is no 
metadata in system.catalog for it.
It is only referred to from the base table syscat rows in the last (COLUMN_FAMILY) 
field of the PK, which seems to be the old way of linking data and index tables.

So maybe we should add a step where we verify that the table referred to in the 
old-style link cells can be resolved, and remove those cells if it cannot.

For reference, here are the dangling cells in system:catalog which refer to 
REDACTED_INDEX_TABLE, which does not exist in system.catalog (but happens to 
exist in HBase).

{noformat}
\x00REDACTED_SCHEMA\x00REDACTED_BASE_TABLE\x00\x00REDACTED_INDEX_TABLE 
column=0:LINK_TYPE, timestamp=1663995086420, value=\x01
 \x00REDACTED_SCHEMA\x00REDACTED_BASE_TABLE\x00\x00REDACTED_INDEX_TABLE 
column=0:TABLE_SEQ_NUM, timestamp=1663995086420, 
value=\x80\x00\x00\x00\x00\x00\x00\x00
 \x00REDACTED_SCHEMA\x00REDACTED_BASE_TABLE\x00\x00REDACTED_INDEX_TABLE 
column=0:TABLE_TYPE, timestamp=1663995086420, value=i
 \x00REDACTED_SCHEMA\x00REDACTED_BASE_TABLE\x00\x00REDACTED_INDEX_TABLE 
column=0:_0, timestamp=1663995086420, value=x
{noformat}



was (Author: stoty):
Digging a bit deeper, the REDACTED HBase table exists, but there is no 
metadata in system.catalog for it.
It is only referred to from the base table syscat rows in the last (COLUMN_FAMILY) 
field of the PK, which seems to be the old way of linking data and index tables.

So maybe we should add a step where we verify that the table referred to in the 
old-style link cells can be resolved, and remove those cells if it cannot.

For reference, here are the dangling cells in system:catalog which refer to 
REDACTED_INDEX_TABLE, which does not exist in system.catalog (but happens to 
exist in HBase).

{noformat}
\x00REDACTED_SCHEMA\x00REDACTED_BASE_TABLE\x00\x00REDACTED_INDEX_TABLE 
column=0:LINK_TYPE, timestamp=1663995086420, value=\x01
 \x00REDACTED_SCHEMA\x00REDACTED_BASE_TABLE\x00\x00REDACTED_INDEX_TABLE 
column=0:TABLE_SEQ_NUM, timestamp=1663995086420, 
value=\x80\x00\x00\x00\x00\x00\x00\x00
 \x00REDACTED_SCHEMA\x00REDACTED_BASE_TABLE\x00\x00REDACTED_INDEX_TABLE 
column=0:TABLE_TYPE, timestamp=1663995086420, value=i
 \x00REDACTED_SCHEMA\x00REDACTED_BASE_TABLE\x00\x00REDACTED_INDEX_TABLE 
column=0:_0, timestamp=1663995086420, value=x
{noformat}


> Upgrade fails when HBase table for index is missing
> ---
>
> Key: PHOENIX-7269
> URL: https://issues.apache.org/jira/browse/PHOENIX-7269
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Reporter: Istvan Toth
>Priority: Major
>
> When attempting to upgrade the metadata during upgrade, the process is 
> aborted if Phoenix encounters indexes defined in SYSTEM.CATALOG, but missing 
> the corresponding HBase backing table.
> Upgrade should log a warning, but continue in this case, as those indexes are 
> broken anyway.
> The problem is in 
> org.apache.phoenix.util.UpgradeUtil.addViewIndexToParentLinks()
> {noformat}
> Caused by: org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 
> (42M03): Table undefined. tableName=REDACTED
> ... 14 more
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 
> org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:991)
> at 
> org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:953)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1785)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1764)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:2013)
> at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:657)
> at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:545)
> at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:541)
> at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:536)
> at 
> org.apache.phoenix.util.PhoenixRuntime.getTable(PhoenixRuntime.java:457)
> at 
> org.apache.phoenix.util.UpgradeUtil.addViewIndexToParentLinks(UpgradeUtil.java:1244)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemCatalogIfRequired(ConnectionQueryServicesImpl.java:3794)
> at 
> 

[jira] [Commented] (PHOENIX-7269) Upgrade fails when HBase table for index is missing

2024-03-13 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825974#comment-17825974
 ] 

Istvan Toth commented on PHOENIX-7269:
--

Digging a bit deeper, the REDACTED HBase table exists, but there is no 
metadata in system.catalog for it.
It is only referred to from the base table syscat rows in the last (COLUMN_FAMILY) 
field of the PK, which seems to be the old way of linking data and index tables.

So maybe we should add a step where we verify that the table referred to in the 
old-style link cells can be resolved, and remove those cells if it cannot.

For reference, here are the dangling cells in system:catalog which refer to 
REDACTED_INDEX_TABLE, which does not exist in system.catalog (but happens to 
exist in HBase).

{noformat}
\x00REDACTED_SCHEMA\x00REDACTED_BASE_TABLE\x00\x00REDACTED_INDEX_TABLE 
column=0:LINK_TYPE, timestamp=1663995086420, value=\x01
 \x00REDACTED_SCHEMA\x00REDACTED_BASE_TABLE\x00\x00REDACTED_INDEX_TABLE 
column=0:TABLE_SEQ_NUM, timestamp=1663995086420, 
value=\x80\x00\x00\x00\x00\x00\x00\x00
 \x00REDACTED_SCHEMA\x00REDACTED_BASE_TABLE\x00\x00REDACTED_INDEX_TABLE 
column=0:TABLE_TYPE, timestamp=1663995086420, value=i
 \x00REDACTED_SCHEMA\x00REDACTED_BASE_TABLE\x00\x00REDACTED_INDEX_TABLE 
column=0:_0, timestamp=1663995086420, value=x
{noformat}
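As a side note, the linked table name in such old-style link rows can be read straight out of the row key. A rough sketch, assuming only what the dump above shows (row-key fields separated by \x00, with the linked index table name in the last, COLUMN_FAMILY, position); LinkRowKey and linkedTableName are illustrative names, not Phoenix API:

```java
public class LinkRowKey {

    // The SYSTEM.CATALOG row keys in the dump are \x00-separated fields
    // (tenant, schema, table, column, column family); for an old-style
    // link row the linked index table name sits in the last field.
    static String linkedTableName(String rowKey) {
        // -1 keeps empty leading/middle fields (e.g. the empty tenant id)
        String[] parts = rowKey.split("\u0000", -1);
        return parts[parts.length - 1];
    }
}
```

A cleanup step could extract the name this way, try to resolve it, and delete the cells when resolution fails.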


> Upgrade fails when HBase table for index is missing
> ---
>
> Key: PHOENIX-7269
> URL: https://issues.apache.org/jira/browse/PHOENIX-7269
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Reporter: Istvan Toth
>Priority: Major
>
> When attempting to upgrade the metadata during upgrade, the process is 
> aborted if Phoenix encounters indexes defined in SYSTEM.CATALOG, but missing 
> the corresponding HBase backing table.
> Upgrade should log a warning, but continue in this case, as those indexes are 
> broken anyway.
> The problem is in 
> org.apache.phoenix.util.UpgradeUtil.addViewIndexToParentLinks()
> {noformat}
> Caused by: org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 
> (42M03): Table undefined. tableName=REDACTED
> ... 14 more
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:192)
> at 
> org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:991)
> at 
> org.apache.hadoop.hbase.client.HTable.coprocessorService(HTable.java:953)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1785)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.metaDataCoprocessorExec(ConnectionQueryServicesImpl.java:1764)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTable(ConnectionQueryServicesImpl.java:2013)
> at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:657)
> at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:545)
> at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:541)
> at 
> org.apache.phoenix.schema.MetaDataClient.updateCache(MetaDataClient.java:536)
> at 
> org.apache.phoenix.util.PhoenixRuntime.getTable(PhoenixRuntime.java:457)
> at 
> org.apache.phoenix.util.UpgradeUtil.addViewIndexToParentLinks(UpgradeUtil.java:1244)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemCatalogIfRequired(ConnectionQueryServicesImpl.java:3794)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.upgradeSystemTables(ConnectionQueryServicesImpl.java:3951)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:3337)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:3238)
> at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:3238)
> at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
> at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:144)
> at 
> org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
> at sqlline.DatabaseConnection.connect(DatabaseConnection.java:135)
> at 
> sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:192)
> at 
> sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:192)
> at sqlline.Commands.connect(Commands.java:1364)
> at sqlline.Commands.connect(Commands.java:1244)
> at 

[jira] [Commented] (PHOENIX-6651) !primarykeys cannot list t

2024-03-11 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17825212#comment-17825212
 ] 

Istvan Toth commented on PHOENIX-6651:
--

That would point to the bug being in Avatica and not Sqlline.


> !primarykeys cannot list t
> --
>
> Key: PHOENIX-6651
> URL: https://issues.apache.org/jira/browse/PHOENIX-6651
> Project: Phoenix
>  Issue Type: Bug
>  Components: queryserver
>Affects Versions: queryserver-6.0.0
>Reporter: YangLei
>Priority: Major
>
> Hi experts.
> When I ran !primarykeys in the cmd line of sqlline-thin.py, it listed no 
> information about the table,
> while it did list primary key information in the cmd line of sqlline.py.
> Is this expected or a bug, or some configuration issue? Could you help to 
> check?
> 0: jdbc:phoenix:thin:url=http://localhost:876> !primarykeys FN2.GROUPCHATINFO
> ++--+-+--+--+--+
> | TABLE_CAT  | TABLE_SCHEM  | TABLE_NAME  | COLUMN_NAME  | KEY_SEQ  | PK_NAME 
>  |
> ++--+-+--+--+--+
> ++--+-+--+--+--+
>  
> +
> 0: jdbc:phoenix:> !primarykeys FN2.GROUPCHATINFO
> ++--++--+--+--+--+++--+--+--+
> | TABLE_CAT  | TABLE_SCHEM  |   TABLE_NAME   | COLUMN_NAME  | KEY_SEQ  | 
> PK_NAME  | ASC_OR_DESC  | DATA_TYPE  | TYPE_NAME  | COLUMN_SIZE  | TYPE_ID  | 
> VIEW |
> ++--++--+--+--+--+++--+--+--+
> |            | FN2          | GROUPCHATINFO  | CHATID       | 1        |      
>     | A            | 12         | VARCHAR    | null         | 12       |      
> |
> ++--++--+--+--+--+++--+--+--+
> 0: jdbc:phoenix:>
>  
> Thanks
>  





[jira] [Commented] (PHOENIX-7255) Non-existent artifacts referred in compatible_client_versions.json

2024-03-08 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824741#comment-17824741
 ] 

Istvan Toth commented on PHOENIX-7255:
--

Committed to master.


> Non-existent artifacts referred in compatible_client_versions.json
> --
>
> Key: PHOENIX-7255
> URL: https://issues.apache.org/jira/browse/PHOENIX-7255
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.2.0, 5.3.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>  Labels: test
> Fix For: 5.3.0
>
>
> The compatible_client_versions.json refers to HBase 2.3 support for Phoenix 
> 5.2, which was removed some time ago, but the file hasn't been updated.
> We need to keep in mind to update this file as new HBase 
> profiles are added or old ones are dropped.





[jira] [Commented] (PHOENIX-7265) Add 5.2 versions to BackwardsCompatibilityIT once released

2024-03-07 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824633#comment-17824633
 ] 

Istvan Toth commented on PHOENIX-7265:
--

We should also add this step to the release steps on the website.

> Add 5.2 versions to BackwardsCompatibilityIT once released
> --
>
> Key: PHOENIX-7265
> URL: https://issues.apache.org/jira/browse/PHOENIX-7265
> Project: Phoenix
>  Issue Type: Task
>  Components: core
>Affects Versions: 5.2.0, 5.3.0
>Reporter: Istvan Toth
>Priority: Critical
>
> This is a reminder to add the new 5.2 versions once they are released.
> We cannot add them until the 5.2.0 artifacts are available publicly.





[jira] [Commented] (PHOENIX-7261) Align mockito version with Hadoop and HBase in QueryServer

2024-03-06 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824253#comment-17824253
 ] 

Istvan Toth commented on PHOENIX-7261:
--

Nit: The quoted text in the description is from Andrew, not me.

> Align mockito version with Hadoop and HBase in QueryServer
> --
>
> Key: PHOENIX-7261
> URL: https://issues.apache.org/jira/browse/PHOENIX-7261
> Project: Phoenix
>  Issue Type: Bug
>  Components: queryserver
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
>
> As mentioned in PHOENIX-6769 by [~stoty] 
> {quote}There is a well known incompatibility between old versions of 
> mockito-all and mockito-core and newer versions. It manifests as 
> IncompatibleClassChangeErrors and other linkage problems. The Hadoop 
> minicluster in versions 3.x embed mockito classes in the minicluster.
> To avoid potential problems it would be best to align Phoenix use of mockito 
> (mockito-core) with downstreamers. HBase uses mockito-core 2.28.2 on 
> branch-2.4 and branch-2.5. (Phoenix is on 1.10.19.) I checked Hadoop 
> branch-3.3 and it's also on 2.28.2.
> I recently opened a PR for OMID-226 to fix the same concern in phoenix-omid.
> {quote}
>  
> The goal is to update mockito to 4.11.0, the same as HBase branch-3, as was done 
> in PHOENIX-6769 for Phoenix.
> Also, context on why I want this:
>  # Currently we are working on building phoenix, pqs and omid with hadoop 
> 3.3.6, and it seems we fail to even start a minicluster with the mockito that is 
> bundled in the code, with the following error:
> {code:java}
> [ERROR] 
> org.apache.phoenix.tool.ParameterizedPhoenixCanaryToolIT.phoenixCanaryToolTest[ParameterizedPhoenixCanaryToolIT_isPositiveTestType=false,isNamespaceEnabled=false,resultSinkOption=org.apache.phoenix.tool.PhoenixCanaryTool$StdOutSink]
>  -- Time elapsed: 4.234 s <<< ERROR!
> java.lang.RuntimeException: java.lang.IncompatibleClassChangeError: class 
> org.apache.hadoop.hdfs.server.namenode.NameNodeAdapter$2 can not implement 
> org.mockito.ArgumentMatcher, because it is not an interface 
> (org.mockito.ArgumentMatcher is in unnamed module of loader 'app')
> at 
> org.apache.phoenix.query.BaseTest.initMiniCluster(BaseTest.java:551)
> at 
> org.apache.phoenix.query.BaseTest.setUpTestCluster(BaseTest.java:450)
> at 
> org.apache.phoenix.query.BaseTest.checkClusterInitialized(BaseTest.java:436)
> at 
> org.apache.phoenix.query.BaseTest.setUpTestDriver(BaseTest.java:518)
> at 
> org.apache.phoenix.tool.ParameterizedPhoenixCanaryToolIT.setup(ParameterizedPhoenixCanaryToolIT.java:115)
> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
> at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.base/java.lang.reflect.Method.invoke(Method.java:568)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
> at 
> org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
> at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
> at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> at org.junit.runners.Suite.runChild(Suite.java:128)
> at org.junit.runners.Suite.runChild(Suite.java:27)
> at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
> at 

[jira] [Commented] (PHOENIX-6769) Align mockito version with Hadoop and HBase

2024-03-06 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824110#comment-17824110
 ] 

Istvan Toth commented on PHOENIX-6769:
--

If it does break them, then we may need to play with the mockito versions in 
the HBase profiles.
IIRC, the source changes were needed for the 1.0->2.0 mockito upgrade; the rest 
is probably source compatible.

> Align mockito version with Hadoop and HBase
> ---
>
> Key: PHOENIX-6769
> URL: https://issues.apache.org/jira/browse/PHOENIX-6769
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.2.0
>Reporter: Andrew Kyle Purtell
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.2.0
>
>
> There is a well-known incompatibility between old versions of mockito-all and 
> mockito-core and newer versions. It manifests as 
> IncompatibleClassChangeErrors and other linkage problems. The Hadoop 
> minicluster in versions 3.x embeds mockito classes. 
> To avoid potential problems it would be best to align Phoenix use of mockito 
> (mockito-core) with downstreamers. HBase uses mockito-core 2.28.2 on 
> branch-2.4 and branch-2.5. (Phoenix is on 1.10.19.) I checked Hadoop 
> branch-3.3 and it's also on 2.28.2.
> I recently opened a PR for OMID-226 to fix the same concern in phoenix-omid. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-6769) Align mockito version with Hadoop and HBase

2024-03-06 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17824108#comment-17824108
 ] 

Istvan Toth commented on PHOENIX-6769:
--

We still support HBase 2.1 with Hadoop 3.0.x in 5.1.

I was not sure if it would work with those old versions.
If it doesn't break them, then go ahead, [~nihaljain.cs].

> Align mockito version with Hadoop and HBase
> ---
>
> Key: PHOENIX-6769
> URL: https://issues.apache.org/jira/browse/PHOENIX-6769
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.2.0
>Reporter: Andrew Kyle Purtell
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.2.0
>
>
> There is a well-known incompatibility between old versions of mockito-all and 
> mockito-core and newer versions. It manifests as 
> IncompatibleClassChangeErrors and other linkage problems. The Hadoop 
> minicluster in versions 3.x embeds mockito classes. 
> To avoid potential problems it would be best to align Phoenix use of mockito 
> (mockito-core) with downstreamers. HBase uses mockito-core 2.28.2 on 
> branch-2.4 and branch-2.5. (Phoenix is on 1.10.19.) I checked Hadoop 
> branch-3.3 and it's also on 2.28.2.
> I recently opened a PR for OMID-226 to fix the same concern in phoenix-omid. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-7249) Starting HTTP Server with Python3 For website validation fails with error No module named SimpleHTTPServer

2024-03-04 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17823475#comment-17823475
 ] 

Istvan Toth commented on PHOENIX-7249:
--

{quote}Istvan Toth those SSL changes didn't work out{quote}
Pity.
I am not sure that there IS a solution, but I was hoping that you had found one.

We should really fix and update the whole site build pipeline.

> Starting HTTP Server with Python3 For website validation fails with error No 
> module named SimpleHTTPServer 
> ---
>
> Key: PHOENIX-7249
> URL: https://issues.apache.org/jira/browse/PHOENIX-7249
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Anchal Kejriwal
>Assignee: Anchal Kejriwal
>Priority: Minor
>  Labels: website
> Attachments: phoenix-website-1.patch, phoenix-website.patch
>
>
> [https://phoenix.apache.org/building_website.html]
>  * cd site/publish
>  * python -m SimpleHTTPServer 8000
> Running the above with Python 3 gives the error {{{}No module named 
> SimpleHTTPServer{}}}. This is because in Python 3, SimpleHTTPServer has been 
> merged into the {{http.server}} module. 
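Under Python 3 the documented command becomes `python3 -m http.server 8000`. As a sketch (the `serve` helper, port, and directory below are illustrative, not from the patch), the same handler is also usable programmatically:

```python
# "python -m SimpleHTTPServer 8000" (Python 2) becomes
# "python3 -m http.server 8000" (Python 3), because SimpleHTTPServer was
# merged into the http.server module.
import functools
import http.server
import socketserver

def serve(directory=".", port=8000):
    # Bind a TCP server that serves static files from `directory`,
    # mirroring what "python3 -m http.server" does.
    handler = functools.partial(
        http.server.SimpleHTTPRequestHandler, directory=directory)
    return socketserver.TCPServer(("127.0.0.1", port), handler)

if __name__ == "__main__":
    with serve(port=0) as server:  # port 0: let the OS pick a free port
        print(server.server_address[1] > 0)
```

For validating the generated site, the one-liner `cd site/publish && python3 -m http.server 8000` is all that is needed.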



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-7249) Starting HTTP Server with Python3 For website validation fails with error No module named SimpleHTTPServer

2024-03-04 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17823443#comment-17823443
 ] 

Istvan Toth commented on PHOENIX-7249:
--

Actually, SSL issues are a known problem.

If those options let us build without having to use custom Maven cache setups, 
then we want them in that script.
Can you open another ticket with those changes, [~anchalk1]?

> Starting HTTP Server with Python3 For website validation fails with error No 
> module named SimpleHTTPServer 
> ---
>
> Key: PHOENIX-7249
> URL: https://issues.apache.org/jira/browse/PHOENIX-7249
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Anchal Kejriwal
>Assignee: Anchal Kejriwal
>Priority: Minor
>  Labels: website
> Attachments: phoenix-website-1.patch, phoenix-website.patch
>
>
> [https://phoenix.apache.org/building_website.html]
>  * cd site/publish
>  * python -m SimpleHTTPServer 8000
> Running the above with Python 3 gives the error {{{}No module named 
> SimpleHTTPServer{}}}. This is because in Python 3, SimpleHTTPServer has been 
> merged into the {{http.server}} module. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-7249) Starting HTTP Server with Python3 For website validation fails with error No module named SimpleHTTPServer

2024-03-04 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17823230#comment-17823230
 ] 

Istvan Toth commented on PHOENIX-7249:
--

I see you have also changed the maven build command.

What Java and maven versions does that command work with ?

> Starting HTTP Server with Python3 For website validation fails with error No 
> module named SimpleHTTPServer 
> ---
>
> Key: PHOENIX-7249
> URL: https://issues.apache.org/jira/browse/PHOENIX-7249
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Anchal Kejriwal
>Assignee: Anchal Kejriwal
>Priority: Minor
>  Labels: website
> Attachments: phoenix-website.patch
>
>
> [https://phoenix.apache.org/building_website.html]
>  * cd site/publish
>  * python -m SimpleHTTPServer 8000
> Running the above with Python 3 gives the error {{{}No module named 
> SimpleHTTPServer{}}}. This is because in Python 3, SimpleHTTPServer has been 
> merged into the {{http.server}} module. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-7249) Generate Phoenix Website fails with error No module named SimpleHTTPServer

2024-03-04 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17823097#comment-17823097
 ] 

Istvan Toth commented on PHOENIX-7249:
--

Can you provide a patch for the page to add python3 info ?

> Generate Phoenix Website fails with error No module named SimpleHTTPServer
> --
>
> Key: PHOENIX-7249
> URL: https://issues.apache.org/jira/browse/PHOENIX-7249
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Anchal Kejriwal
>Assignee: Anchal Kejriwal
>Priority: Trivial
>
> [https://phoenix.apache.org/building_website.html]
>  * cd site/publish
>  * python -m SimpleHTTPServer 8000
> Running the above with Python 3 gives the error {{{}No module named 
> SimpleHTTPServer{}}}. This is because in Python 3, SimpleHTTPServer has been 
> merged into the {{http.server}} module. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-7245) NPE in Phoenix Coproc leading to Region Server crash

2024-03-01 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17822560#comment-17822560
 ] 

Istvan Toth commented on PHOENIX-7245:
--

5.1.1 is pretty old now.

Can you repro this with 5.1.3 ?


> NPE in Phoenix Coproc leading to Region Server crash
> 
>
> Key: PHOENIX-7245
> URL: https://issues.apache.org/jira/browse/PHOENIX-7245
> Project: Phoenix
>  Issue Type: Bug
>  Components: phoenix
>Affects Versions: 5.1.1
>Reporter: Ravi Kishore Valeti
>Priority: Major
>
> In our Production, while investigating Region Server crashes, we found that 
> it is due to Phoenix coproc throwing Null Pointer Exception in 
> IndexRegionObserver.postBatchMutateIndispensably() method.
> Below are the logs
> {code:java}
> 2024-02-26 13:52:40,716 ERROR [r.default.FPBQ.Fifo.handler=216,queue=8,port=x] coprocessor.CoprocessorHost - The coprocessor org.apache.phoenix.hbase.index.IndexRegionObserver threw java.lang.NullPointerException
> java.lang.NullPointerException
> at org.apache.phoenix.hbase.index.IndexRegionObserver.postBatchMutateIndispensably(IndexRegionObserver.java:1301)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1028)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1025)
> at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558)
> at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1025)
> at org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.doPostOpCleanupForMiniBatch(HRegion.java:4134)
> at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4573)
> at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4447)
> at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4369)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1033)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:951)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:916)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2892)
> at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:45961)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:415)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
> at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:102)
> at org.apache.hadoop.hbase.ipc.RpcHandler.run(RpcHandler.java:82)
> 2024-02-26 13:52:40,725 ERROR [r.default.FPBQ.Fifo.handler=216,queue=8,port=x] regionserver.HRegionServer - * ABORTING region server ,x,1708268161243: The coprocessor org.apache.phoenix.hbase.index.IndexRegionObserver threw java.lang.NullPointerException *
> java.lang.NullPointerException
> at org.apache.phoenix.hbase.index.IndexRegionObserver.postBatchMutateIndispensably(IndexRegionObserver.java:1301)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1028)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$30.call(RegionCoprocessorHost.java:1025)
> at org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithoutResult.callObserver(CoprocessorHost.java:558)
> at org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:631)
> at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1025)
> at org.apache.hadoop.hbase.regionserver.HRegion$MutationBatchOperation.doPostOpCleanupForMiniBatch(HRegion.java:4134)
> at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutate(HRegion.java:4573)
> at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4447)
> at org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:4369)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:1033)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicBatchOp(RSRpcServices.java:951)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:916)
> at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2892)
> at 

[jira] [Commented] (PHOENIX-7237) Pherf unit tests PherfTest and ResourceTest are failing

2024-02-29 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17822397#comment-17822397
 ] 

Istvan Toth commented on PHOENIX-7237:
--

I couldn't repro this, [~Ddupg].
What is the maven command line you use ?

> Pherf unit tests PherfTest and ResourceTest are failing
> ---
>
> Key: PHOENIX-7237
> URL: https://issues.apache.org/jira/browse/PHOENIX-7237
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.2.0, 5.1.3
>Reporter: Sun Xin
>Priority: Major
>
> I got the same problem as PHOENIX-5776 in the master and 5.1 branches.
> {code:java}
> org.apache.phoenix.pherf.exception.PherfException: Could not load resources: 
> /datamodel/alter_table_add.sql {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-7241) SaltedTableMergeBucketsIT is very slow on HBase 3.0

2024-02-28 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17821624#comment-17821624
 ] 

Istvan Toth commented on PHOENIX-7241:
--

I don't see this after increasing the max heap size, so this may have been 
simply due to too much GC because the heap usage was on the limit.

> SaltedTableMergeBucketsIT is very slow on HBase 3.0
> ---
>
> Key: PHOENIX-7241
> URL: https://issues.apache.org/jira/browse/PHOENIX-7241
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.3.0
>Reporter: Istvan Toth
>Priority: Minor
>
> It runs for more than 20 minutes, while it finishes in less than 10 with 
> HBase 2 (on the same machine).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-7240) Got compile error in building phoenix query server

2024-02-27 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17821398#comment-17821398
 ] 

Istvan Toth commented on PHOENIX-7240:
--

Yes, this is a known issue with ZK 3.8.
We need to exclude every possible logging backend from ZooKeeper.
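As a sketch (the exact dependency to patch in the queryserver pom is an assumption; the banned coordinates are taken from the enforcer error below), the exclusion would look like:

```xml
<!-- Hypothetical sketch: strip ZooKeeper 3.8's logback backend so the
     "banned-other-logging-framework" enforcer rule passes -->
<dependency>
  <groupId>org.apache.zookeeper</groupId>
  <artifactId>zookeeper</artifactId>
  <exclusions>
    <exclusion>
      <groupId>ch.qos.logback</groupId>
      <artifactId>logback-core</artifactId>
    </exclusion>
    <exclusion>
      <groupId>ch.qos.logback</groupId>
      <artifactId>logback-classic</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```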


> Got compile error in building phoenix query server
> --
>
> Key: PHOENIX-7240
> URL: https://issues.apache.org/jira/browse/PHOENIX-7240
> Project: Phoenix
>  Issue Type: Improvement
>  Components: queryserver
>Reporter: Nikita Pande
>Priority: Major
>
>  mvn -Pbuild-with-jdk17 -DskipTests=true  *-Dzookeeper.version=3.8.3* clean 
> install
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-enforcer-plugin:3.3.0:enforce 
> (banned-other-logging-framework) on project phoenix-queryserver: 
> [ERROR] Rule 0: org.apache.maven.enforcer.rules.dependency.BannedDependencies 
> failed with message:
> [ERROR] We do not allow other logging frameworks as now we use log4j2
> [ERROR] org.apache.phoenix:phoenix-queryserver:jar:6.0.1-SNAPSHOT
> [ERROR]    org.apache.hadoop:hadoop-common:jar:3.1.4
> [ERROR]       org.apache.zookeeper:zookeeper:jar:3.8.3
> [ERROR]          ch.qos.logback:logback-core:jar:1.2.10 <--- banned via the 
> exclude/include list
> [ERROR]          ch.qos.logback:logback-classic:jar:1.2.10 <--- banned via 
> the exclude/include list
>  
> However it passes with current zookeeper version *zookeeper.version=3.5.8*



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-7012) Expose keystore_type parameter in sqlline-thin.py

2024-02-27 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17821334#comment-17821334
 ] 

Istvan Toth commented on PHOENIX-7012:
--

Avatica 1.24.0 includes this feature; we can merge this now.

> Expose keystore_type parameter in sqlline-thin.py
> -
>
> Key: PHOENIX-7012
> URL: https://issues.apache.org/jira/browse/PHOENIX-7012
> Project: Phoenix
>  Issue Type: Improvement
>  Components: queryserver
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> Add a new CLI option to easily set the new keystore_type avatica client 
> parameter for sqlline_thin.py



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-7236) Fix release scripts for 5.2

2024-02-26 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820777#comment-17820777
 ] 

Istvan Toth commented on PHOENIX-7236:
--

Thanks for the reviews [~vjasani] and [~rajeshbabu].

> Fix release scripts for 5.2
> ---
>
> Key: PHOENIX-7236
> URL: https://issues.apache.org/jira/browse/PHOENIX-7236
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.3.0
>
>
> We see problems with the release scripts when trying to release 5.2.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-7236) Fix release scripts for 5.2

2024-02-26 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820776#comment-17820776
 ] 

Istvan Toth commented on PHOENIX-7236:
--

Committed to master.
I didn't backport it, as we can use master to run the scripts for any release.
(but feel free to backport)

> Fix release scripts for 5.2
> ---
>
> Key: PHOENIX-7236
> URL: https://issues.apache.org/jira/browse/PHOENIX-7236
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.3.0
>
>
> We see problems with the release scripts when trying to release 5.2.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-7221) Manage requests-gssapi version for Phython 3.7 and lower

2024-02-26 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820736#comment-17820736
 ] 

Istvan Toth commented on PHOENIX-7221:
--

Finding a solution that works with all python/setuptools/pip versions is an 
absolute sh*tshow.
It makes Java dependency hell look like a walk in the park.

> Manage requests-gssapi version for Phython 3.7 and lower
> 
>
> Key: PHOENIX-7221
> URL: https://issues.apache.org/jira/browse/PHOENIX-7221
> Project: Phoenix
>  Issue Type: Bug
>  Components: python, queryserver
>Affects Versions: queryserver-6.0.0, python-phoenixdb-1.2.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Critical
>
> The minimum Python requirement for requests-gssapi 1.3 is 3.8.
> Make sure we use 1.2.x for Python 3.7 and lower.
> We can use the existing conditional version settings for Python 2 as a 
> template.
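A minimal sketch of such a conditional pin (the version bounds follow the ticket text; the `requests` pin and the exact mechanism in phoenixdb's setup.py are assumptions):

```python
# Pick a requests-gssapi requirement based on the running interpreter:
# requests-gssapi 1.3 requires Python 3.8+, so stay on 1.2.x below that.
import sys

if sys.version_info >= (3, 8):
    GSSAPI_REQ = "requests-gssapi>=1.3"
else:
    GSSAPI_REQ = "requests-gssapi<1.3"

# This string would then be added to install_requires in setup.py.
install_requires = ["requests>=2.7.0", GSSAPI_REQ]
```

Equivalently, a PEP 508 environment marker such as `requests-gssapi<1.3; python_version < "3.8"` expresses the same constraint declaratively.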



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-6811) Readd phoenixdb requirements

2024-02-26 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820718#comment-17820718
 ] 

Istvan Toth commented on PHOENIX-6811:
--

According to :
https://github.com/pypa/pip/issues/25

This seems to be a bug in gssapi, which can hopefully be worked around by 
adding Cython to setup_requires in phoenixdb.

> Readd phoenixdb requirements
> 
>
> Key: PHOENIX-6811
> URL: https://issues.apache.org/jira/browse/PHOENIX-6811
> Project: Phoenix
>  Issue Type: Bug
>  Components: python
>Affects Versions: python-phoenixdb-1.2.0
>Reporter: Richárd Antal
>Assignee: Richárd Antal
>Priority: Major
> Fix For: python-phoenixdb-1.2.1
>
>
> Istvan found an error during phoenixdb 1.2.1 rc0 testing:
> python setup.py develop install Failed because with the following error:
> {code:java}
> Running gssapi-1.8.1/setup.py -q bdist_egg --dist-dir 
> /tmp/easy_install-pl7yza0i/gssapi-1.8.1/egg-dist-tmp-7b8ffcp4
> Traceback (most recent call last):
>   File 
> "/root/asd/phoenix-queryserver/python-phoenixdb/e/lib/python3.6/site-packages/setuptools/sandbox.py",
>  line 156, in save_modules
>     yield saved
>   File 
> "/root/asd/phoenix-queryserver/python-phoenixdb/e/lib/python3.6/site-packages/setuptools/sandbox.py",
>  line 198, in setup_context
>     yield
>   File 
> "/root/asd/phoenix-queryserver/python-phoenixdb/e/lib/python3.6/site-packages/setuptools/sandbox.py",
>  line 259, in run_setup
>     _execfile(setup_script, ns)
>   File 
> "/root/asd/phoenix-queryserver/python-phoenixdb/e/lib/python3.6/site-packages/setuptools/sandbox.py",
>  line 46, in _execfile
>     exec(code, globals, locals)
>   File "/tmp/easy_install-pl7yza0i/gssapi-1.8.1/setup.py", line 18, in 
> 
>     #
> ModuleNotFoundError: No module named 'Cython'{code}
> After some investigation I found out that pip install dependencies is not 
> equivalent with adding the requirements to install_requires list.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-7236) Fix release scripts for 5.2

2024-02-26 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820646#comment-17820646
 ] 

Istvan Toth commented on PHOENIX-7236:
--

The Dockerfile update is taken from the current HBase release script.

The bash -x change will hopefully make future debugging easier.


> Fix release scripts for 5.2
> ---
>
> Key: PHOENIX-7236
> URL: https://issues.apache.org/jira/browse/PHOENIX-7236
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> We see problems with the release scripts when trying to release 5.2.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-7236) Fix release scripts for 5.2

2024-02-25 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820608#comment-17820608
 ] 

Istvan Toth commented on PHOENIX-7236:
--

The Maven version in the Docker image is too old for the plugins we use.

> Fix release scripts for 5.2
> ---
>
> Key: PHOENIX-7236
> URL: https://issues.apache.org/jira/browse/PHOENIX-7236
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> We see problems with the release scripts when trying to release 5.2.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-7236) Fix release scripts for 5.2

2024-02-25 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17820570#comment-17820570
 ] 

Istvan Toth commented on PHOENIX-7236:
--

When using curl to access the ASF git repo, we need to specify -L to handle the 
redirect.

> Fix release scripts for 5.2
> ---
>
> Key: PHOENIX-7236
> URL: https://issues.apache.org/jira/browse/PHOENIX-7236
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.2.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> We see problems with the release scripts when trying to release 5.2.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-7216) Bump Hadoop version to 3.2.4 for 2.5.x profile

2024-02-22 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819938#comment-17819938
 ] 

Istvan Toth commented on PHOENIX-7216:
--

Backported to 5.1.
Thanks for the review and help [~vjasani].
FYI [~rajeshbabu].

> Bump Hadoop version to 3.2.4 for 2.5.x profile
> --
>
> Key: PHOENIX-7216
> URL: https://issues.apache.org/jira/browse/PHOENIX-7216
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.2.0, 5.1.4
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.2.0, 5.1.4
>
>
> This was discussed on the mailing list.
> We may want to bump other Hadoop versions as well, and we may want to bump 
> the 2.5 profile to Hadoop 3.3, but I want to make sure that at least this one 
> makes it into 5.2.0, while we test other updates.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-7175) Set java.io.tmpdir to the maven build directory for tests

2024-02-21 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819489#comment-17819489
 ] 

Istvan Toth commented on PHOENIX-7175:
--

This is done; we have a follow-up ticket for the outstanding issues.

> Set java.io.tmpdir to the maven build directory for tests
> -
>
> Key: PHOENIX-7175
> URL: https://issues.apache.org/jira/browse/PHOENIX-7175
> Project: Phoenix
>  Issue Type: Bug
>  Components: connectors, core, queryserver
>Reporter: Istvan Toth
>Assignee: Divneet Kaur
>Priority: Minor
>  Labels: test
> Fix For: 5.2.0, 5.1.4
>
>
> Our tests are currently using a global tmpdir.
> This causes conflicts when running multiple test runs on the same machine.
> Set java.io.tmpdir to the build directory.
> We can copy this from HBase:
> https://github.com/apache/hbase/blob/a09305d5854fc98300426271fad3b53a69d2ae71/pom.xml#L1879
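The HBase approach referenced above boils down to pointing the test JVMs at a per-build tmpdir via the surefire/failsafe argLine, roughly like this (a sketch; the exact property names used in the Phoenix poms may differ):

```xml
<!-- In the surefire/failsafe plugin configuration: use a directory
     under target/ instead of the global /tmp for each test JVM -->
<argLine>-Djava.io.tmpdir=${project.build.directory}/tmp</argLine>
```

The `${project.build.directory}/tmp` directory also has to exist before the tests start, which HBase handles with a small antrun `mkdir` step.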



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Comment Edited] (PHOENIX-7222) Fix test docker image and add Python 3.12 to supported versions and the test matrix

2024-02-20 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818697#comment-17818697
 ] 

Istvan Toth edited comment on PHOENIX-7222 at 2/21/24 7:10 AM:
---

This explains the cause:

https://tox.wiki/en/latest/faq.html#testing-end-of-life-python-versions


was (Author: stoty):
This explain the cause:

https://tox.wiki/en/latest/faq.html#testing-end-of-life-python-versions

> Fix test docker image and add Python 3.12 to supported versions and the test 
> matrix
> ---
>
> Key: PHOENIX-7222
> URL: https://issues.apache.org/jira/browse/PHOENIX-7222
> Project: Phoenix
>  Issue Type: Task
>  Components: python, queryserver
>Affects Versions: queryserver-6.0.0, python-phoenixdb-1.2.1
>Reporter: Istvan Toth
>Priority: Major
>
> We only declare compatibility and run tests with up to Python 3.11.
> Add Python 3.12 to the supported version list and the test environments.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-6898) Index tests fail with HBase 2.5

2024-02-20 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819090#comment-17819090
 ] 

Istvan Toth commented on PHOENIX-6898:
--

This was fixed by reverting PHOENIX-6884.

> Index tests fail with HBase 2.5
> ---
>
> Key: PHOENIX-6898
> URL: https://issues.apache.org/jira/browse/PHOENIX-6898
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.2.0, 5.1.4
>Reporter: Istvan Toth
>Assignee: Tanuj Khurana
>Priority: Blocker
>
> A lot of indexing tests fail with HBase 2.5.
> We haven't had a successful 2.5 run on master with HBase 2.5 since last 
> August.
> The last successful run on 5.1 was on Jan 26.
> [https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/5.1/]
> [https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/master/]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-7093) PhoenixConnection doesn't close its Statements on close()

2024-02-20 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17819088#comment-17819088
 ] 

Istvan Toth commented on PHOENIX-7093:
--

The fix for this was committed in the parent ticket, PHOENIX-7095, [~vjasani].


> PhoenixConnection doesn't close its Statements on close()
> -
>
> Key: PHOENIX-7093
> URL: https://issues.apache.org/jira/browse/PHOENIX-7093
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> When a PhoenixConnection is closed, its child Statements are never closed.
> This also means the ResultSets are never closed properly.
> Any open network connections eventually time out, and the objects get GCd, 
> but this is still iffy.
> It also kills any chance to get usable trace spans, as the spans are 
> generally marked done in close().



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-7221) Manage requests-gssapi version for Phython 3.7 and lower

2024-02-20 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818701#comment-17818701
 ] 

Istvan Toth commented on PHOENIX-7221:
--

We don't actually run Kerberos tests from Docker.

> Manage requests-gssapi version for Phython 3.7 and lower
> 
>
> Key: PHOENIX-7221
> URL: https://issues.apache.org/jira/browse/PHOENIX-7221
> Project: Phoenix
>  Issue Type: Bug
>  Components: python, queryserver
>Affects Versions: queryserver-6.0.0, python-phoenixdb-1.2.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Critical
>
> The minimum Python requirement for requests-gssapi 1.3 is 3.8.
> Make sure we use 1.2.x for Python 3.7 and lower.
> We can use the existing conditional version settings for Python 2 as a 
> template.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-7222) Fix test docker image and add Python 3.12 to supported versions and the test matrix

2024-02-20 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818697#comment-17818697
 ] 

Istvan Toth commented on PHOENIX-7222:
--

This explains the cause:

https://tox.wiki/en/latest/faq.html#testing-end-of-life-python-versions

> Fix test docker image and add Python 3.12 to supported versions and the test 
> matrix
> ---
>
> Key: PHOENIX-7222
> URL: https://issues.apache.org/jira/browse/PHOENIX-7222
> Project: Phoenix
>  Issue Type: Task
>  Components: python, queryserver
>Affects Versions: queryserver-6.0.0, python-phoenixdb-1.2.1
>Reporter: Istvan Toth
>Priority: Major
>
> We only declare compatibility and run tests with up to Python 3.11.
> Add Python 3.12 to the supported version list and the test environments.
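As a sketch, the change amounts to extending the tox envlist (the actual env names and layout in python-phoenixdb's tox.ini may differ):

```ini
[tox]
envlist = py{37,38,39,310,311,312}
```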



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-7221) Manage requests-gssapi version for Phython 3.7 and lower

2024-02-19 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818671#comment-17818671
 ] 

Istvan Toth commented on PHOENIX-7221:
--

We cannot easily test this without fixing the test Docker image for older 
Python versions.

> Manage requests-gssapi version for Phython 3.7 and lower
> 
>
> Key: PHOENIX-7221
> URL: https://issues.apache.org/jira/browse/PHOENIX-7221
> Project: Phoenix
>  Issue Type: Bug
>  Components: python, queryserver
>Affects Versions: queryserver-6.0.0, python-phoenixdb-1.2.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Critical
>
> The minimum Python requirement for requests-gssapi 1.3 is 3.8.
> Make sure we use 1.2.x for Python 3.7 and lower.
> We can use the existing conditional version settings for Python 2 as a 
> template.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (PHOENIX-7222) Add Python 3.12 to supported versions and the test matrix

2024-02-19 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17818667#comment-17818667
 ] 

Istvan Toth commented on PHOENIX-7222:
--

There is another problem where tests no longer run with Python versions earlier 
than 3.8 in the docker image.
This seems to be caused by the update of the base docker image. (we use -latest)

> Add Python 3.12 to supported versions and the test matrix
> -
>
> Key: PHOENIX-7222
> URL: https://issues.apache.org/jira/browse/PHOENIX-7222
> Project: Phoenix
>  Issue Type: Task
>  Components: python, queryserver
>Affects Versions: queryserver-6.0.0, python-phoenixdb-1.2.1
>Reporter: Istvan Toth
>Priority: Major
>
> We only declare compatibility and run tests with up to Python 3.11.
> Add Python 3.12 to the support version list and the test environments.





[jira] [Commented] (PHOENIX-7115) Create separate handler thread pool for invalidating server metadata cache

2024-02-16 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17817881#comment-17817881
 ] 

Istvan Toth commented on PHOENIX-7115:
--

I can see the same failures on master, [~shahrs87].
Could this fix be applied to the current master / 5.2 branch?

> Create separate handler thread pool for invalidating server metadata cache
> --
>
> Key: PHOENIX-7115
> URL: https://issues.apache.org/jira/browse/PHOENIX-7115
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rushabh Shah
>Assignee: Rushabh Shah
>Priority: Major
>
> MutableIndexFailureIT#testIndexWriteFailure is failing. See 
> [this|https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-PreCommit-GitHub-PR/job/PR-1726/9/#showFailuresLink]
>  build for reference.
> Currently all the invalidateServerMetadataCache invocations are handled by 
> the default RPC handler threads. We have 5 default handler threads configured 
> in tests.
> This test makes sure that writes to the index table fail and, since 
> disableIndexOnWriteFailure is set to true, the index gets disabled. We use 
> around 10 different threads on the client side to write to the base table and 
> the index table, so all 5 handler threads are busy serving writes.
> Since the writes to the index fail, the handler threads try to update the 
> index state to DISABLE.
> On receiving the updateIndexState RPC, MetaDataEndpointImpl tries to 
> invalidate the server metadata cache on all the regionservers. The 
> regionserver hosting the index table doesn't have any available handler 
> threads to serve the invalidateServerMetadataCache requests, hence the test 
> fails.
> The root cause is that all read/write operations and the invalidate server 
> metadata cache operations share the same RPC handler pool. We need a separate 
> thread pool for the invalidate server metadata cache operations.





[jira] [Commented] (PHOENIX-7193) Fix cluster override for mapreduce jobs for non-ZK registries

2024-02-15 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17817720#comment-17817720
 ] 

Istvan Toth commented on PHOENIX-7193:
--

Committed to master.
Waiting for the test results for the backport.

> Fix cluster override for mapreduce jobs for non-ZK registries
> -
>
> Key: PHOENIX-7193
> URL: https://issues.apache.org/jira/browse/PHOENIX-7193
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.2.0, 5.1.4
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.2.0
>
>
> The MapReduce cluster override code is heavily hardwired to ZK cluster 
> definitions.
> When I added the non-ZK registry support, I just hacked the tests so that 
> they run, but I did not add the possibility of overriding the clusters for 
> non-ZK registries.
> This came back to bite me during the HBase 3 work.
> This patch adds new MR parameters to override the Phoenix JDBC input/output 
> URLs, and cleans up some of the accumulated mess.





[jira] [Commented] (PHOENIX-7191) Connectionless CQSs don't work with non-ZK registries

2024-02-13 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17817218#comment-17817218
 ] 

Istvan Toth commented on PHOENIX-7191:
--

Committed to master and 5.1.
Thanks for the review [~vjasani].

> Connectionless CQSs don't work with non-ZK registries
> -
>
> Key: PHOENIX-7191
> URL: https://issues.apache.org/jira/browse/PHOENIX-7191
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.2.0, 5.1.4
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Blocker
> Fix For: 5.2.0, 5.1.4
>
>
> Connectionless CQS only works with ZK;
> it is not implemented correctly for the other registries.
> This causes a lot of test failures with HBase 3, but also breaks some runtime 
> cases where queries are compiled on the server side.





[jira] [Commented] (PHOENIX-6166) Make Tephra support optional for phoenix 5 connectors

2024-02-09 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816133#comment-17816133
 ] 

Istvan Toth commented on PHOENIX-6166:
--

This is still an issue for 5.1.

> Make Tephra support optional for phoenix 5 connectors
> -
>
> Key: PHOENIX-6166
> URL: https://issues.apache.org/jira/browse/PHOENIX-6166
> Project: Phoenix
>  Issue Type: Improvement
>  Components: connectors
>Affects Versions: connectors-6.0.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> We should add an option to trigger the same optional exclusion that 
> PHOENIX-6064 added for the core components.





[jira] [Comment Edited] (PHOENIX-6938) Prepare for the first Phoenix Connectors release

2024-02-09 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816026#comment-17816026
 ] 

Istvan Toth edited comment on PHOENIX-6938 at 2/9/24 11:39 AM:
---

The ZK version bump is another one of those hairy cross-version issues.
Maybe we could just drop the direct dependency, and take the transitive one 
from Phoenix?

Maybe we should do that for a lot of other dependencies as well.


was (Author: stoty):
The ZK version bump is another one of those hairy cross-version issues.
Maybe we could just drop the direct dependency, and take the one from Phoenix ?

Maybe we should do that for a lot of other dependencies as well.

> Prepare for the first Phoenix Connectors release
> 
>
> Key: PHOENIX-6938
> URL: https://issues.apache.org/jira/browse/PHOENIX-6938
> Project: Phoenix
>  Issue Type: Task
>  Components: connectors
>Affects Versions: connectors-6.0.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> Umbrella ticket for tracking tasks that need to be done before releasing 
> connectors-6.0.0







[jira] [Commented] (PHOENIX-6938) Prepare for the first Phoenix Connectors release

2024-02-09 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17816025#comment-17816025
 ] 

Istvan Toth commented on PHOENIX-6938:
--

One basic problem is that PQS is supposed to work with several Phoenix 
releases, but getting the tests working with more than one release is an 
onerous process, which would require several complex profiles.
The connectors should be buildable with either 5.1 or 5.2, but we may not want 
to put in the work to make sure that the tests work with both 5.1 and 5.2.

The next problem is Hive. 
HBase support in Hive 3 is very broken, and getting the connector working 
requires hacking the Hive distro (documented on the website).
This is all fixed in Hive 4, but that requires several minor changes to the 
code. We have these downstream, but I haven't gotten around to
upstreaming them (also, there is no Hive 4 release yet).
This is not a release blocker, just an unfortunate fact.

Another issue is the shaded artifacts.
If we target 5.1, then the shaded artifacts are absolutely required.
For 5.2, the new mapreduce shaded artifact can be used instead; it uses the 
same shading as the shaded connectors.
For $dayjob, the preference is to keep the shaded artifacts, so that we can use 
the codebase for building connectors for 5.1.
I think most users would also prefer if we didn't drop 5.1 compatibility.

I have built and tested the connector with both Spark 3.2 and 3.4, so I don't 
expect serious problems there.

As for the docs, having them in the source means that they always apply to the 
same version.
I agree that maintaining two copies is work that we don't really have the 
throughput for.

Maybe we could just link to the GitHub README.md pages?


> Prepare for the first Phoenix Connectors release
> 
>
> Key: PHOENIX-6938
> URL: https://issues.apache.org/jira/browse/PHOENIX-6938
> Project: Phoenix
>  Issue Type: Task
>  Components: connectors
>Affects Versions: connectors-6.0.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>
> Umbrella ticket for tracking tasks that need to be done before releasing 
> connectors-6.0.0





[jira] [Commented] (PHOENIX-7188) Remove Omid TTable.getTableDescriptor() calls

2024-02-08 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815945#comment-17815945
 ] 

Istvan Toth commented on PHOENIX-7188:
--

Committed to master and 5.1.
Thanks for the review [~rajeshbabu].

> Remove Omid TTable.getTableDescriptor() calls
> -
>
> Key: PHOENIX-7188
> URL: https://issues.apache.org/jira/browse/PHOENIX-7188
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.2.0, 5.1.4
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Critical
> Fix For: 5.2.0, 5.1.4
>
>
> Once we have upgraded to Omid 1.1.1, replace TTable#getTableDescriptor() 
> calls with
> TTable#getHBaseTable()#getTableDescriptor().
> This is to allow for potentially building with future Omid versions, which 
> will remove TTable#getTableDescriptor().





[jira] [Commented] (PHOENIX-7160) Change the TSO default port to be compatible with Omid 1.1.1

2024-02-08 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815940#comment-17815940
 ] 

Istvan Toth commented on PHOENIX-7160:
--

bq.  We do need to wait for the release of Omid 1.1.1, but because Omid does 
not provide a binary release program, Users are required to compile and test 
the source code themselves.

This is faulty logic. Just because Omid doesn't provide binary releases, it does 
not mean that releases are to be ignored.

Updating the port before the Omid used by Phoenix has been updated was a 
mistake.
There is no point in reverting now, since the update is likely to land today, 
but please keep this in mind for the future.

> Change the TSO default port to be compatible with Omid 1.1.1
> 
>
> Key: PHOENIX-7160
> URL: https://issues.apache.org/jira/browse/PHOENIX-7160
> Project: Phoenix
>  Issue Type: Bug
>  Components: omid
>Reporter: Cong Luo
>Assignee: Cong Luo
>Priority: Major
> Fix For: 5.2.0
>
>
> Since 
> [Omid-247|https://github.com/apache/phoenix-omid/commit/7d3cf3e83586bc523e20277113ecb844172cefc0]
>  has been merged, the default port has changed from 54758 to 24758. The TSO 
> configuration in the Phoenix component also needs to be updated.





[jira] [Commented] (PHOENIX-6769) Align mockito version with Hadoop and HBase

2024-02-08 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815931#comment-17815931
 ] 

Istvan Toth commented on PHOENIX-6769:
--

Committed to master.
Thanks for the review [~gjacoby].

> Align mockito version with Hadoop and HBase
> ---
>
> Key: PHOENIX-6769
> URL: https://issues.apache.org/jira/browse/PHOENIX-6769
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.3.0
>Reporter: Andrew Kyle Purtell
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.2.0
>
>
> There is a well known incompatibility between old versions of mockito-all and 
> mockito-core and newer versions. It manifests as 
> IncompatibleClassChangeErrors and other linkage problems. The Hadoop 
> minicluster in versions 3.x embeds mockito classes. 
> To avoid potential problems it would be best to align Phoenix use of mockito 
> (mockito-core) with downstreamers. HBase uses mockito-core 2.28.2 on 
> branch-2.4 and branch-2.5. (Phoenix is on 1.10.19.) I checked Hadoop 
> branch-3.3 and it's also on 2.28.2.
> I recently opened a PR for OMID-226 to fix the same concern in phoenix-omid. 





[jira] [Commented] (PHOENIX-7199) Support declaration of DEFAULT in ALTER statement

2024-02-08 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815602#comment-17815602
 ] 

Istvan Toth commented on PHOENIX-7199:
--

Be aware that the current default implementation in Phoenix is broken.
It does not set a default value, it just replaces any null values seen in the 
cell.
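The distinction matters: a true DEFAULT is materialized once at write time, while the behavior described here substitutes the default for every null seen at read time. A toy model of the two semantics for a single column (plain Python, not Phoenix internals):

```python
class WriteTimeDefault:
    """True DEFAULT semantics: the default is materialized at insert time."""
    def __init__(self, default):
        self.default, self.cells = default, []
    def insert(self, value=...):          # '...' means "column not supplied"
        self.cells.append(self.default if value is ... else value)
    def scan(self):
        return list(self.cells)

class ReadTimeSubstitution:
    """The broken behavior described above: nulls are swapped for the
    default at scan time, so even an explicitly stored null reads back
    as the default."""
    def __init__(self, default):
        self.default, self.cells = default, []
    def insert(self, value=...):
        self.cells.append(None if value is ... else value)
    def scan(self):
        return [self.default if v is None else v for v in self.cells]
```

Under write-time semantics an explicitly inserted null stays null on scan; under read-time substitution it comes back as the default, and changing the default later silently rewrites every null cell already stored.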

> Support declaration of DEFAULT in ALTER statement
> -
>
> Key: PHOENIX-7199
> URL: https://issues.apache.org/jira/browse/PHOENIX-7199
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Nikita Pande
>Priority: Major
>
> I cannot modify the default value of an existing column in Apache Phoenix.
> If I need to change the default value of a column, I have to create a new 
> table with the desired schema and migrate the data from the old table to the 
> new one.
> I think to modify column level DEFAULT, 
> https://issues.apache.org/jira/browse/PHOENIX-4815 needs to be fixed
> Reference from IBM DB2
> {code:java}
> ALTER TABLE MYEMP ALTER COLUMN JOB SET DEFAULT 'PENDING'
> {code}





[jira] [Commented] (PHOENIX-7205) Support DAYS operator as built in functions

2024-02-08 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815566#comment-17815566
 ] 

Istvan Toth commented on PHOENIX-7205:
--

Be aware of the complications caused by the non-standard Phoenix date handling, 
and PHOENIX-5066.

> Support DAYS operator as built in functions
> ---
>
> Key: PHOENIX-7205
> URL: https://issues.apache.org/jira/browse/PHOENIX-7205
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Nikita Pande
>Priority: Major
>
> DAYS: [https://www.ibm.com/docs/en/db2-for-zos/12?topic=functions-days]
> {*}Description{*}: The DAYS function converts each date to a number (the 
> number of days since '0001-01-01'), and subtracting these numbers gives the 
> number of days between the two dates. The output below is *364*, since 2022 
> is not a leap year.
> {*}Example{*}:
> {code:java}
> SELECT (DAYS('2022-12-31') - DAYS('2022-01-01')) AS days_difference FROM 
> sysibm.sysdummy1;
>   {code}
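The semantics can be checked with Python's date.toordinal(), which counts days from the same 0001-01-01 origin as DB2's DAYS (a quick verification sketch, not Phoenix code):

```python
from datetime import date

def days(d: str) -> int:
    """Days since the epoch used by DB2's DAYS: days('0001-01-01') == 1.

    Python's date.toordinal() uses the same convention, so it can stand in
    directly for the DB2 function in this check.
    """
    return date.fromisoformat(d).toordinal()

# 2022 is not a leap year, so the difference is 364
difference = days("2022-12-31") - days("2022-01-01")
```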





[jira] [Commented] (PHOENIX-7188) Remove Omid TTable.getTableDescriptor() calls

2024-02-07 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-7188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17815549#comment-17815549
 ] 

Istvan Toth commented on PHOENIX-7188:
--

We don't need the new Omid for this.
We can simply store the HTable in Phoenix.

> Remove Omid TTable.getTableDescriptor() calls
> -
>
> Key: PHOENIX-7188
> URL: https://issues.apache.org/jira/browse/PHOENIX-7188
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.2.0, 5.1.4
>Reporter: Istvan Toth
>Priority: Critical
>
> Once we have upgraded to Omid 1.1.1, replace TTable#getTableDescriptor() 
> calls with
> TTable#getHBaseTable()#getTableDescriptor().
> This is to allow for potentially building with future Omid versions, which 
> will remove TTable#getTableDescriptor().




