[jira] [Assigned] (PHOENIX-6981) Bump Jackson version to 2.14.1

2023-06-16 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby reassigned PHOENIX-6981:


Assignee: Krzysztof Sobolewski

> Bump Jackson version to 2.14.1
> --
>
> Key: PHOENIX-6981
> URL: https://issues.apache.org/jira/browse/PHOENIX-6981
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Krzysztof Sobolewski
>Assignee: Krzysztof Sobolewski
>Priority: Major
> Fix For: 5.2.0, 5.1.4
>
>
> A never-ending quest to stamp out CVEs and such.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-6981) Bump Jackson version to 2.14.1

2023-06-16 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6981.
--
Fix Version/s: 5.2.0
   5.1.4
   Resolution: Fixed

Merged to master and cherry-picked to 5.1. Thanks for the patch, [~kudivuhadi] 
and welcome!

> Bump Jackson version to 2.14.1
> --
>
> Key: PHOENIX-6981
> URL: https://issues.apache.org/jira/browse/PHOENIX-6981
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Krzysztof Sobolewski
>Priority: Major
> Fix For: 5.2.0, 5.1.4
>
>
> A never-ending quest to stamp out CVEs and such.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6941) Remove Phoenix Flume connector

2023-04-25 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6941:
-
Description: 
The Phoenix flume connector uses an ancient version of Flume.
We do not have volunteers to maintain it.

Remove it.
If/when someone volunteers to maintain it, we can add it back later.

  was:
The Phoenix flume connector uses an ancient version of Phoenix.
We do not have volunteers to maintain it.

Remove it.
If/when someone volunteers to maintain it, we can add it back later.


> Remove Phoenix Flume connector
> --
>
> Key: PHOENIX-6941
> URL: https://issues.apache.org/jira/browse/PHOENIX-6941
> Project: Phoenix
>  Issue Type: Task
>Reporter: Istvan Toth
>Priority: Major
>
> The Phoenix flume connector uses an ancient version of Flume.
> We do not have volunteers to maintain it.
> Remove it.
> If/when someone volunteers to maintain it, we can add it back later.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-6918) ScanningResultIterator should not retry when the query times out

2023-04-18 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6918.
--
Fix Version/s: 5.2.0
   5.1.4
   Resolution: Fixed

Merged to master and cherry-picked back to 5.1. Thanks for the patch, 
[~lokiore]!

> ScanningResultIterator should not retry when the query times out
> 
>
> Key: PHOENIX-6918
> URL: https://issues.apache.org/jira/browse/PHOENIX-6918
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Kadir Ozdemir
>Assignee: Lokesh Khurana
>Priority: Major
> Fix For: 5.2.0, 5.1.4
>
>
> ScanningResultIterator drops dummy results and retries Result#next() in a 
> loop as part of the Phoenix server paging feature.
> Currently, ScanningResultIterator does not check whether the query has already 
> timed out. This means that ScanningResultIterator lets the server keep working 
> on the scan even though the Phoenix query has already timed out. 
> ScanningResultIterator should check whether the query behind the scan has 
> timed out and, if so, return an operation timeout exception as 
> BaseResultIterators does.
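
A minimal sketch of the kind of deadline check described above, in plain Java. The class and member names (QueryDeadline, startTimeMs, timeoutMs, checkNotExpired) are illustrative, not Phoenix's actual internals; the point is only that the dummy-result retry loop should fail fast once the configured query timeout (e.g. phoenix.query.timeoutMs) has elapsed, instead of letting the server keep scanning.

{code:java}
import java.sql.SQLTimeoutException;

// Hypothetical helper: tracks when the query started and how long it may run.
class QueryDeadline {
    private final long startTimeMs;
    private final long timeoutMs;

    QueryDeadline(long startTimeMs, long timeoutMs) {
        this.startTimeMs = startTimeMs;
        this.timeoutMs = timeoutMs;
    }

    // Called from the dummy-result retry loop before asking the server for
    // more work; throws instead of continuing a scan whose query has expired.
    void checkNotExpired() throws SQLTimeoutException {
        long elapsedMs = System.currentTimeMillis() - startTimeMs;
        if (elapsedMs >= timeoutMs) {
            throw new SQLTimeoutException("Query timed out after " + elapsedMs
                    + " ms (limit " + timeoutMs + " ms)");
        }
    }
}
{code}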



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-4863) Setup Travis CI to automatically run all the integration tests when a PR is created on github.com/apache/phoenix

2023-02-08 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-4863.
--
Resolution: Won't Fix

Apache infra no longer supports TravisCI

> Setup Travis CI to automatically run all the integration tests when a PR is 
> created on github.com/apache/phoenix
> 
>
> Key: PHOENIX-4863
> URL: https://issues.apache.org/jira/browse/PHOENIX-4863
> Project: Phoenix
>  Issue Type: Test
>Reporter: Thomas D'Silva
>Assignee: Priyank Porwal
>Priority: Major
>
> Apache Tephra does this (see 
> https://travis-ci.org/apache/incubator-tephra/jobs/278449357) 
> It would be convenient if the tests ran automatically when a PR is created, 
> instead of the contributor having to manually create a patch file, attach it 
> to the JIRA, and click the submit button. 
> See https://docs.travis-ci.com/user/getting-started



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6865) Move CI to Apache Yetus for phoenix-omid

2023-02-06 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6865:
-
Description: 
TravisCI is being EOLed by Apache infra, so all Phoenix subprojects using it 
must switch to some other form of CI. 

We should switch phoenix-omid to use the same Yetus CI we use in the main 
project, as suggested in PHOENIX-6145. 

  was:
TravisCI is being EOLed by Apache infra, so all Phoenix subprojects using it 
must switch to some other form of CI. 

We should switch it to use the same Yetus CI we use in the main project, as 
suggested in PHOENIX-6145. 


> Move CI to Apache Yetus for phoenix-omid
> 
>
> Key: PHOENIX-6865
> URL: https://issues.apache.org/jira/browse/PHOENIX-6865
> Project: Phoenix
>  Issue Type: Sub-task
>  Components: omid
>Reporter: Geoffrey Jacoby
>Priority: Major
>
> TravisCI is being EOLed by Apache infra, so all Phoenix subprojects using it 
> must switch to some other form of CI. 
> We should switch phoenix-omid to use the same Yetus CI we use in the main 
> project, as suggested in PHOENIX-6145. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-6865) Move CI to Apache Yetus for phoenix-omid

2023-02-06 Thread Geoffrey Jacoby (Jira)
Geoffrey Jacoby created PHOENIX-6865:


 Summary: Move CI to Apache Yetus for phoenix-omid
 Key: PHOENIX-6865
 URL: https://issues.apache.org/jira/browse/PHOENIX-6865
 Project: Phoenix
  Issue Type: Sub-task
  Components: omid
Reporter: Geoffrey Jacoby


TravisCI is being EOLed by Apache infra, so all Phoenix subprojects using it 
must switch to some other form of CI. 

We should switch it to use the same Yetus CI we use in the main project, as 
suggested in PHOENIX-6145. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6850) AlterTableWithViewsIT CreateView Props Test Flaps

2022-12-22 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6850:
-
Labels: beginner starter  (was: )

> AlterTableWithViewsIT CreateView Props Test Flaps
> -
>
> Key: PHOENIX-6850
> URL: https://issues.apache.org/jira/browse/PHOENIX-6850
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.2.0, 5.1.3
>Reporter: Geoffrey Jacoby
>Priority: Major
>  Labels: beginner, starter
>
> When running IT tests on the 5.1.3 RC1, and on the master (5.2) HEAD, I get 
> flappy behavior on 
> AlterTableWithViewsIT.testCreateViewWithPropsMaintainsOwnProps. When a 
> particular param set is run standalone, it seems to consistently pass. 
> However, when run in concert with different param iterations, it sometimes 
> generates an NPE on
> {code:java}
> assertFalse(viewTable1.useStatsForParallelization());
> {code}
> This is because viewTable1 had previously been unset for 
> useStatsForParallelization, so it returns null if it doesn't pick up the 
> change to the base table properly.
> This seems to be a caching problem -- populating viewTable1 and viewTable2 
> from a call to PhoenixRuntime.getTableNoCache seems to fix it. 
> However, since the test updates the base table from a global connection, and 
> then tries to access views on that table from a separate tenant connection, 
> it's not obvious to me that the cache for the tenant connection _should_ be 
> expired in this situation, so I'm not sure the caching behavior counts as a 
> bug itself. 
> Interestingly though, 5.1.2 doesn't seem to have this issue. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-6850) AlterTableWithViewsIT CreateView Props Test Flaps

2022-12-22 Thread Geoffrey Jacoby (Jira)
Geoffrey Jacoby created PHOENIX-6850:


 Summary: AlterTableWithViewsIT CreateView Props Test Flaps
 Key: PHOENIX-6850
 URL: https://issues.apache.org/jira/browse/PHOENIX-6850
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.2.0, 5.1.3
Reporter: Geoffrey Jacoby


When running IT tests on the 5.1.3 RC1, and on the master (5.2) HEAD, I get 
flappy behavior on 
AlterTableWithViewsIT.testCreateViewWithPropsMaintainsOwnProps. When a 
particular param set is run standalone, it seems to consistently pass. However, 
when run in concert with different param iterations, it sometimes generates an 
NPE on

{code:java}
assertFalse(viewTable1.useStatsForParallelization());
{code}

This is because viewTable1 had previously been unset for 
useStatsForParallelization, so it returns null if it doesn't pick up the change 
to the base table properly.

This seems to be a caching problem -- populating viewTable1 and viewTable2 from 
a call to PhoenixRuntime.getTableNoCache seems to fix it. 

However, since the test updates the base table from a global connection, and 
then tries to access views on that table from a separate tenant connection, 
it's not obvious to me that the cache for the tenant connection _should_ be 
expired in this situation, so I'm not sure the caching behavior counts as a bug 
itself. 

Interestingly though, 5.1.2 doesn't seem to have this issue. 
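
A sketch of the workaround mentioned above, assuming a getTableNoCache(Connection, String) signature on PhoenixRuntime; the connection URL, tenant id, and view name are placeholders. The call bypasses the tenant connection's client-side metadata cache, so the returned PTable reflects the ALTER on the base table rather than a stale cached copy.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

import org.apache.phoenix.schema.PTable;
import org.apache.phoenix.util.PhoenixRuntime;

public class NoCacheLookup {
    public static void main(String[] args) throws SQLException {
        // Illustrative tenant-specific connection; URL and names are placeholders.
        Properties props = new Properties();
        props.setProperty("TenantId", "tenant1");
        try (Connection tenantConn =
                DriverManager.getConnection("jdbc:phoenix:localhost", props)) {
            // Reads fresh view metadata instead of this connection's cache,
            // picking up the property change made on the base table.
            PTable viewTable1 =
                    PhoenixRuntime.getTableNoCache(tenantConn, "VIEW_OF_TABLE1");
            System.out.println(viewTable1.useStatsForParallelization());
        }
    }
}
{code}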



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-792) Support UPSERT SET command

2022-10-28 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-792:

Attachment: (was: 198416324-4ede2959-0d83-4f64-80e1-5d975d235a13.jpg)

> Support UPSERT SET command
> --
>
> Key: PHOENIX-792
> URL: https://issues.apache.org/jira/browse/PHOENIX-792
> Project: Phoenix
>  Issue Type: Task
>Reporter: James R. Taylor
>Assignee: thrylokya
>
> Support setting values in a table through a new UPSERT SET command like this:
> UPSERT my_table SET title = 'CEO'
> WHERE name = 'John Doe'
> UPSERT my_table SET pay_by_quarter = ARRAY[25000,25000,27000,27000]
> WHERE name = 'Carol';
> UPSERT my_table SET pay_by_quarter[4] = 15000
> WHERE name = 'Carol';
> This would essentially be syntactic sugar and use the same UpsertCompiler, 
> mapping to an UPSERT SELECT command that simply fills in the primary key 
> columns like this:
> UPSERT INTO my_table(name,title) 
> SELECT name,'CEO' FROM my_table
> WHERE name = 'John Doe'
> UPSERT INTO my_table(name, pay_by_quarter[4]) 
> SELECT name,15000 FROM my_table
> WHERE name = 'Carol';



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-6824) Jarvis AI voice Assistant

2022-10-28 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6824.
--
Fix Version/s: (was: thirdparty-2.0.1)
 Release Note:   (was: Haiti Business Network Jarvis AI Smart Voice 
Assistant Investment Programming Sitting System On Hasbro Studio Online AI 
voice Assistant Customer Service Provider Robotics agent Connects )
   Resolution: Invalid

> Jarvis AI voice Assistant 
> --
>
> Key: PHOENIX-6824
> URL: https://issues.apache.org/jira/browse/PHOENIX-6824
> Project: Phoenix
>  Issue Type: Improvement
>  Components: connectors
>Affects Versions: thirdparty-2.0.0
>Reporter: Evens Max Pierrelouis 
>Priority: Major
>  Labels: auto-deprioritized-major
> Attachments: jarvis AI Assistant .pdf
>
>

[jira] [Resolved] (PHOENIX-6806) Protobufs don't compile on ARM-based Macs (Apple Silicon)

2022-10-10 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6806.
--
Fix Version/s: 5.2.0
   5.1.3
   Resolution: Fixed

> Protobufs don't compile on ARM-based Macs (Apple Silicon)
> -
>
> Key: PHOENIX-6806
> URL: https://issues.apache.org/jira/browse/PHOENIX-6806
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Fix For: 5.2.0, 5.1.3
>
>
> This is similar to PHOENIX-6475 for 64-bit Linux ARM. Maven will fail looking 
> for an osx-aarch64 version of protoc 2.5.0.
> However, unlike in the Linux case, we have a good workaround that lets us 
> keep using an official 2.5.0 binary. 
> MacOS versions that support Apple's ARM processors can run x64 code through a 
> translation layer (with a perf hit). Therefore, we can change the 
> phoenix-core pom to use the MacOS x86_64 version of protoc if it detects 
> we're running osx-aarch64. 
> Unlike running _all_ local development through an x64 JDK, which is very 
> slow, protobuf compilation isn't a big part of the build / test time, so the 
> perf hit for just emulating the protobuf compilation shouldn't be too bad. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-6806) Protobufs don't compile on ARM-based Macs (Apple Silicon)

2022-10-07 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby reassigned PHOENIX-6806:


Assignee: Geoffrey Jacoby

> Protobufs don't compile on ARM-based Macs (Apple Silicon)
> -
>
> Key: PHOENIX-6806
> URL: https://issues.apache.org/jira/browse/PHOENIX-6806
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
>
> This is similar to PHOENIX-6475 for 64-bit Linux ARM. Maven will fail looking 
> for an osx-aarch64 version of protoc 2.5.0.
> However, unlike in the Linux case, we have a good workaround that lets us 
> keep using an official 2.5.0 binary. 
> MacOS versions that support Apple's ARM processors can run x64 code through a 
> translation layer (with a perf hit). Therefore, we can change the 
> phoenix-core pom to use the MacOS x86_64 version of protoc if it detects 
> we're running osx-aarch64. 
> Unlike running _all_ local development through an x64 JDK, which is very 
> slow, protobuf compilation isn't a big part of the build / test time, so the 
> perf hit for just emulating the protobuf compilation shouldn't be too bad. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-6806) Protobufs don't compile on ARM-based Macs (Apple Silicon)

2022-10-06 Thread Geoffrey Jacoby (Jira)
Geoffrey Jacoby created PHOENIX-6806:


 Summary: Protobufs don't compile on ARM-based Macs (Apple Silicon)
 Key: PHOENIX-6806
 URL: https://issues.apache.org/jira/browse/PHOENIX-6806
 Project: Phoenix
  Issue Type: Bug
Reporter: Geoffrey Jacoby


This is similar to PHOENIX-6475 for 64-bit Linux ARM. Maven will fail looking 
for an osx-aarch64 version of protoc 2.5.0.

However, unlike in the Linux case, we have a good workaround that lets us keep 
using an official 2.5.0 binary. 

MacOS versions that support Apple's ARM processors can run x64 code through a 
translation layer (with a perf hit). Therefore, we can change the phoenix-core 
pom to use the MacOS x86_64 version of protoc if it detects we're running 
osx-aarch64. 

Unlike running _all_ local development through an x64 JDK, which is very slow, 
protobuf compilation isn't a big part of the build / test time, so the perf hit 
for just emulating the protobuf compilation shouldn't be too bad. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-6802) HA Client Documentation

2022-10-03 Thread Geoffrey Jacoby (Jira)
Geoffrey Jacoby created PHOENIX-6802:


 Summary: HA Client Documentation
 Key: PHOENIX-6802
 URL: https://issues.apache.org/jira/browse/PHOENIX-6802
 Project: Phoenix
  Issue Type: Task
Reporter: Geoffrey Jacoby
 Fix For: 5.2.0


The Phoenix HA client is being released as part of Phoenix 5.2. This will need 
documentation on the Phoenix site explaining how to use it, what use cases it's 
suited for, and use cases (such as mutable tables) for which it isn't. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6740) Upgrade default supported Hadoop 3 version to 3.2.3 for HBase 2.5 profile

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6740:
-
Fix Version/s: (was: 5.2.0)

> Upgrade default supported Hadoop 3 version to 3.2.3 for HBase 2.5 profile
> -
>
> Key: PHOENIX-6740
> URL: https://issues.apache.org/jira/browse/PHOENIX-6740
> Project: Phoenix
>  Issue Type: Task
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
>
> HBase is upgrading the minimum supported Hadoop to 3.2.3 for HBase 2.5, and 
> we have a similar request from dependabot. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6732) PherfMainIT and DataIngestIT have failing tests

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6732:
-
Fix Version/s: (was: 5.2.0)

> PherfMainIT and DataIngestIT have failing tests
> ---
>
> Key: PHOENIX-6732
> URL: https://issues.apache.org/jira/browse/PHOENIX-6732
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.2.0
>Reporter: Geoffrey Jacoby
>Assignee: Jacob Isaac
>Priority: Blocker
>
> PherfMainIT and DataIngestIT have consistently failing IT tests, which can 
> be reproduced both locally and in Yetus. (This was shown recently in the test 
> run for PHOENIX-6554, which is a pherf improvement.)
> [ERROR] Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 69.393 s <<< FAILURE! - in org.apache.phoenix.pherf.DataIngestIT
> [ERROR] org.apache.phoenix.pherf.DataIngestIT.testColumnRulesApplied  Time 
> elapsed: 0.369 s  <<< FAILURE!
> java.lang.AssertionError: Expected 100 rows to have been inserted 
> expected:<30> but was:<31>
> [ERROR] org.apache.phoenix.pherf.PherfMainIT.testQueryTimeout  Time elapsed: 
> 15.531 s  <<< ERROR!
> java.io.FileNotFoundException: 
> /tmp/RESULTS/RESULT_COMBINED_2022-06-15_05-12-32_detail.csv (No such file or 
> directory)
> [ERROR] org.apache.phoenix.pherf.PherfMainIT.testNoQueryTimeout  Time 
> elapsed: 9.339 s  <<< ERROR!
> java.io.FileNotFoundException: 
> /tmp/RESULTS/RESULT_COMBINED_2022-06-15_05-12-23_detail.csv (No such file or 
> directory)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6396) PChar illegal data exception should not contain value

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6396:
-
Fix Version/s: 5.2.0

> PChar illegal data exception should not contain value
> -
>
> Key: PHOENIX-6396
> URL: https://issues.apache.org/jira/browse/PHOENIX-6396
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Major
> Fix For: 5.1.1, 4.16.1, 5.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6702) ConcurrentMutationsExtendedIT and PartialIndexRebuilderIT fail on Hbase 2.4.11+

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6702:
-
Fix Version/s: (was: 5.2.0)
   (was: 5.1.3)

> ConcurrentMutationsExtendedIT and PartialIndexRebuilderIT fail on Hbase 
> 2.4.11+
> ---
>
> Key: PHOENIX-6702
> URL: https://issues.apache.org/jira/browse/PHOENIX-6702
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.2.0, 5.1.3
>Reporter: Istvan Toth
>Assignee: Kadir Ozdemir
>Priority: Blocker
> Attachments: bisect.sh
>
>
> On my local machine
> ConcurrentMutationsExtendedIT.testConcurrentUpserts failed 6 out of 10 times 
> while PartialIndexRebuilderIT.testConcurrentUpsertsWithRebuild failed 10 out 
> of 10 times with HBase 2.4.11 (the default build)
>  The same tests succeeded 3 out of 3 times with HBase 2.3.7.
> Either HBase 2.4 has a bug, or our compatibility modules need to be fixed.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6388) Add sampled logging for read repairs

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6388:
-
Fix Version/s: 5.2.0

> Add sampled logging for read repairs
> 
>
> Key: PHOENIX-6388
> URL: https://issues.apache.org/jira/browse/PHOENIX-6388
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Minor
> Fix For: 5.1.1, 4.16.1, 4.17.0, 5.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6388) Add sampled logging for read repairs

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6388:
-
Fix Version/s: (was: 4.17.0)

> Add sampled logging for read repairs
> 
>
> Key: PHOENIX-6388
> URL: https://issues.apache.org/jira/browse/PHOENIX-6388
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Minor
> Fix For: 5.1.1, 4.16.1, 5.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-6462) Index build mapper that failed should not be logging into the PHOENIX_INDEX_TOOL_RESULT table

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6462.
--
Fix Version/s: 5.2.0
   Resolution: Fixed

> Index build mapper that failed should not be logging into the 
> PHOENIX_INDEX_TOOL_RESULT table
> -
>
> Key: PHOENIX-6462
> URL: https://issues.apache.org/jira/browse/PHOENIX-6462
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 5.2.0
>
>
> Today, if a mapper fails it still logs the region into the 
> PHOENIX_INDEX_TOOL_RESULT table. This causes us to mistakenly think that the 
> mapper succeeded, and incremental rebuilds will be misled.
>  
> [~swaroopa] [~tkhurana]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-6462) Index build mapper that failed should not be logging into the PHOENIX_INDEX_TOOL_RESULT table

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6462?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby reassigned PHOENIX-6462:


Assignee: Gokcen Iskender

> Index build mapper that failed should not be logging into the 
> PHOENIX_INDEX_TOOL_RESULT table
> -
>
> Key: PHOENIX-6462
> URL: https://issues.apache.org/jira/browse/PHOENIX-6462
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
>
> Today, if a mapper fails it still logs the region into the 
> PHOENIX_INDEX_TOOL_RESULT table. This causes us to mistakenly think that the 
> mapper succeeded, and incremental rebuilds will be misled.
>  
> [~swaroopa] [~tkhurana]



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-6485) Clean up classpath in .py scripts

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6485.
--
Fix Version/s: 5.2.0
   Resolution: Fixed

> Clean up classpath in .py scripts
> -
>
> Key: PHOENIX-6485
> URL: https://issues.apache.org/jira/browse/PHOENIX-6485
> Project: Phoenix
>  Issue Type: Task
>Reporter: Richárd Antal
>Assignee: Richárd Antal
>Priority: Major
> Fix For: 5.2.0
>
>
> Clean up classpath in .py scripts and replace all phoenix-client JARs with 
> phoenix-client-embedded + log4j backend jar



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6544) Adding metadata inconsistency metric

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6544:
-
Fix Version/s: 5.2.0
   (was: 5.1.0)

> Adding metadata inconsistency metric
> 
>
> Key: PHOENIX-6544
> URL: https://issues.apache.org/jira/browse/PHOENIX-6544
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Minor
> Fix For: 4.16.1, 5.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-5838) Add Histograms for Table level Metrics.

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5838:
-
Fix Version/s: 5.2.0

> Add Histograms for  Table level Metrics.
> 
>
> Key: PHOENIX-5838
> URL: https://issues.apache.org/jira/browse/PHOENIX-5838
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: vikas meka
>Assignee: vikas meka
>Priority: Major
>  Labels: metric-collector, metrics
> Fix For: 5.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6572) Add Metrics for SystemCatalog Table

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6572:
-
Fix Version/s: 5.2.0

> Add Metrics for SystemCatalog Table
> ---
>
> Key: PHOENIX-6572
> URL: https://issues.apache.org/jira/browse/PHOENIX-6572
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: vikas meka
>Assignee: Xinyi Yan
>Priority: Major
> Fix For: 5.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-6561) Allow pherf to intake phoenix Connection properties as argument.

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6561.
--
Fix Version/s: (was: 4.17.0)
   (was: 4.16.2)
   Resolution: Fixed

Resolving because the broken 4.x patch will never be released as 4.x is EOL. 

> Allow pherf to intake phoenix Connection properties as argument.
> 
>
> Key: PHOENIX-6561
> URL: https://issues.apache.org/jira/browse/PHOENIX-6561
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lokesh Khurana
>Assignee: Lokesh Khurana
>Priority: Minor
> Fix For: 5.2.0, 5.1.3
>
>
> Currently pherf doesn't allow connection properties to be passed as 
> arguments. It allows some cases through scenario files, but that might not 
> work for dynamic property selection; also, WriteWorkload allows no properties 
> to be passed during connection creation.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6603) Create SYSTEM.TRANSFORM table

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6603:
-
Fix Version/s: 5.2.0

> Create SYSTEM.TRANSFORM table
> -
>
> Key: PHOENIX-6603
> URL: https://issues.apache.org/jira/browse/PHOENIX-6603
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 5.2.0
>
>
> SYSTEM.TRANSFORM is a table for bookkeeping the transform process



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6612) Add TransformTool

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6612:
-
Fix Version/s: 5.2.0

> Add TransformTool
> -
>
> Key: PHOENIX-6612
> URL: https://issues.apache.org/jira/browse/PHOENIX-6612
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 5.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6617) IndexRegionObserver should create mutations for the transforming table

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6617:
-
Fix Version/s: 5.2.0

> IndexRegionObserver should create mutations for the transforming table
> --
>
> Key: PHOENIX-6617
> URL: https://issues.apache.org/jira/browse/PHOENIX-6617
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 5.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6620) TransformTool should fix the unverified rows and do validation

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6620:
-
Fix Version/s: 5.2.0

> TransformTool should fix the unverified rows and do validation
> --
>
> Key: PHOENIX-6620
> URL: https://issues.apache.org/jira/browse/PHOENIX-6620
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 5.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6579) ACL check doesn't honor the namespace mapping for mapped views.

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6579:
-
Fix Version/s: 5.2.0

> ACL check doesn't honor the namespace mapping for mapped views.
> ---
>
> Key: PHOENIX-6579
> URL: https://issues.apache.org/jira/browse/PHOENIX-6579
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.2
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
> Fix For: 5.2.0, 5.1.3
>
>
> When namespace mapping and ACLs are enabled and the user tries to create 
> a view on top of an existing HBase table, the query fails if the user doesn't 
> have permissions for the default namespace. 
> {noformat}
> *Error: org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions (user=admin/ad...@example.com, scope=default:my_ns.my_table, 
> action=[READ])
>  at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.requireAccess(PhoenixAccessController.java:606)
>  at 
> org.apache.phoenix.coprocessor.PhoenixAccessController.preCreateTable(PhoenixAccessController.java:201)
>  at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$2.call(PhoenixMetaDataCoprocessorHost.java:171)
>  at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$2.call(PhoenixMetaDataCoprocessorHost.java:168)
>  at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost$PhoenixObserverOperation.callObserver(PhoenixMetaDataCoprocessorHost.java:86)
>  at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.execOperation(PhoenixMetaDataCoprocessorHost.java:106)
>  at 
> org.apache.phoenix.coprocessor.PhoenixMetaDataCoprocessorHost.preCreateTable(PhoenixMetaDataCoprocessorHost.java:168)
>  at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:1900)
>  at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17317)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8313)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2499)
>  at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2481)
>  at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42286)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:418)
>  at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:133)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:338)
>  at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:318) 
> (state=08000,code=101)
>  {noformat}
> That happens because in the MetaData endpoint implementation we are still 
> using _SchemaUtil.getTableNameAsBytes(schemaName, tableName)_ for the mapped 
> view which knows nothing about namespace mapping, so the ACL check is going 
> against 'default:schema.table'. It could be fixed easily by replacing the call 
> with _SchemaUtil.getPhysicalHBaseTableName(schemaName, tableName, 
> isNamespaceMapped).getBytes();_
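
A small sketch contrasting the two SchemaUtil calls named above; the schema/table values and the exact String-based overloads are assumptions for illustration. With namespace mapping enabled, the second form resolves to the physical HBase name (MY_NS:MY_TABLE) that the ACL check must target, instead of the logical default-namespace name (MY_NS.MY_TABLE).

{code:java}
import org.apache.phoenix.util.SchemaUtil;

public class AclTableNameSketch {
    public static void main(String[] args) {
        String schemaName = "MY_NS";
        String tableName = "MY_TABLE";
        boolean isNamespaceMapped = true;

        // Buggy lookup: knows nothing about namespace mapping, so the ACL
        // check ends up against "default:MY_NS.MY_TABLE".
        byte[] logicalName = SchemaUtil.getTableNameAsBytes(schemaName, tableName);

        // Suggested fix: resolves the physical HBase name, "MY_NS:MY_TABLE"
        // when namespace mapping is enabled.
        byte[] physicalName = SchemaUtil.getPhysicalHBaseTableName(
                schemaName, tableName, isNamespaceMapped).getBytes();

        System.out.println(new String(logicalName) + " vs " + new String(physicalName));
    }
}
{code}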



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6622) TransformMonitor should orchestrate transform and do retries

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6622:
-
Fix Version/s: 5.2.0

> TransformMonitor should orchestrate transform and do retries
> 
>
> Key: PHOENIX-6622
> URL: https://issues.apache.org/jira/browse/PHOENIX-6622
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 5.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6639) Read repair of a table after cutover (transform is complete and table is switched)

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6639:
-
Fix Version/s: 5.2.0

> Read repair of a table after cutover (transform is complete and table is 
> switched)
> --
>
> Key: PHOENIX-6639
> URL: https://issues.apache.org/jira/browse/PHOENIX-6639
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 5.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6659) RVC with AND clauses return incorrect result

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6659?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6659:
-
Fix Version/s: 5.2.0
   5.1.3

> RVC with AND clauses return incorrect result
> 
>
> Key: PHOENIX-6659
> URL: https://issues.apache.org/jira/browse/PHOENIX-6659
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.16.1
>Reporter: Xinyi Yan
>Assignee: Gokcen Iskender
>Priority: Critical
> Fix For: 5.2.0, 5.1.3
>
> Attachments: Screen Shot 2022-03-01 at 1.26.44 PM.png
>
>
> CREATE TABLE DUMMY (PK1 VARCHAR NOT NULL, PK2 BIGINT NOT NULL, PK3 BIGINT NOT 
> NULL CONSTRAINT PK PRIMARY KEY (PK1,PK2,PK3));
> UPSERT INTO DUMMY VALUES ('a',0,1);
> UPSERT INTO DUMMY VALUES ('a',1,1);
> UPSERT INTO DUMMY VALUES ('a',2,1);
> UPSERT INTO DUMMY VALUES ('a',3,1);
> UPSERT INTO DUMMY VALUES ('a',3,2);
> UPSERT INTO DUMMY VALUES ('a',4,1);
>  
> {code:java}
> 0: jdbc:phoenix:localhost> SELECT * FROM DUMMY WHERE (PK1 = 'a') AND 
> (PK1,PK2,PK3) <= ('a',3,1);
> +-----+-----+-----+
> | PK1 | PK2 | PK3 |
> +-----+-----+-----+
> +-----+-----+-----+
> No rows selected (0.045 seconds)
> 0: jdbc:phoenix:localhost> explain SELECT * FROM DUMMY WHERE (PK1 = 'a') AND (PK1,PK2,PK3) <= ('a',3,1);
> +------------------------------------------------------------------------------------------------------+----------------+---------------+
> | PLAN                                                                                                 | EST_BYTES_READ | EST_ROWS_READ |
> +------------------------------------------------------------------------------------------------------+----------------+---------------+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER DUMMY ['a',*] - ['a',-9187343239835811840] | null           | null          |
> |     SERVER FILTER BY FIRST KEY ONLY                                                                  | null           | null          |
> +------------------------------------------------------------------------------------------------------+----------------+---------------+
> 2 rows selected (0.012 seconds)
> 0: jdbc:phoenix:localhost> SELECT * FROM DUMMY WHERE (PK1 = 'a') AND (PK2,PK3) <= (3,1);
> +-----+-----+-----+
> | PK1 | PK2 | PK3 |
> +-----+-----+-----+
> | a   | 0   | 1   |
> | a   | 1   | 1   |
> | a   | 2   | 1   |
> | a   | 3   | 1   |
> +-----+-----+-----+
> 4 rows selected (0.014 seconds)
> 0: jdbc:phoenix:localhost> EXPLAIN SELECT * FROM DUMMY WHERE (PK1 = 'a') AND (PK2,PK3) <= (3,1);
> +-----------------------------------------------------------------------------------+----------------+---------------+
> | PLAN                                                                              | EST_BYTES_READ | EST_ROWS_READ |
> +-----------------------------------------------------------------------------------+----------------+---------------+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER DUMMY ['a',*] - ['a',3] | null           | null          |
> |     SERVER FILTER BY FIRST KEY ONLY                                               | null           | null          |
> +-----------------------------------------------------------------------------------+----------------+---------------+
> 2 rows selected (0.004 seconds) {code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6661) Sqlline does not work on PowerPC linux

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6661:
-
Fix Version/s: 5.2.0
   queryserver-6.0.0

> Sqlline does not work on PowerPC linux
> --
>
> Key: PHOENIX-6661
> URL: https://issues.apache.org/jira/browse/PHOENIX-6661
> Project: Phoenix
>  Issue Type: Bug
>  Components: core, queryserver
> Environment: {noformat}
> # uname -a
> Linux  4.18.0-305.el8.ppc64le #1 SMP Thu Apr 29 08:53:15 
> EDT 2021 ppc64le ppc64le ppc64le GNU/Linux
> # cat /etc/redhat-release
> Red Hat Enterprise Linux release 8.4 (Ootpa)
> # java -version
> openjdk version "11.0.12" 2021-07-20 LTS
> OpenJDK Runtime Environment 18.9 (build 11.0.12+7-LTS)
> OpenJDK 64-Bit Server VM 18.9 (build 11.0.12+7-LTS, mixed mode, 
> sharing){noformat}
>Reporter: Abhishek Jain
>Assignee: Istvan Toth
>Priority: Major
> Fix For: queryserver-6.0.0, 5.2.0
>
>
> When trying to run phoenix-sqlline.py or phoenix-sqlline-thin.py on Linux PPC,
> we get the following exception:
> {noformat}
> Exception in thread "main" com.sun.jna.LastErrorException: [25] Inappropriate 
> ioctl for device
>     at com.sun.jna.Native.invokeVoid(Native Method)
>     at com.sun.jna.Function.invoke(Function.java:415)
>     at com.sun.jna.Function.invoke(Function.java:361)
>     at com.sun.jna.Library$Handler.invoke(Library.java:265)
>     at com.sun.proxy.$Proxy0.ioctl(Unknown Source)
>     at 
> org.jline.terminal.impl.jna.linux.LinuxNativePty.getSize(LinuxNativePty.java:95)
>     at 
> org.jline.terminal.impl.AbstractPosixTerminal.getSize(AbstractPosixTerminal.java:60)
>     at org.jline.terminal.Terminal.getWidth(Terminal.java:196)
>     at sqlline.SqlLine.getConsoleReader(SqlLine.java:594)
>     at sqlline.SqlLine.begin(SqlLine.java:511)
>     at sqlline.SqlLine.start(SqlLine.java:267)
>     at sqlline.SqlLine.main(SqlLine.java:206){noformat}
> Upgrading to the latest sqlline 1.12 will result in the sqlline.py starting 
> normally, but it will not accept any keyboard input.
> Replacing the currently used sqlline-*-jar-with-dependencies.jar JAR with the 
> plain sqlline jar, and NOT adding the JNA and JANSI terminal variants and 
> their dependencies fixes the problem.
> Doing that, however, would break or at least seriously degrade sqlline 
> functionality on Windows.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6661) Sqlline does not work on PowerPC linux

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6661:
-
Fix Version/s: queryserver-6.0.1
   (was: queryserver-6.0.0)

> Sqlline does not work on PowerPC linux
> --
>
> Key: PHOENIX-6661
> URL: https://issues.apache.org/jira/browse/PHOENIX-6661
> Project: Phoenix
>  Issue Type: Bug
>  Components: core, queryserver
> Environment: {noformat}
> # uname -a
> Linux  4.18.0-305.el8.ppc64le #1 SMP Thu Apr 29 08:53:15 
> EDT 2021 ppc64le ppc64le ppc64le GNU/Linux
> # cat /etc/redhat-release
> Red Hat Enterprise Linux release 8.4 (Ootpa)
> # java -version
> openjdk version "11.0.12" 2021-07-20 LTS
> OpenJDK Runtime Environment 18.9 (build 11.0.12+7-LTS)
> OpenJDK 64-Bit Server VM 18.9 (build 11.0.12+7-LTS, mixed mode, 
> sharing){noformat}
>Reporter: Abhishek Jain
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.2.0, queryserver-6.0.1
>
>
> When trying to run phoenix-sqlline.py or phoenix-sqlline-thin.py on Linux PPC,
> we get the following exception:
> {noformat}
> Exception in thread "main" com.sun.jna.LastErrorException: [25] Inappropriate 
> ioctl for device
>     at com.sun.jna.Native.invokeVoid(Native Method)
>     at com.sun.jna.Function.invoke(Function.java:415)
>     at com.sun.jna.Function.invoke(Function.java:361)
>     at com.sun.jna.Library$Handler.invoke(Library.java:265)
>     at com.sun.proxy.$Proxy0.ioctl(Unknown Source)
>     at 
> org.jline.terminal.impl.jna.linux.LinuxNativePty.getSize(LinuxNativePty.java:95)
>     at 
> org.jline.terminal.impl.AbstractPosixTerminal.getSize(AbstractPosixTerminal.java:60)
>     at org.jline.terminal.Terminal.getWidth(Terminal.java:196)
>     at sqlline.SqlLine.getConsoleReader(SqlLine.java:594)
>     at sqlline.SqlLine.begin(SqlLine.java:511)
>     at sqlline.SqlLine.start(SqlLine.java:267)
>     at sqlline.SqlLine.main(SqlLine.java:206){noformat}
> Upgrading to the latest sqlline 1.12 will result in the sqlline.py starting 
> normally, but it will not accept any keyboard input.
> Replacing the currently used sqlline-*-jar-with-dependencies.jar JAR with the 
> plain sqlline jar, and NOT adding the JNA and JANSI terminal variants and 
> their dependencies fixes the problem.
> Doing that, however, would break or at least seriously degrade sqlline 
> functionality on Windows.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6662) Failed to delete rows when PK has one or more DESC column with IN clause

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6662:
-
Fix Version/s: 5.1.3

> Failed to delete rows when PK has one or more DESC column with IN clause
> 
>
> Key: PHOENIX-6662
> URL: https://issues.apache.org/jira/browse/PHOENIX-6662
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.16.1
>Reporter: Xinyi Yan
>Assignee: Gokcen Iskender
>Priority: Critical
> Fix For: 5.2.0, 5.1.3
>
>
> Global connection to create a base table and view.
> {code:java}
> CREATE TABLE IF NOT EXISTS DUMMY.BASE (TETNANT_ID CHAR(15) NOT NULL, PREFIX 
> CHAR(3) NOT NULL, COL1 DATE, COL2 CHAR(15), COL3 DATE, COL4 CHAR(15), COL5 
> DATE CONSTRAINT PK PRIMARY KEY ( TETNANT_ID, PREFIX ) ) MULTI_TENANT=true;
> CREATE VIEW IF NOT EXISTS DUMMY.GLOBAL_VIEW  (PK1 DECIMAL(12, 3) NOT NULL, 
> PK2 BIGINT NOT NULL, COL6 CHAR(15) , COL7 DATE, COL8 BOOLEAN, COL9 CHAR(15), 
> COL10 VARCHAR, COL11 VARCHAR CONSTRAINT PKVIEW PRIMARY KEY (PK1 DESC, PK2)) 
> AS SELECT * FROM DUMMY.BASE WHERE PREFIX = '01A'; {code}
> Tenant connection to create a view and repro the issue
> {code:java}
> 0: jdbc:phoenix:localhost> CREATE VIEW DUMMY."0ph" AS SELECT * FROM 
> DUMMY.GLOBAL_VIEW;
> No rows affected (0.055 seconds)
> 0: jdbc:phoenix:localhost> UPSERT INTO DUMMY."0ph" (PK1,PK2) VALUES (10.0,10);
> 1 row affected (0.038 seconds)
> 0: jdbc:phoenix:localhost> UPSERT INTO DUMMY."0ph" (PK1,PK2) VALUES (20.0,20);
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:localhost> SELECT * FROM DUMMY."0ph";
> +--------+------+------+------+------+------+------+-----+------+------+------+------+-----+
> | PREFIX | COL1 | COL2 | COL3 | COL4 | COL5 | PK1  | PK2 | COL6 | COL7 | COL8 | COL9 | COL |
> +--------+------+------+------+------+------+------+-----+------+------+------+------+-----+
> | 01A    | null |      | null |      | null | 2E+1 | 20  |      | null |      |      |     |
> | 01A    | null |      | null |      | null | 1E+1 | 10  |      | null |      |      |     |
> +--------+------+------+------+------+------+------+-----+------+------+------+------+-----+
> 2 rows selected (0.035 seconds)
> 0: jdbc:phoenix:localhost> DELETE FROM DUMMY."0ph" WHERE (PK1,PK2) IN ((10.0,10),(20.0,20));
> No rows affected (0.024 seconds)
> 0: jdbc:phoenix:localhost> SELECT * FROM DUMMY."0ph";
> +--------+------+------+------+------+------+------+-----+------+------+------+------+-----+
> | PREFIX | COL1 | COL2 | COL3 | COL4 | COL5 | PK1  | PK2 | COL6 | COL7 | COL8 | COL9 | COL |
> +--------+------+------+------+------+------+------+-----+------+------+------+------+-----+
> | 01A    | null |      | null |      | null

[jira] [Updated] (PHOENIX-6662) Failed to delete rows when PK has one or more DESC column with IN clause

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6662:
-
Fix Version/s: 5.2.0

> Failed to delete rows when PK has one or more DESC column with IN clause
> 
>
> Key: PHOENIX-6662
> URL: https://issues.apache.org/jira/browse/PHOENIX-6662
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.16.1
>Reporter: Xinyi Yan
>Assignee: Gokcen Iskender
>Priority: Critical
> Fix For: 5.2.0
>
>
> Global connection to create a base table and view.
> {code:java}
> CREATE TABLE IF NOT EXISTS DUMMY.BASE (TETNANT_ID CHAR(15) NOT NULL, PREFIX 
> CHAR(3) NOT NULL, COL1 DATE, COL2 CHAR(15), COL3 DATE, COL4 CHAR(15), COL5 
> DATE CONSTRAINT PK PRIMARY KEY ( TETNANT_ID, PREFIX ) ) MULTI_TENANT=true;
> CREATE VIEW IF NOT EXISTS DUMMY.GLOBAL_VIEW  (PK1 DECIMAL(12, 3) NOT NULL, 
> PK2 BIGINT NOT NULL, COL6 CHAR(15) , COL7 DATE, COL8 BOOLEAN, COL9 CHAR(15), 
> COL10 VARCHAR, COL11 VARCHAR CONSTRAINT PKVIEW PRIMARY KEY (PK1 DESC, PK2)) 
> AS SELECT * FROM DUMMY.BASE WHERE PREFIX = '01A'; {code}
> Tenant connection to create a view and repro the issue
> {code:java}
> 0: jdbc:phoenix:localhost> CREATE VIEW DUMMY."0ph" AS SELECT * FROM 
> DUMMY.GLOBAL_VIEW;
> No rows affected (0.055 seconds)
> 0: jdbc:phoenix:localhost> UPSERT INTO DUMMY."0ph" (PK1,PK2) VALUES (10.0,10);
> 1 row affected (0.038 seconds)
> 0: jdbc:phoenix:localhost> UPSERT INTO DUMMY."0ph" (PK1,PK2) VALUES (20.0,20);
> 1 row affected (0.008 seconds)
> 0: jdbc:phoenix:localhost> SELECT * FROM DUMMY."0ph";
> +--------+------+------+------+------+------+------+-----+------+------+------+------+-----+
> | PREFIX | COL1 | COL2 | COL3 | COL4 | COL5 | PK1  | PK2 | COL6 | COL7 | COL8 | COL9 | COL |
> +--------+------+------+------+------+------+------+-----+------+------+------+------+-----+
> | 01A    | null |      | null |      | null | 2E+1 | 20  |      | null |      |      |     |
> | 01A    | null |      | null |      | null | 1E+1 | 10  |      | null |      |      |     |
> +--------+------+------+------+------+------+------+-----+------+------+------+------+-----+
> 2 rows selected (0.035 seconds)
> 0: jdbc:phoenix:localhost> DELETE FROM DUMMY."0ph" WHERE (PK1,PK2) IN ((10.0,10),(20.0,20));
> No rows affected (0.024 seconds)
> 0: jdbc:phoenix:localhost> SELECT * FROM DUMMY."0ph";
> +--------+------+------+------+------+------+------+-----+------+------+------+------+-----+
> | PREFIX | COL1 | COL2 | COL3 | COL4 | COL5 | PK1  | PK2 | COL6 | COL7 | COL8 | COL9 | COL |
> +--------+------+------+------+------+------+------+-----+------+------+------+------+-----+
> | 01A    | null |      | null |      | null

[jira] [Resolved] (PHOENIX-6649) TransformTool should transform the tenant view content as well

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6649.
--
Fix Version/s: 5.2.0
   Resolution: Fixed

> TransformTool should transform the tenant view content as well
> --
>
> Key: PHOENIX-6649
> URL: https://issues.apache.org/jira/browse/PHOENIX-6649
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 5.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-6649) TransformTool should transform the tenant view content as well

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby reassigned PHOENIX-6649:


Assignee: Gokcen Iskender

> TransformTool should transform the tenant view content as well
> --
>
> Key: PHOENIX-6649
> URL: https://issues.apache.org/jira/browse/PHOENIX-6649
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Gokcen Iskender
>Assignee: Gokcen Iskender
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-6669) RVC returns a wrong result

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby reassigned PHOENIX-6669:


Assignee: Gokcen Iskender

> RVC returns a wrong result
> --
>
> Key: PHOENIX-6669
> URL: https://issues.apache.org/jira/browse/PHOENIX-6669
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.16.1
>Reporter: Xinyi Yan
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 5.2.0
>
>
> {code:java}
> CREATE TABLE IF NOT EXISTS DUMMY (
>     PK1 VARCHAR NOT NULL,
>     PK2 BIGINT NOT NULL,
>     PK3 BIGINT NOT NULL,
>     PK4 VARCHAR NOT NULL,
>     COL1 BIGINT,
>     COL2 INTEGER,
>     COL3 VARCHAR,
>     COL4 VARCHAR,    CONSTRAINT PK PRIMARY KEY
>     (
>         PK1,
>         PK2,
>         PK3,
>         PK4
>     )
> );UPSERT INTO DUMMY (PK1, PK4, COL1, PK2, COL2, PK3, COL3, COL4)
>             VALUES ('xx', 'xid1', 0, 7, 7, 7, 'INSERT', null);
>  {code}
> The non-RVC query returns no row, but the RVC query returns a wrong result.
> {code:java}
> 0: jdbc:phoenix:localhost> select PK2
> . . . . . . . . . . . . .> from DUMMY
> . . . . . . . . . . . . .> where PK1 ='xx'
> . . . . . . . . . . . . .> and (PK1 > 'xx' AND PK1 <= 'xx')
> . . . . . . . . . . . . .> and (PK2 > 5 AND PK2 <=5)
> . . . . . . . . . . . . .> and (PK3 > 2 AND PK3 <=2);
> +------------------------------------------+
> |                   PK2                    |
> +------------------------------------------+
> +------------------------------------------+
>  No rows selected (0.022 seconds)
> 0: jdbc:phoenix:localhost> select PK2
> . . . . . . . . . . . . .> from DUMMY
> . . . . . . . . . . . . .> where (PK1 = 'xx')
> . . . . . . . . . . . . .> and (PK1, PK2, PK3) > ('xx', 5, 2)
> . . . . . . . . . . . . .> and (PK1, PK2, PK3) <= ('xx', 5, 2);
> +------------------------------------------+
> |                   PK2                    |
> +------------------------------------------+
> | 7                                        |
> +------------------------------------------+
> 1 row selected (0.033 seconds) {code}
> {code:java}
> 0: jdbc:phoenix:localhost> EXPLAIN select PK2 from DUMMY where (PK1 = 'xx') and (PK1, PK2, PK3) > ('xx', 5, 2) and (PK1, PK2, PK3) <= ('xx', 5, 2);
> +-------------------------------------------------------------------------+-----------------+----------------+--+
> |                                   PLAN                                  | EST_BYTES_READ  | EST_ROWS_READ  |  |
> +-------------------------------------------------------------------------+-----------------+----------------+--+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER DUMMY ['xx']  | null            | null           |  |
> |     SERVER FILTER BY FIRST KEY ONLY                                     | null            | null           |  |
> +-------------------------------------------------------------------------+-----------------+----------------+--+
> 2 rows selected (0.024 seconds)
> 0: jdbc:phoenix:localhost> explain select PK2 from DUMMY where PK1 ='xx' and (PK1 > 'xx' AND PK1 <= 'xx') and (PK2 > 5 AND PK2 <=5) and (PK3 > 2 AND PK3 <=2);
> +-----------------------------+-----------------+----------------+--+
> |            PLAN             | EST_BYTES_READ  | EST_ROWS_READ  |  |
> +-----------------------------+-----------------+----------------+--+
> | DEGENERATE SCAN OVER DUMMY  | null            | null           |  |
> +-----------------------------+-----------------+----------------+--+
> 1 row selected (0.015 seconds){code}
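
The wrong result is easiest to see by expanding the row value constructors by hand. A sketch of the standard RVC expansion (worked out here for illustration, not taken from the report itself):

{code:java}
-- (PK1, PK2, PK3) > ('xx', 5, 2) expands, under row-value semantics, to:
--   PK1 > 'xx' OR (PK1 = 'xx' AND (PK2 > 5 OR (PK2 = 5 AND PK3 > 2)))
-- (PK1, PK2, PK3) <= ('xx', 5, 2) expands to:
--   PK1 < 'xx' OR (PK1 = 'xx' AND (PK2 < 5 OR (PK2 = 5 AND PK3 <= 2)))
-- With PK1 pinned to 'xx' by the WHERE clause, the conjunction requires
-- (PK2 > 5 OR (PK2 = 5 AND PK3 > 2)) AND (PK2 < 5 OR (PK2 = 5 AND PK3 <= 2)),
-- which is unsatisfiable. The correct answer is therefore zero rows,
-- matching the DEGENERATE SCAN plan of the equivalent non-RVC query.
{code}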



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-6669) RVC returns a wrong result

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6669.
--
Fix Version/s: 5.2.0
   Resolution: Fixed

This was merged a while back but the JIRA was never resolved.

> RVC returns a wrong result
> --
>
> Key: PHOENIX-6669
> URL: https://issues.apache.org/jira/browse/PHOENIX-6669
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.16.1
>Reporter: Xinyi Yan
>Priority: Major
> Fix For: 5.2.0
>
>
> {code:java}
> CREATE TABLE IF NOT EXISTS DUMMY (
>     PK1 VARCHAR NOT NULL,
>     PK2 BIGINT NOT NULL,
>     PK3 BIGINT NOT NULL,
>     PK4 VARCHAR NOT NULL,
>     COL1 BIGINT,
>     COL2 INTEGER,
>     COL3 VARCHAR,
>     COL4 VARCHAR,    CONSTRAINT PK PRIMARY KEY
>     (
>         PK1,
>         PK2,
>         PK3,
>         PK4
>     )
> );UPSERT INTO DUMMY (PK1, PK4, COL1, PK2, COL2, PK3, COL3, COL4)
>             VALUES ('xx', 'xid1', 0, 7, 7, 7, 'INSERT', null);
>  {code}
> The non-RVC query returns no row, but the RVC query returns a wrong result.
> {code:java}
> 0: jdbc:phoenix:localhost> select PK2
> . . . . . . . . . . . . .> from DUMMY
> . . . . . . . . . . . . .> where PK1 ='xx'
> . . . . . . . . . . . . .> and (PK1 > 'xx' AND PK1 <= 'xx')
> . . . . . . . . . . . . .> and (PK2 > 5 AND PK2 <=5)
> . . . . . . . . . . . . .> and (PK3 > 2 AND PK3 <=2);
> +------------------------------------------+
> |                   PK2                    |
> +------------------------------------------+
> +------------------------------------------+
>  No rows selected (0.022 seconds)
> 0: jdbc:phoenix:localhost> select PK2
> . . . . . . . . . . . . .> from DUMMY
> . . . . . . . . . . . . .> where (PK1 = 'xx')
> . . . . . . . . . . . . .> and (PK1, PK2, PK3) > ('xx', 5, 2)
> . . . . . . . . . . . . .> and (PK1, PK2, PK3) <= ('xx', 5, 2);
> +------------------------------------------+
> |                   PK2                    |
> +------------------------------------------+
> | 7                                        |
> +------------------------------------------+
> 1 row selected (0.033 seconds) {code}
> {code:java}
> 0: jdbc:phoenix:localhost> EXPLAIN select PK2 from DUMMY where (PK1 = 'xx') and (PK1, PK2, PK3) > ('xx', 5, 2) and (PK1, PK2, PK3) <= ('xx', 5, 2);
> +-------------------------------------------------------------------------+-----------------+----------------+--+
> |                                   PLAN                                  | EST_BYTES_READ  | EST_ROWS_READ  |  |
> +-------------------------------------------------------------------------+-----------------+----------------+--+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER DUMMY ['xx']  | null            | null           |  |
> |     SERVER FILTER BY FIRST KEY ONLY                                     | null            | null           |  |
> +-------------------------------------------------------------------------+-----------------+----------------+--+
> 2 rows selected (0.024 seconds)
> 0: jdbc:phoenix:localhost> explain select PK2 from DUMMY where PK1 ='xx' and (PK1 > 'xx' AND PK1 <= 'xx') and (PK2 > 5 AND PK2 <=5) and (PK3 > 2 AND PK3 <=2);
> +-----------------------------+-----------------+----------------+--+
> |            PLAN             | EST_BYTES_READ  | EST_ROWS_READ  |  |
> +-----------------------------+-----------------+----------------+--+
> | DEGENERATE SCAN OVER DUMMY  | null            | null           |  |
> +-----------------------------+-----------------+----------------+--+
> 1 row selected (0.015 seconds){code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6682) Jenkins tests are failing for Java 11.0.14.1

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6682:
-
Fix Version/s: 5.2.0

> Jenkins tests are failing for Java 11.0.14.1
> 
>
> Key: PHOENIX-6682
> URL: https://issues.apache.org/jira/browse/PHOENIX-6682
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
>  Labels: ci, test
> Fix For: 5.2.0
>
>
> Jenkins tests are failing because the Jetty versions used by some Hadoop 
> versions cannot handle the fourth version component.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6774) Enable code coverage reporting to SonarQube in Phoenix

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6774?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6774:
-
Fix Version/s: 5.2.0

> Enable code coverage reporting to SonarQube in Phoenix
> --
>
> Key: PHOENIX-6774
> URL: https://issues.apache.org/jira/browse/PHOENIX-6774
> Project: Phoenix
>  Issue Type: Task
>Reporter: Dóra Horváth
>Assignee: Dóra Horváth
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 5.2.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6779) Account for connection attempted & failure metrics in all paths

2022-10-03 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6779:
-
Fix Version/s: 5.2.0

> Account for connection attempted & failure metrics in all paths
> ---
>
> Key: PHOENIX-6779
> URL: https://issues.apache.org/jira/browse/PHOENIX-6779
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.1.2
>Reporter: Daniel Wong
>Assignee: Daniel Wong
>Priority: Major
> Fix For: 5.2.0, 5.1.3
>
>
> PHOENIX-6564 added some additional connection metrics.  These need to be 
> moved up higher in the stack closer to phoenix driver.create path as well as 
> the attempted metric.  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-5140) TableNotFoundException occurs when we create local asynchronous index

2022-09-30 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5140:
-
Fix Version/s: (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)

> TableNotFoundException occurs when we create local asynchronous index
> -
>
> Key: PHOENIX-5140
> URL: https://issues.apache.org/jira/browse/PHOENIX-5140
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
> Environment: > HDP : 3.0.0.0, HBase : 2.0.0,phoenix : 5.0.0 and 
> hadoop : 3.1.0
>Reporter: MariaCarrie
>Assignee: dan zheng
>Priority: Major
>  Labels: IndexTool, localIndex, tableUndefined
> Attachments: PHOENIX-5140-master-v1.patch, 
> PHOENIX-5140-master-v2.patch
>
>   Original Estimate: 48h
>  Time Spent: 20m
>  Remaining Estimate: 47h 40m
>
> First I create the table and insert the data:
> ^create table DMP.DMP_INDEX_TEST2 (id varchar not null primary key,name 
> varchar,age varchar);^
> ^upsert into DMP.DMP_INDEX_TEST2 values('id01','name01','age01');^
> The asynchronous index is then created:
> ^create local index if not exists TMP_INDEX_DMP_TEST2 on DMP.DMP_INDEX_TEST2 
> (name) ASYNC;^
> Because Kerberos is enabled, I need to kinit the HBase principal first, then 
> execute the following command:
> ^HADOOP_CLASSPATH="/etc/hbase/conf" hadoop jar 
> /usr/hdp/3.0.0.0-1634/phoenix/phoenix-client.jar 
> org.apache.phoenix.mapreduce.index.IndexTool --schema DMP --data-table 
> DMP_INDEX_TEST2 --index-table TMP_INDEX_DMP_TEST2 --output-path 
> /hbase-backup2^
> But I got the following error:
> ^Error: java.lang.RuntimeException: 
> org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table 
> undefined. tableName=DMP.DMP_INDEX_TEST2^
> ^at 
> org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:124)^
> ^at 
> org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:50)^
> ^at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)^
> ^at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)^
> ^at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)^
> ^at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:174)^
> ^at java.security.AccessController.doPrivileged(Native Method)^
> ^at javax.security.auth.Subject.doAs(Subject.java:422)^
> ^at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1688)^
> ^at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:168)^
> ^Caused by: org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 
> (42M03): Table undefined. tableName=DMP.DMP_INDEX_TEST2^
> ^at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTableRegionLocation(ConnectionQueryServicesImpl.java:4544)^
> ^at 
> org.apache.phoenix.query.DelegateConnectionQueryServices.getTableRegionLocation(DelegateConnectionQueryServices.java:312)^
> ^at 
> org.apache.phoenix.compile.UpsertCompiler.setValues(UpsertCompiler.java:163)^
> ^at 
> org.apache.phoenix.compile.UpsertCompiler.access$500(UpsertCompiler.java:118)^
> ^at 
> org.apache.phoenix.compile.UpsertCompiler$UpsertValuesMutationPlan.execute(UpsertCompiler.java:1202)^
> ^at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)^
> ^at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)^
> ^at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)^
> ^at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)^
> ^at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)^
> ^at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:173)^
> ^at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:183)^
> ^at 
> org.apache.phoenix.mapreduce.index.PhoenixIndexImportMapper.map(PhoenixIndexImportMapper.java:103)^
> ^... 9 more^
> I can query this table and have access to it; it works well:
> ^select * from DMP.DMP_INDEX_TEST2;^
> ^select * from DMP.TMP_INDEX_DMP_TEST2;^
> ^drop table DMP.DMP_INDEX_TEST2;^
> But why did my MR task hit this error? Any suggestions?



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-6751) Force using range scan vs skip scan when using the IN operator and large number of RVC elements

2022-09-29 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6751.
--
Release Note: Adds a new config parameter, 
phoenix.max.inList.skipScan.size, which controls the size of an IN clause 
before it will be automatically converted from a skip scan to a range scan. 
  Resolution: Fixed

> Force using range scan vs skip scan when using the IN operator and large 
> number of RVC elements 
> 
>
> Key: PHOENIX-6751
> URL: https://issues.apache.org/jira/browse/PHOENIX-6751
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.1, 4.16.0, 5.2.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Critical
> Fix For: 5.2.0, 5.1.3
>
>
> SQL queries using the IN operator on PKs of different SortOrder were 
> failing during the WHERE clause compilation phase and causing OOM issues on 
> the servers when a large number (~50k) of RVC elements were used in the IN 
> operator.
> SQL queries were failing specifically during the skip scan filter generation. 
> The skip scan filter is generated using a list of point key ranges. 
> [ScanRanges.create|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L80]
> The following getPointKeys 
> [code|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L541]
>  uses the KeyRange sets to create a new list of point-keys. When there are a 
> large number of RVC elements the above
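
For anyone applying the knob called out in the release note above, a minimal sketch of setting it client-side (the threshold value and connection URL are illustrative, not recommendations):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class InListSkipScanConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // IN clauses larger than this are planned as range scans instead of
        // skip scans; the value here is illustrative only.
        props.setProperty("phoenix.max.inList.skipScan.size", "200");
        try (Connection conn =
                DriverManager.getConnection("jdbc:phoenix:localhost", props)) {
            // run queries with the tuned planner behavior
        }
    }
}
{code}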



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-6749) Replace deprecated HBase 1.x API calls

2022-09-21 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6749.
--
Resolution: Fixed

Merged into the master branch. Thanks for this contribution, [~ameszaros]!

> Replace deprecated HBase 1.x API calls
> --
>
> Key: PHOENIX-6749
> URL: https://issues.apache.org/jira/browse/PHOENIX-6749
> Project: Phoenix
>  Issue Type: Improvement
>  Components: connectors, core, queryserver
>Reporter: Istvan Toth
>Assignee: Aron Attila Meszaros
>Priority: Major
> Fix For: 5.2.0
>
>
> Now that we no longer care about HBase 1.x compatibility, we should replace 
> the deprecated HBase 1.x API calls with HBase 2 API calls.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6791) WHERE optimizer redesign

2022-09-20 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6791:
-
Fix Version/s: 5.3.0

> WHERE optimizer redesign
> 
>
> Key: PHOENIX-6791
> URL: https://issues.apache.org/jira/browse/PHOENIX-6791
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Kadir Ozdemir
>Priority: Major
> Fix For: 5.3.0
>
>
> The WHERE optimizer in Phoenix derives the information about which row key 
> ranges are to be scanned from the primary key (PK) column expressions in a where 
> clause. These key ranges are then used to determine the table regions to scan 
> and generate a SkipScanFilter for each of these scans if applicable. 
> The WHERE expression may include non-PK column (sub) expressions. After 
> identifying the key ranges, the WHERE optimizer removes the nodes for PK 
> columns from the expression tree if these nodes are fully used to determine 
> the key ranges.
> Since the values in the WHERE expression are expressed by byte arrays, the 
> key ranges are also expressed using byte arrays. KeyRange represents a range 
> for a row key or any sub part of a row key. A key range is composed of 
> two pairs, one for each end of the range, lower and upper. The pair is formed 
> from a byte array and a boolean value. The boolean value indicates if the end 
> of the range specified by the byte array is inclusive or not. If the byte 
> array is empty, it means that the corresponding end of the range is 
> unbounded. 
> KeySlot represents a key part and the list of key ranges for this key part 
> where a key part can be any sub part of a PK, including leading, trailing, or 
> middle part of the key. The number of columns in a key part is called span. 
> For the terminal nodes (i.e., constant values) in the expression tree, 
> KeySlot objects are created with a single key range. When KeySlot objects are 
> rolled up in the expression tree, they can have multiple ranges. For example, 
> a KeySlot object representing an IN expression will have a separate range for 
> each member of the IN expression. Similarly the KeySlot object for an OR 
> expression can have multiple ranges similarly. Please note an IN operator can 
> be replaced by an equivalent OR expression. 
> When the WHERE optimizer visits the nodes of the expression tree, it 
> generates a KeySlots object. KeySlots is essentially a list of KeySlot 
> objects (please note the difference between KeySlots vs KeySlot). There are 
> two types of KeySlots: SingleKeySlot and MultiKeySlot. SingleKeySlot 
> represents a single key slot, whereas MultiKeySlot is a list of key slots 
> resulting from an AND expression over SingleKeySlot or MultiKeySlot objects. 
> The key slots are rolled into a MultiKeySlot object when processing an AND 
> expression. The AND operation on two key slots starting their spans with the 
> same PK columns is equivalent to taking intersection of their ranges. The OR 
> operation implementation is limited and rather simple compared to the AND 
> operation. The OR operation attempts to coalesce key slots if all of the key 
> slots have the same starting PK column. If not, it generates a null KeySlots. 
> When an expression node is used fully in generating a key slot, this 
> expression node is removed from the expression tree.
> A row key for a given table can be composed of several PK columns. Without 
> any restrictions imposed by predefined rules, intersection of key slots can 
> lead to a large number of key slots, i.e., key ranges.  For example, consider 
> a row key composed of three integer columns, PK1, PK2, and PK3, and the 
> expression (PK1,  PK2) > (100, 25) AND PK3 = 5. The result would be a very 
> large number of key slots and each key slot represents a point in the three 
> dimensional space, including (100, 26, 5), (100, 27, 5), …, (100, 2147483647, 
> 5), (101, 1, 5), (101, 2, 5), … .
> A simple expression (like the one given above) with a relatively small number 
> of PK columns and a simple data type, e.g., integer, is sufficient to show 
> that finding key ranges for an arbitrary expression is an intractable 
> problem. Attempting to optimize the queries by enumerating the key ranges can 
> lead to excessive memory allocation and long computation times and the 
> optimization can defeat its purpose. 
> The current implementation attempts to enumerate all possible key ranges in 
> general. Because of this, the WHERE optimizer has caused out of memory 
> issues and query timeouts due to high CPU usage. The very recent bug fixes 
> attempt to catch these cases and prevent them. However, these fixes do not 
> attempt to cover all cases and are formulated based on known cases.
> In addition to inefficient resource utilization, there are known types of 
> expressions, the current 
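
To make the KeyRange description above concrete, here is a small illustrative sketch of the AND-as-intersection step, using simplified hypothetical types (Phoenix's real KeyRange API differs in detail, and real row-key comparison is unsigned lexicographic):

{code:java}
import java.util.Arrays;

// Simplified stand-in for the KeyRange idea described above: a byte[] bound
// plus an inclusive flag per end, with an empty array meaning "unbounded".
final class SimpleRange {
    final byte[] lower, upper;
    final boolean lowerInclusive, upperInclusive;

    SimpleRange(byte[] lower, boolean lowerInclusive,
                byte[] upper, boolean upperInclusive) {
        this.lower = lower; this.lowerInclusive = lowerInclusive;
        this.upper = upper; this.upperInclusive = upperInclusive;
    }

    // ANDing two predicates on the same key part intersects their ranges:
    // take the larger lower bound and the smaller upper bound.
    SimpleRange intersect(SimpleRange o) {
        byte[] lo; boolean loInc;
        if (lower.length == 0) { lo = o.lower; loInc = o.lowerInclusive; }
        else if (o.lower.length == 0) { lo = lower; loInc = lowerInclusive; }
        else {
            int c = Arrays.compare(lower, o.lower); // illustrative; HBase compares unsigned
            lo = c >= 0 ? lower : o.lower;
            loInc = c == 0 ? (lowerInclusive && o.lowerInclusive)
                           : (c > 0 ? lowerInclusive : o.lowerInclusive);
        }
        byte[] hi; boolean hiInc;
        if (upper.length == 0) { hi = o.upper; hiInc = o.upperInclusive; }
        else if (o.upper.length == 0) { hi = upper; hiInc = upperInclusive; }
        else {
            int c = Arrays.compare(upper, o.upper);
            hi = c <= 0 ? upper : o.upper;
            hiInc = c == 0 ? (upperInclusive && o.upperInclusive)
                           : (c < 0 ? upperInclusive : o.upperInclusive);
        }
        // A real implementation must also detect an empty intersection here
        // (lower bound above upper bound), which surfaces as a degenerate scan.
        return new SimpleRange(lo, loInc, hi, hiInc);
    }
}
{code}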

[jira] [Assigned] (PHOENIX-6785) Sequence Performance Optimizations

2022-09-13 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby reassigned PHOENIX-6785:


Assignee: Andrew Kyle Purtell

> Sequence Performance Optimizations
> --
>
> Key: PHOENIX-6785
> URL: https://issues.apache.org/jira/browse/PHOENIX-6785
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Geoffrey Jacoby
>Assignee: Andrew Kyle Purtell
>Priority: Major
> Fix For: 5.3.0
>
> Attachments: Sequence Architecture and Perf Improvements.pdf
>
>
> We've encountered scaling issues with Phoenix sequences in our production 
> environment, particularly with heavy usage of the same sequence causing 
> hotspotting on the physical SYSTEM.SEQUENCE table. 
> After some informal discussions on this with [~kadir], [~jisaac] and 
> [~tkhurana], I wrote up some thoughts on improvements that could be made to 
> sequences in a future Phoenix release. I'll attach it to this JIRA.
> As there are several proposed improvements, this will be an umbrella JIRA to 
> hold several subtasks. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6715) Update Omid to 1.1.0

2022-09-13 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6715:
-
Fix Version/s: 5.2.0

> Update Omid to 1.1.0
> 
>
> Key: PHOENIX-6715
> URL: https://issues.apache.org/jira/browse/PHOENIX-6715
> Project: Phoenix
>  Issue Type: Task
>  Components: core
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: 5.2.0
>
>
> We should release Omid 1.1.0 , and update Phoenix to it for 5.2



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6749) Replace deprecated HBase 1.x API calls

2022-09-13 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6749:
-
Fix Version/s: 5.2.0

> Replace deprecated HBase 1.x API calls
> --
>
> Key: PHOENIX-6749
> URL: https://issues.apache.org/jira/browse/PHOENIX-6749
> Project: Phoenix
>  Issue Type: Improvement
>  Components: connectors, core, queryserver
>Reporter: Istvan Toth
>Assignee: Aron Attila Meszaros
>Priority: Major
> Fix For: 5.2.0
>
>
> Now that we no longer care about Hbase 1.x compatibility, we should replace 
> the deprecated Hbase 1.x API calls with HBase 2 API calls.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-6788) Client-side Sequence Update Consolidation

2022-09-08 Thread Geoffrey Jacoby (Jira)
Geoffrey Jacoby created PHOENIX-6788:


 Summary: Client-side Sequence Update Consolidation
 Key: PHOENIX-6788
 URL: https://issues.apache.org/jira/browse/PHOENIX-6788
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Geoffrey Jacoby
 Fix For: 5.3.0


This is similar to the proposed PHOENIX-6787, but for the client side. If two 
requests for the same sequence are enqueued at a client, the client can 
consolidate them into one larger request, and then satisfy both from the 
combined range returned for that single request. 

Because this optimization can change the order in which operations are assigned 
sequence ids, it should be configurable with a feature flag.

As with PHOENIX-6787, if the consolidation of requests would result in a 
validation error (like an overflow or underflow) that wouldn't happen to some 
requests if issued separately, we should not consolidate. If an overflow or 
underflow validation error comes from the server-side, we should retry without 
consolidating. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-6787) Server-side Sequence Update Consolidation

2022-09-08 Thread Geoffrey Jacoby (Jira)
Geoffrey Jacoby created PHOENIX-6787:


 Summary: Server-side Sequence Update Consolidation
 Key: PHOENIX-6787
 URL: https://issues.apache.org/jira/browse/PHOENIX-6787
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Geoffrey Jacoby
 Fix For: 5.3.0


For secondary indexes, we have optimizations so that if multiple mutations are 
waiting on the same row lock, all subsequent mutations can re-use the previous 
mutation's final state and avoid an extra Get. 

We can apply a similar idea to Phoenix sequences. If there's a "hot" sequence 
with multiple requests queueing for a Sequence row lock, we can consolidate 
them down to one set of Get / Put operations, then satisfy them all. This 
change is transparent to the clients. 

Note that if this consolidation would cause the sequence update to fail when 
some of the requests would have succeeded otherwise, we should not consolidate. 
(An example is if a sequence has cycling disabled, and the first request would 
not overflow, but the first and second combined would. In this case we should 
let the first request go through unconsolidated, and fail the second request.) 
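
A minimal sketch of the consolidation idea, using hypothetical types (hand-written illustration, not Phoenix's actual sequence code):

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.function.LongConsumer;

// Illustrative consolidation under the sequence row lock: drain as many
// waiting requests as fit, do one read-modify-write, hand out sub-ranges.
final class SequenceConsolidator {
    interface Waiter {
        long slots();                // how many sequence values are requested
        void complete(long first);   // first value of the granted sub-range
    }

    static void consolidate(Queue<Waiter> waiters, long nextValue,
                            long maxValue, LongConsumer persist) {
        List<Waiter> batch = new ArrayList<>();
        long cursor = nextValue;
        while (!waiters.isEmpty()) {
            Waiter w = waiters.peek();
            // Per the note above: stop consolidating if this request would
            // push a non-cycling sequence past its max; it must run alone.
            if (cursor + w.slots() - 1 > maxValue) break;
            cursor += w.slots();
            batch.add(waiters.poll());
        }
        persist.accept(cursor);      // one Get/Put pair for the whole batch
        long start = nextValue;
        for (Waiter w : batch) {     // each waiter gets a disjoint sub-range
            w.complete(start);
            start += w.slots();
        }
    }
}
{code}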



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-6786) SequenceRegionObserver should use batch mutation coproc hooks

2022-09-08 Thread Geoffrey Jacoby (Jira)
Geoffrey Jacoby created PHOENIX-6786:


 Summary: SequenceRegionObserver should use batch mutation coproc 
hooks
 Key: PHOENIX-6786
 URL: https://issues.apache.org/jira/browse/PHOENIX-6786
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Geoffrey Jacoby


SequenceRegionObserver uses preIncrement but could use the standard batch 
mutation coproc hooks, similarly to how atomic upserts work after PHOENIX-6387. 
This will simplify the code and also make it easier to re-use code from 
secondary index generation in performance optimizations. 
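
The shape of the proposed hook move, sketched by hand against the HBase 2.x RegionObserver API (class name and logic here are illustrative, not the eventual patch):

{code:java}
import java.io.IOException;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.coprocessor.RegionObserver;
import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;

// Illustrative: handle sequence updates in the batch-mutation hook instead
// of preIncrement(), mirroring how atomic upserts work after PHOENIX-6387.
public class BatchSequenceObserver implements RegionObserver {
    @Override
    public void preBatchMutate(ObserverContext<RegionCoprocessorEnvironment> ctx,
                               MiniBatchOperationInProgress<Mutation> miniBatch)
            throws IOException {
        for (int i = 0; i < miniBatch.size(); i++) {
            Mutation m = miniBatch.getOperation(i);
            // Apply the sequence arithmetic to each queued mutation here and
            // let HBase apply the rewritten batch atomically.
        }
    }
}
{code}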



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6785) Sequence Performance Optimizations

2022-09-08 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6785:
-
Attachment: Sequence Architecture and Perf Improvements.pdf

> Sequence Performance Optimizations
> --
>
> Key: PHOENIX-6785
> URL: https://issues.apache.org/jira/browse/PHOENIX-6785
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Geoffrey Jacoby
>Priority: Major
> Fix For: 5.3.0
>
> Attachments: Sequence Architecture and Perf Improvements.pdf
>
>
> We've encountered scaling issues with Phoenix sequences in our production 
> environment, particularly with heavy usage of the same sequence causing 
> hotspotting on the physical SYSTEM.SEQUENCE table. 
> After some informal discussions on this with [~kadir], [~jisaac] and 
> [~tkhurana], I wrote up some thoughts on improvements that could be made to 
> sequences in a future Phoenix release. I'll attach it to this JIRA.
> As there are several proposed improvements, this will be an umbrella JIRA to 
> hold several subtasks. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (PHOENIX-6785) Sequence Performance Optimizations

2022-09-08 Thread Geoffrey Jacoby (Jira)
Geoffrey Jacoby created PHOENIX-6785:


 Summary: Sequence Performance Optimizations
 Key: PHOENIX-6785
 URL: https://issues.apache.org/jira/browse/PHOENIX-6785
 Project: Phoenix
  Issue Type: Improvement
Reporter: Geoffrey Jacoby
 Fix For: 5.3.0


We've encountered scaling issues with Phoenix sequences in our production 
environment, particularly with heavy usage of the same sequence causing 
hotspotting on the physical SYSTEM.SEQUENCE table. 

After some informal discussions on this with [~kadir], [~jisaac] and 
[~tkhurana], I wrote up some thoughts on improvements that could be made to 
sequences in a future Phoenix release. I'll attach it to this JIRA.

As there are several proposed improvements, this will be an umbrella JIRA to 
hold several subtasks. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-6740) Upgrade default supported Hadoop 3 version to 3.2.3 for HBase 2.5 profile

2022-08-29 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6740.
--
Resolution: Duplicate

This is incorporated in PHOENIX-6692. 

> Upgrade default supported Hadoop 3 version to 3.2.3 for HBase 2.5 profile
> -
>
> Key: PHOENIX-6740
> URL: https://issues.apache.org/jira/browse/PHOENIX-6740
> Project: Phoenix
>  Issue Type: Task
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Fix For: 5.2.0
>
>
> HBase is upgrading the minimum supported Hadoop to 3.2.3 for HBase 2.5, and 
> we have a similar request from dependabot. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6752) Duplicate expression nodes in extract nodes during WHERE compilation phase lead to poor performance.

2022-08-29 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6752:
-
Priority: Critical  (was: Major)

> Duplicate expression nodes in extract nodes during WHERE compilation phase 
> lead to poor performance.
> -
>
> Key: PHOENIX-6752
> URL: https://issues.apache.org/jira/browse/PHOENIX-6752
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.0, 4.16.1, 5.2.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Critical
> Fix For: 5.2.0
>
> Attachments: test-case.txt
>
>
> SQL queries using the OR operator were taking a long time during the WHERE 
> clause compilation phase when a large number of OR clauses (~50k) are used.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6752) Duplicate expression nodes in extract nodes during WHERE compilation phase lead to poor performance.

2022-08-29 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6752:
-
Fix Version/s: 5.2.0

> Duplicate expression nodes in extract nodes during WHERE compilation phase 
> lead to poor performance.
> -
>
> Key: PHOENIX-6752
> URL: https://issues.apache.org/jira/browse/PHOENIX-6752
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.0, 4.16.1, 5.2.0
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Fix For: 5.2.0
>
> Attachments: test-case.txt
>
>
> SQL queries using the OR operator were taking a long time during the WHERE 
> clause compilation phase when a large number of OR clauses (~50k) are used.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-5215) Remove and replace HTrace

2022-08-29 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5215:
-
Fix Version/s: 5.3.0
   (was: 5.2.0)
   (was: 5.1.3)

> Remove and replace HTrace
> -
>
> Key: PHOENIX-5215
> URL: https://issues.apache.org/jira/browse/PHOENIX-5215
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Andrew Kyle Purtell
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Fix For: 5.3.0
>
>
> HTrace is dead.
> Hadoop is discussing a replacement of HTrace with OpenTracing, see 
> HADOOP-15566 
> HBase is having the same discussion on HBASE-22120



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-6687) The region server hosting the SYSTEM.CATALOG fails to serve any metadata requests as default handler pool threads are exhausted.

2022-08-29 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6687.
--
Fix Version/s: (was: 4.17.0)
   Resolution: Fixed

Resolving as this was merged to master a couple of months ago. 

> The region server hosting the SYSTEM.CATALOG fails to serve any metadata 
> requests as default handler pool  threads are exhausted.
> -
>
> Key: PHOENIX-6687
> URL: https://issues.apache.org/jira/browse/PHOENIX-6687
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.0, 5.1.1, 4.16.1, 5.2.0, 5.1.2
>Reporter: Jacob Isaac
>Assignee: Jacob Isaac
>Priority: Major
> Fix For: 5.2.0
>
> Attachments: stacktraces.txt
>
>
> When the SYSTEM.CATALOG region server is restarted while the server is 
> experiencing heavy metadata call volume, the stack traces indicate that all 
> the default handler pool threads are waiting for the CQSI.init thread to 
> finish initializing.
> The CQSI.init thread itself cannot proceed since it cannot complete the 
> second RPC call 
> (org.apache.phoenix.query.ConnectionQueryServicesImpl.checkClientServerCompatibility)
>  due to thread starvation.
> For example, the following 
> [code|https://github.com/apache/phoenix/blob/3cff97087d79b85e282fca4ac69ddf499fb1f40f/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L661]
>  made getTable(..) require an additional server-to-server RPC call 
> when initializing a PhoenixConnection (CQSI.init) for the first time on the 
> JVM. 
> It is well-known that server-to-server RPC calls are prone to deadlocking due 
> to thread pool exhaustion.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6751) Force using range scan vs skip scan when using the IN operator and large number of RVC elements

2022-07-20 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6751:
-
Fix Version/s: 5.2.0

> Force using range scan vs skip scan when using the IN operator and large 
> number of RVC elements 
> 
>
> Key: PHOENIX-6751
> URL: https://issues.apache.org/jira/browse/PHOENIX-6751
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.1, 4.16.0, 5.2.0
>Reporter: Jacob Isaac
>Priority: Critical
> Fix For: 5.2.0
>
>
> SQL queries using the IN operator on PKs of different SortOrder were 
> failing during the WHERE clause compilation phase and causing OOM issues on 
> the servers when a large number (~50k) of RVC elements were used in the IN 
> operator.
> SQL queries were failing specifically during the skip scan filter generation. 
> The skip scan filter is generated using a list of point key ranges. 
> [ScanRanges.create|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L80]
> The following getPointKeys 
> [code|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L541]
>  uses the KeyRange sets to create a new list of point-keys. When there are a 
> large number of RVC elements the above



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-6751) Force using range scan vs skip scan when using the IN operator and large number of RVC elements

2022-07-20 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6751:
-
Priority: Critical  (was: Major)

> Force using range scan vs skip scan when using the IN operator and large 
> number of RVC elements 
> 
>
> Key: PHOENIX-6751
> URL: https://issues.apache.org/jira/browse/PHOENIX-6751
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.1, 4.16.0, 5.2.0
>Reporter: Jacob Isaac
>Priority: Critical
>
> SQL queries using the IN operator on PKs of different SortOrder were 
> failing during the WHERE clause compilation phase and causing OOM issues on 
> the servers when a large number (~50k) of RVC elements were used in the IN 
> operator.
> SQL queries were failing specifically during the skip scan filter generation. 
> The skip scan filter is generated using a list of point key ranges. 
> [ScanRanges.create|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L80]
> The following getPointKeys 
> [code|https://git.soma.salesforce.com/bigdata-packaging/phoenix/blob/e0737e0ea7ba7501e78fe23c16e7abca27bfd944/phoenix-core/src/main/java/org/apache/phoenix/compile/ScanRanges.java#L541]
>  uses the KeyRange sets to create a new list of point-keys. When there are a 
> large number of RVC elements the above



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-6733) Ref count leaked test failures

2022-07-19 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6733.
--
Fix Version/s: 5.1.3
   Resolution: Fixed

> Ref count leaked test failures
> --
>
> Key: PHOENIX-6733
> URL: https://issues.apache.org/jira/browse/PHOENIX-6733
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.2.0
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Blocker
> Fix For: 5.2.0, 5.1.3
>
>
> In pretty much every recent Yetus test run, some tests have flapped in the 
> AfterClass teardown logic which tries to check for HBase Store reference 
> resource leaks. The error message is "Ref count leaked", and some common 
> suites this happens to are:
> DateTimeIT
> InListIT
> SequenceIT
> IndexToolForDeleteBeforeRebuildIT
> SpooledTmpFileDeleteIT
> I haven't had much luck trying to reproduce this locally. It's also not clear 
> yet whether the root cause is an HBase error or a Phoenix one. (And if it's a 
> Phoenix one, is the bug with something in Phoenix or with the resource 
> check?) 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Reopened] (PHOENIX-6733) Ref count leaked test failures

2022-07-19 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby reopened PHOENIX-6733:
--

Reopening since this can't be cherry-picked back to 5.1 trivially (because the 
functionality moved from CompatUtil to BaseTest in 5.2 after we dropped support 
for HBase 2.1 and 2.2).

I'll create a new PR for 5.1 so it can get another test run. 

> Ref count leaked test failures
> --
>
> Key: PHOENIX-6733
> URL: https://issues.apache.org/jira/browse/PHOENIX-6733
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.2.0
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Blocker
> Fix For: 5.2.0
>
>
> In pretty much every recent Yetus test run, some tests have flapped in the 
> AfterClass teardown logic which tries to check for HBase Store reference 
> resource leaks. The error message is "Ref count leaked", and some common 
> suites this happens to are:
> DateTimeIT
> InListIT
> SequenceIT
> IndexToolForDeleteBeforeRebuildIT
> SpooledTmpFileDeleteIT
> I haven't had much luck trying to reproduce this locally. It's also not clear 
> yet whether the root cause is an HBase error or a Phoenix one. (And if it's a 
> Phoenix one, is the bug with something in Phoenix or with the resource 
> check?) 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (PHOENIX-6733) Ref count leaked test failures

2022-07-19 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6733.
--
Resolution: Fixed

Tests are passing now; merged to master. 

> Ref count leaked test failures
> --
>
> Key: PHOENIX-6733
> URL: https://issues.apache.org/jira/browse/PHOENIX-6733
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.2.0
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Blocker
> Fix For: 5.2.0
>
>
> In pretty much every recent Yetus test run, some tests have flapped in the 
> AfterClass teardown logic which tries to check for HBase Store reference 
> resource leaks. The error message is "Ref count leaked", and some common 
> suites this happens to are:
> DateTimeIT
> InListIT
> SequenceIT
> IndexToolForDeleteBeforeRebuildIT
> SpooledTmpFileDeleteIT
> I haven't had much luck trying to reproduce this locally. It's also not clear 
> yet whether the root cause is an HBase error or a Phoenix one. (And if it's a 
> Phoenix one, is the bug with something in Phoenix or with the resource 
> check?) 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-6733) Ref count leaked test failures

2022-07-19 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby reassigned PHOENIX-6733:


Assignee: Geoffrey Jacoby

> Ref count leaked test failures
> --
>
> Key: PHOENIX-6733
> URL: https://issues.apache.org/jira/browse/PHOENIX-6733
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.2.0
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Blocker
> Fix For: 5.2.0
>
>
> In pretty much every recent Yetus test run, some tests have flapped in the 
> AfterClass teardown logic which tries to check for HBase Store reference 
> resource leaks. The error message is "Ref count leaked", and some common 
> suites this happens to are:
> DateTimeIT
> InListIT
> SequenceIT
> IndexToolForDeleteBeforeRebuildIT
> SpooledTmpFileDeleteIT
> I haven't had much luck trying to reproduce this locally. It's also not clear 
> yet whether the root cause is an HBase error or a Phoenix one. (And if it's a 
> Phoenix one, is the bug with something in Phoenix or with the resource 
> check?) 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (PHOENIX-5404) Move check to client side to see if there are any child views that need to be dropped while recreating a table/view

2022-07-05 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5404:
-
Fix Version/s: 5.3.0
   (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)

> Move check to client side to see if there are any child views that need to be 
> dropped while recreating a table/view
> --
>
> Key: PHOENIX-5404
> URL: https://issues.apache.org/jira/browse/PHOENIX-5404
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Thomas D'Silva
>Priority: Major
> Fix For: 5.3.0
>
>
> Remove {{ViewUtil.dropChildViews(env, tenantIdBytes, schemaName, 
> tableName);}} call in MetdataEndpointImpl.createTable
> While creating a table or view, we need to ensure that there are not any child 
> views that haven't been cleaned up by the DropChildView task yet. Move this 
> check to 
> the client (issue a scan against SYSTEM.CHILD_LINK to see if a single linking 
> row exists).
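
A hedged sketch of the client-side existence check described above (the LINK_TYPE literal and the names are illustrative; the real patch should match the actual SYSTEM.CHILD_LINK schema):

{code:java}
-- Placeholder schema/table names; stop at the first linking row found.
SELECT 1
FROM SYSTEM.CHILD_LINK
WHERE TABLE_SCHEM = 'MY_SCHEMA'
  AND TABLE_NAME = 'MY_TABLE'
  AND LINK_TYPE = 4   -- illustrative child-link type value
LIMIT 1;
{code}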



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (PHOENIX-5686) MetaDataUtil#isLocalIndex returns incorrect results

2022-06-23 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby reassigned PHOENIX-5686:


Assignee: Geoffrey Jacoby

> MetaDataUtil#isLocalIndex returns incorrect results
> ---
>
> Key: PHOENIX-5686
> URL: https://issues.apache.org/jira/browse/PHOENIX-5686
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Swaroopa Kadam
>Assignee: Geoffrey Jacoby
>Priority: Minor
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> The isLocalIndex function in MetaDataUtil uses 
> "_LOCAL_IDX_" to check if the index is a local index. It would be good to 
> modify the method to use the correct logic (getting rid of the old and unused 
> code) and to use the method call wherever needed. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-5066) The TimeZone is incorrectly used during writing or reading data

2022-06-23 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5066:
-
Fix Version/s: 5.3.0
   (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)

> The TimeZone is incorrectly used during writing or reading data
> ---
>
> Key: PHOENIX-5066
> URL: https://issues.apache.org/jira/browse/PHOENIX-5066
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Jaanai Zhang
>Assignee: Istvan Toth
>Priority: Critical
> Fix For: 5.3.0
>
> Attachments: DateTest.java, PHOENIX-5066.4x.v1.patch, 
> PHOENIX-5066.4x.v2.patch, PHOENIX-5066.4x.v3.patch, 
> PHOENIX-5066.master.v1.patch, PHOENIX-5066.master.v2.patch, 
> PHOENIX-5066.master.v3.patch, PHOENIX-5066.master.v4.patch, 
> PHOENIX-5066.master.v5.patch, PHOENIX-5066.master.v6.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> We have two methods to write data when using the JDBC API.
> #1. Use the _executeUpdate_ method to execute a string that is an upsert SQL.
> #2. Use the _prepareStatement_ method to set some objects and execute.
> The _string_ data needs to be converted to a new object based on the schema 
> information of the tables. We use some date formatters to convert string data 
> to objects for Date/Time/Timestamp types when writing data, and the same 
> formatters are used when reading data as well.
>  
> *Uses default timezone test*
>  Writing 3 records by the different ways.
> {code:java}
> UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
> 15:40:47','2018-12-10 15:40:47') 
> UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
> 15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
> stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
> time);stmt.setTimestamp(4, ts);
> {code}
> Reading the table by the getObject(getDate/getTime/getTimestamp) methods.
> {code:java}
> 1 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
> 2 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
> 3 | 2018-12-10 | 15:45:07 | 2018-12-10 15:45:07.66 
> {code}
> Reading the table by the getString methods 
> {code:java}
> 1 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 
> 15:45:07.000 
> 2 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 
> 15:45:07.000 
> 3 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660 | 2018-12-10 
> 07:45:07.660
> {code}
>  *Uses GMT+8 test*
>  Writing 3 records by the different ways.
> {code:java}
> UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
> 15:40:47','2018-12-10 15:40:47')
> UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
> 15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
> stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
> time);stmt.setTimestamp(4, ts);
> {code}
> Reading the table by the getObject(getDate/getTime/getTimestamp) methods.
> {code:java}
> 1 | 2018-12-10 | 23:40:47 | 2018-12-10 23:40:47.0 
> 2 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.0 
> 3 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.106 {code}
> Reading the table by the getString methods
> {code:java}
>  1 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000 | 2018-12-10 
> 23:40:47.000
> 2 | 2018-12-10 15:40:47.000 | 2018-12-10 15:40:47.000 | 2018-12-10 
> 15:40:47.000
> 3 | 2018-12-10 15:40:47.106 | 2018-12-10 15:40:47.106 | 2018-12-10 
> 15:40:47.106
> {code}
>  
> _We_ have a historical problem: in #1 we parse the string to Date/Time/Timestamp 
> objects with the timezone applied, which means the actual data is changed when 
> stored in the HBase table.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-5283) Add CASCADE INDEX ALL in the SQL Grammar of ALTER TABLE ADD

2022-06-23 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5283:
-
Fix Version/s: (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)

> Add CASCADE INDEX ALL in the SQL Grammar of ALTER TABLE ADD 
> 
>
> Key: PHOENIX-5283
> URL: https://issues.apache.org/jira/browse/PHOENIX-5283
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Major
> Attachments: PHOENIX-5283.4.x-hbase-1.3.v1.patch
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Include following support in the grammar. 
> ALTER TABLE ADD CASCADE <(comma separated list of indexes) | ALL > IF NOT 
> EXISTS  
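
One possible reading of the grammar sketched above, for illustration only (names are hypothetical and the final syntax may differ):

{code:java}
-- cascade the new column to all indexes on the table
ALTER TABLE my_table ADD IF NOT EXISTS new_col VARCHAR CASCADE INDEX ALL;
-- cascade only to an explicit list of indexes
ALTER TABLE my_table ADD IF NOT EXISTS new_col VARCHAR CASCADE INDEX my_idx1, my_idx2;
{code}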



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-4846) WhereOptimizer.pushKeyExpressionsToScan() does not work correctly if the sort order of pk columns being filtered on changes

2022-06-23 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-4846:
-
Fix Version/s: 5.2.1
   (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)

> WhereOptimizer.pushKeyExpressionsToScan() does not work correctly if the sort 
> order of pk columns being filtered on changes
> ---
>
> Key: PHOENIX-4846
> URL: https://issues.apache.org/jira/browse/PHOENIX-4846
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Thomas D'Silva
>Priority: Critical
> Fix For: 5.2.1
>
> Attachments: PHOENIX-4846-wip.patch
>
>
> {{ExpressionComparabilityWrapper}} should set the sort order based on 
> {{childPart.getColumn()}} or else the attached test throws an 
> IllegalArgumentException
> {code}
> java.lang.IllegalArgumentException: 4 > 3
> at java.util.Arrays.copyOfRange(Arrays.java:3519)
> at 
> org.apache.hadoop.hbase.io.ImmutableBytesWritable.copyBytes(ImmutableBytesWritable.java:272)
> at 
> org.apache.phoenix.compile.WhereOptimizer.getTrailingRange(WhereOptimizer.java:329)
> at 
> org.apache.phoenix.compile.WhereOptimizer.clipRight(WhereOptimizer.java:350)
> at 
> org.apache.phoenix.compile.WhereOptimizer.pushKeyExpressionsToScan(WhereOptimizer.java:237)
> at org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:157)
> at org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:108)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:556)
> {code}
> Also in {{pushKeyExpressionsToScan()}} we cannot extract pk column nodes from 
> the where clause if the sort order of the columns changes. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-5258) Add support to parse header from the input CSV file as input columns for CsvBulkLoadTool

2022-06-23 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5258:
-
Fix Version/s: 5.3.0
   (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)

> Add support to parse header from the input CSV file as input columns for 
> CsvBulkLoadTool
> 
>
> Key: PHOENIX-5258
> URL: https://issues.apache.org/jira/browse/PHOENIX-5258
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Prashant Vithani
>Assignee: Prashant Vithani
>Priority: Minor
> Fix For: 5.3.0
>
> Attachments: PHOENIX-5258-4.x-HBase-1.4.001.patch, 
> PHOENIX-5258-4.x-HBase-1.4.patch, PHOENIX-5258-master.001.patch, 
> PHOENIX-5258-master.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently, CsvBulkLoadTool does not support reading the header from the input 
> CSV and expects the content of the CSV to match the table schema. Header 
> support can be added to dynamically map the input columns to the table schema.
> The proposed solution is to introduce another option for the tool 
> `–parse-header`. If this option is passed, the input columns list is 
> constructed by reading the first line of the input CSV file.
>  * If there is only one file, read the header from the first line and 
> generate the `ColumnInfo` list.
>  * If there are multiple files, read the header from all the files, and throw 
> an error if the headers across files do not match.
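
Proposed usage, assuming the option lands as described above (jar path, table name, and input path are illustrative):

{code}
HADOOP_CLASSPATH="/etc/hbase/conf" hadoop jar phoenix-client.jar \
    org.apache.phoenix.mapreduce.CsvBulkLoadTool \
    --table MY_TABLE --input /data/with_header.csv --parse-header
{code}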



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (PHOENIX-5648) Improve IndexScrutinyTool's performance by moving comparison logic to server side

2022-06-23 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-5648.
--
Fix Version/s: (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)
   Resolution: Won't Fix

Looks like the consensus earlier was that this wouldn't help much. If anyone 
wants to take this back up please feel free to reopen for a future post-5.2 
release. (Note though that the IndexTool is usually much more efficient than 
IndexScrutinyTool) 

> Improve IndexScrutinyTool's performance by moving comparison logic to server 
> side
> -
>
> Key: PHOENIX-5648
> URL: https://issues.apache.org/jira/browse/PHOENIX-5648
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0, 4.14.3
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Minor
>
> If IndexScrutinyTool runs on a table with a billion rows, it takes a lot of 
> time. 
> One of the ways to improve the tool is to move the comparison to the 
> server-side. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-5750) Upsert on immutable table fails with AccessDeniedException

2022-06-23 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5750:
-
Fix Version/s: (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)

> Upsert on immutable table fails with AccessDeniedException
> --
>
> Key: PHOENIX-5750
> URL: https://issues.apache.org/jira/browse/PHOENIX-5750
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 4.14.3
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Major
> Attachments: PHOENIX-5750.4.x-HBase-1.3.v1.patch, 
> PHOENIX-5750.4.x-HBase-1.3.v2.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {code:java}
> // code placeholder
> In TableDDLPermissionsIT
> @Test
> public void testUpsertIntoImmutableTable() throws Throwable {
> startNewMiniCluster();
> final String schema = "TEST_INDEX_VIEW";
> final String tableName = "TABLE_DDL_PERMISSION_IT";
> final String phoenixTableName = schema + "." + tableName;
> grantSystemTableAccess();
> try {
> superUser1.runAs(new PrivilegedExceptionAction() {
> @Override
> public Void run() throws Exception {
> try {
> verifyAllowed(createSchema(schema), superUser1);
> verifyAllowed(onlyCreateTable(phoenixTableName), 
> superUser1);
> } catch (Throwable e) {
> if (e instanceof Exception) {
> throw (Exception)e;
> } else {
> throw new Exception(e);
> }
> }
> return null;
> }
> });
> if (isNamespaceMapped) {
> grantPermissions(unprivilegedUser.getShortName(), schema, 
> Action.WRITE, Action.READ,Action.EXEC);
> }
> // we should be able to read the data from another index as well to 
> which we have not given any access to
> // this user
> verifyAllowed(upsertRowsIntoTable(phoenixTableName), 
> unprivilegedUser);
> } finally {
> revokeAll();
> }
> }
> in BasePermissionsIT:
> AccessTestAction onlyCreateTable(final String tableName) throws SQLException {
> return new AccessTestAction() {
> @Override
> public Object run() throws Exception {
> try (Connection conn = getConnection(); Statement stmt = 
> conn.createStatement()) {
> assertFalse(stmt.execute("CREATE IMMUTABLE TABLE " + tableName
> + "(pk INTEGER not null primary key, data VARCHAR, 
> val integer)"));
> }
> return null;
> }
> };
> }
> AccessTestAction upsertRowsIntoTable(final String tableName) throws 
> SQLException {
> return new AccessTestAction() {
> @Override
> public Object run() throws Exception {
> try (Connection conn = getConnection()) {
> try (PreparedStatement pstmt = conn.prepareStatement(
> "UPSERT INTO " + tableName + " values(?, ?, ?)")) {
> for (int i = 0; i < NUM_RECORDS; i++) {
> pstmt.setInt(1, i);
> pstmt.setString(2, Integer.toString(i));
> pstmt.setInt(3, i);
> assertEquals(1, pstmt.executeUpdate());
> }
> }
> conn.commit();
> }
> return null;
> }
> };
> }{code}
>  



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-6740) Upgrade default supported Hadoop 3 version to 3.2.3 for HBase 2.5 profile

2022-06-23 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6740:
-
Summary: Upgrade default supported Hadoop 3 version to 3.2.3 for HBase 2.5 
profile  (was: Upgrade minimum supported Hadoop 3 version to 3.2.3)

> Upgrade default supported Hadoop 3 version to 3.2.3 for HBase 2.5 profile
> -
>
> Key: PHOENIX-6740
> URL: https://issues.apache.org/jira/browse/PHOENIX-6740
> Project: Phoenix
>  Issue Type: Task
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Fix For: 5.2.0
>
>
> HBase is upgrading the minimum supported Hadoop to 3.2.3 for HBase 2.5, and 
> we have a similar request from dependabot. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-6725) ConcurrentMutationException when adding column to table/view

2022-06-22 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6725:
-
Fix Version/s: 5.1.3

> ConcurrentMutationException when adding column to table/view
> 
>
> Key: PHOENIX-6725
> URL: https://issues.apache.org/jira/browse/PHOENIX-6725
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.0, 5.1.1, 4.16.0, 4.16.1, 5.1.2
>Reporter: Tanuj Khurana
>Assignee: Lokesh Khurana
>Priority: Major
> Fix For: 5.2.0, 5.1.3
>
>
> I have a single-threaded workflow, but occasionally I hit a 
> ConcurrentMutationException when adding a column to a table/view:
> Stack trace:
> {code:java}
>  2022-05-04 16:41:24,598 WARN  [main] 
> client.ConnectionManager$HConnectionImplementation: Checking master 
> connectioncom.google.protobuf.ServiceException: java.io.IOException: Call to 
> tkhurana-ltm.internal.salesforce.com:16000 failed on local exception: 
> java.io.IOException: Operation timed out
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:340)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$200(AbstractRpcClient.java:95)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:588)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1551)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2274)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1823)
>   at 
> org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:141)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4552)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.getTableDescriptor(HBaseAdmin.java:564)
> at org.apache.hadoop.hbase.client.HTable.getTableDescriptor(HTable.java:585)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTableDescriptor(ConnectionQueryServicesImpl.java:531)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.separateAndValidateProperties(ConnectionQueryServicesImpl.java:2769)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.addColumn(ConnectionQueryServicesImpl.java:2298)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:4146)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3772)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableAddColumnStatement$1.execute(PhoenixStatement.java:1487)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:414)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:395)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:383)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:206)
> Caused by: java.io.IOException: Call to x failed on local exception: 
> java.io.IOException: Operation timed out 
> at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:180)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:394)
>
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
> 
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415)
> 
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411)
> 
> at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)   
> at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)   
> at 
> org.apache.hadoop.hbase.ipc.BlockingRpcConnection.closeConn(BlockingRpcConnection.java:685)
> 
> at 
> org.apache.hadoop.hbase.ipc.BlockingRpcConnection.readResponse(BlockingRpcConnection.java:651)
>  
> at 
> org.apache.hadoop.hbase.ipc.BlockingRpcConnection.run(BlockingRpcConnection.java:343)
>   
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.IOException: Operation timed out   
> at sun.nio.ch.FileDispatcherImpl.read0(Native Method) 
> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) 
> at 

[jira] [Created] (PHOENIX-6740) Upgrade minimum supported Hadoop 3 version to 3.2.3

2022-06-22 Thread Geoffrey Jacoby (Jira)
Geoffrey Jacoby created PHOENIX-6740:


 Summary: Upgrade minimum supported Hadoop 3 version to 3.2.3
 Key: PHOENIX-6740
 URL: https://issues.apache.org/jira/browse/PHOENIX-6740
 Project: Phoenix
  Issue Type: Task
Reporter: Geoffrey Jacoby
Assignee: Geoffrey Jacoby
 Fix For: 5.2.0


HBase is upgrading the minimum supported Hadoop to 3.2.3 for HBase 2.5, and we 
have a similar request from dependabot. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (PHOENIX-6530) Fix tenantId generation for Sequential and Uniform load generators

2022-06-22 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6530.
--
Fix Version/s: 5.2.0
   (was: 4.17.0)
   Resolution: Fixed

Thanks for the patch, [~thrylokya24]

> Fix tenantId generation for Sequential and Uniform load generators
> --
>
> Key: PHOENIX-6530
> URL: https://issues.apache.org/jira/browse/PHOENIX-6530
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.17.0, 5.1.2
>Reporter: Jacob Isaac
>Assignee: thrylokya
>Priority: Major
> Fix For: 5.2.0, 5.1.3
>
>
> While running the perf workloads for 4.16, we found that the tenantId 
> generation across the various generators does not match.
> As a result, the read queries fail when the writes/data were created using a 
> different generator.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-6725) ConcurrentMutationException when adding column to table/view

2022-06-21 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6725:
-
Fix Version/s: 5.2.0

> ConcurrentMutationException when adding column to table/view
> 
>
> Key: PHOENIX-6725
> URL: https://issues.apache.org/jira/browse/PHOENIX-6725
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.0, 5.1.1, 4.16.0, 4.16.1, 5.1.2
>Reporter: Tanuj Khurana
>Assignee: Lokesh Khurana
>Priority: Major
> Fix For: 5.2.0
>
>
> I have a single-threaded workflow, but occasionally I hit a 
> ConcurrentMutationException when adding a column to a table/view:
> Stack trace:
> {code:java}
>  2022-05-04 16:41:24,598 WARN  [main] 
> client.ConnectionManager$HConnectionImplementation: Checking master 
> connectioncom.google.protobuf.ServiceException: java.io.IOException: Call to 
> tkhurana-ltm.internal.salesforce.com:16000 failed on local exception: 
> java.io.IOException: Operation timed out
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:340)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$200(AbstractRpcClient.java:95)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:588)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1551)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2274)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1823)
>   at 
> org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:141)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4552)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.getTableDescriptor(HBaseAdmin.java:564)
> at org.apache.hadoop.hbase.client.HTable.getTableDescriptor(HTable.java:585)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTableDescriptor(ConnectionQueryServicesImpl.java:531)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.separateAndValidateProperties(ConnectionQueryServicesImpl.java:2769)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.addColumn(ConnectionQueryServicesImpl.java:2298)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:4146)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3772)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableAddColumnStatement$1.execute(PhoenixStatement.java:1487)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:414)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:395)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:383)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:206)
> Caused by: java.io.IOException: Call to x failed on local exception: 
> java.io.IOException: Operation timed out 
> at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:180)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:394)
>
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
> 
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415)
> 
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411)
> 
> at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)   
> at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)   
> at 
> org.apache.hadoop.hbase.ipc.BlockingRpcConnection.closeConn(BlockingRpcConnection.java:685)
> 
> at 
> org.apache.hadoop.hbase.ipc.BlockingRpcConnection.readResponse(BlockingRpcConnection.java:651)
>  
> at 
> org.apache.hadoop.hbase.ipc.BlockingRpcConnection.run(BlockingRpcConnection.java:343)
>   
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.IOException: Operation timed out   
> at sun.nio.ch.FileDispatcherImpl.read0(Native Method) 
> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) 
> at 

[jira] [Resolved] (PHOENIX-6725) ConcurrentMutationException when adding column to table/view

2022-06-21 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6725.
--
Resolution: Fixed

Merged to master. Thanks [~lokiore]

> ConcurrentMutationException when adding column to table/view
> 
>
> Key: PHOENIX-6725
> URL: https://issues.apache.org/jira/browse/PHOENIX-6725
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.1.0, 5.1.1, 4.16.0, 4.16.1, 5.1.2
>Reporter: Tanuj Khurana
>Assignee: Lokesh Khurana
>Priority: Major
>
> I have a single-threaded workflow, but occasionally I hit a 
> ConcurrentMutationException when adding a column to a table/view:
> Stack trace:
> {code:java}
>  2022-05-04 16:41:24,598 WARN  [main] 
> client.ConnectionManager$HConnectionImplementation: Checking master 
> connectioncom.google.protobuf.ServiceException: java.io.IOException: Call to 
> tkhurana-ltm.internal.salesforce.com:16000 failed on local exception: 
> java.io.IOException: Operation timed out
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:340)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$200(AbstractRpcClient.java:95)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:588)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1551)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2274)
> at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1823)
>   at 
> org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
> at 
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:141)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4552)
> at 
> org.apache.hadoop.hbase.client.HBaseAdmin.getTableDescriptor(HBaseAdmin.java:564)
> at org.apache.hadoop.hbase.client.HTable.getTableDescriptor(HTable.java:585)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.getTableDescriptor(ConnectionQueryServicesImpl.java:531)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.separateAndValidateProperties(ConnectionQueryServicesImpl.java:2769)
> at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.addColumn(ConnectionQueryServicesImpl.java:2298)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:4146)
> at 
> org.apache.phoenix.schema.MetaDataClient.addColumn(MetaDataClient.java:3772)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableAddColumnStatement$1.execute(PhoenixStatement.java:1487)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:414)
> at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:396)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:395)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:383)
> at 
> org.apache.phoenix.jdbc.PhoenixPreparedStatement.executeUpdate(PhoenixPreparedStatement.java:206)
> Caused by: java.io.IOException: Call to x failed on local exception: 
> java.io.IOException: Operation timed out 
> at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:180)
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:394)
>
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
> 
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:415)
> 
> at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:411)
> 
> at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)   
> at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)   
> at 
> org.apache.hadoop.hbase.ipc.BlockingRpcConnection.closeConn(BlockingRpcConnection.java:685)
> 
> at 
> org.apache.hadoop.hbase.ipc.BlockingRpcConnection.readResponse(BlockingRpcConnection.java:651)
>  
> at 
> org.apache.hadoop.hbase.ipc.BlockingRpcConnection.run(BlockingRpcConnection.java:343)
>   
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.io.IOException: Operation timed out   
> at sun.nio.ch.FileDispatcherImpl.read0(Native Method) 
> at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39) 
> at 

[jira] [Updated] (PHOENIX-5495) Avoid server-server RPCs when row locks are held inside MetaDataEndpointImpl

2022-06-20 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5495?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5495:
-
Fix Version/s: 5.3.0
   (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)

> Avoid server-server RPCs when row locks are held inside MetaDataEndpointImpl
> 
>
> Key: PHOENIX-5495
> URL: https://issues.apache.org/jira/browse/PHOENIX-5495
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Chinmay Kulkarni
>Priority: Major
> Fix For: 5.3.0
>
>
> At various spots in MetaDataEndpointImpl, we acquire row locks and then while 
> locks are held, do server-to-server RPC calls. This can lead to lock 
> starvation if RPCs take too long. We should decouple such interactions as 
> much as possible.
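> A minimal, generic sketch of the decoupled shape (not the actual 
> MetaDataEndpointImpl code; the names here are illustrative only):
> {code:java}
> import java.util.List;
> import java.util.concurrent.locks.ReentrantLock;
>
> class DecoupledRpcSketch {
>     // Do only local work while the row locks are held, then make the
>     // server-to-server call after every lock has been released.
>     void mutateWithRemoteCall(List<ReentrantLock> rowLocks) {
>         String request;
>         for (ReentrantLock lock : rowLocks) {
>             lock.lock();
>         }
>         try {
>             request = "state gathered from the locked rows"; // local reads only
>         } finally {
>             for (ReentrantLock lock : rowLocks) {
>                 lock.unlock();
>             }
>         }
>         sendRpc(request); // the remote call now happens lock-free
>     }
>
>     void sendRpc(String request) {
>         // placeholder for the server-to-server RPC
>     }
> }
> {code}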



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (PHOENIX-5338) Test the empty column

2022-06-20 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-5338.
--
Resolution: Duplicate

> Test the empty column
> -
>
> Key: PHOENIX-5338
> URL: https://issues.apache.org/jira/browse/PHOENIX-5338
> Project: Phoenix
>  Issue Type: Test
>Reporter: Kadir OZDEMIR
>Assignee: Jacob Isaac
>Priority: Major
>
> Every Phoenix table includes a shadow column called the empty column. We 
> need an integration test to verify the following properties of the empty 
> column:
>  # Every Phoenix table (data or index) should have the empty column
>  # Every HBase mutation (full or partial row) for a Phoenix table should 
> include the empty column cell
>  # Removing/adding columns from/to a Phoenix table should not impact the 
> above empty column properties
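> A hedged sketch of how property 2 could be checked by scanning the raw HBase 
> cells; the family/qualifier below ("0"/"_0") are the common defaults, and 
> tables with encoded columns use a different qualifier:
> {code:java}
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.client.ResultScanner;
> import org.apache.hadoop.hbase.client.Scan;
> import org.apache.hadoop.hbase.client.Table;
> import org.apache.hadoop.hbase.util.Bytes;
>
> class EmptyColumnCheck {
>     // Assert that every raw row in the physical table carries the empty
>     // column cell.
>     static void assertEmptyColumnPresent(Connection hconn, String physicalName)
>             throws Exception {
>         byte[] family = Bytes.toBytes("0");
>         byte[] emptyQualifier = Bytes.toBytes("_0");
>         try (Table table = hconn.getTable(TableName.valueOf(physicalName));
>                 ResultScanner scanner = table.getScanner(new Scan())) {
>             for (Result row : scanner) {
>                 if (!row.containsColumn(family, emptyQualifier)) {
>                     throw new AssertionError("Row "
>                         + Bytes.toStringBinary(row.getRow())
>                         + " is missing the empty column cell");
>                 }
>             }
>         }
>     }
> }
> {code}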



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-5338) Test the empty column

2022-06-20 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5338:
-
Fix Version/s: (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)

> Test the empty column
> -
>
> Key: PHOENIX-5338
> URL: https://issues.apache.org/jira/browse/PHOENIX-5338
> Project: Phoenix
>  Issue Type: Test
>Reporter: Kadir OZDEMIR
>Assignee: Jacob Isaac
>Priority: Major
>
> Every Phoenix table includes a shadow column called the empty column. We 
> need an integration test to verify the following properties of the empty 
> column:
>  # Every Phoenix table (data or index) should have the empty column
>  # Every HBase mutation (full or partial row) for a Phoenix table should 
> include the empty column cell
>  # Removing/adding columns from/to a Phoenix table should not impact the 
> above empty column properties



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-5632) Add more information to SYSTEM.TASK TASK_DATA field apart from the task status

2022-06-20 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5632:
-
Fix Version/s: 5.3.0
   (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)

> Add more information to SYSTEM.TASK TASK_DATA field apart from the task status
> --
>
> Key: PHOENIX-5632
> URL: https://issues.apache.org/jira/browse/PHOENIX-5632
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Minor
>  Labels: beginner, newbie
> Fix For: 5.3.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> It would be helpful for debugging if we could add more information to the 
> TASK_DATA json that is upserted into SYSTEM.TASK apart from just the task 
> status. For example, in failure cases, perhaps we can add the stack trace 
> for the failing task.
>  
> Ideas:
>  * Stacktrace in case of error
>  * Time taken for task to complete
>  * Name(s) of deleted child view(s)/table(s) per task
>  * Task_type column is represented by int; may be useful to include task type 
> in task_data column
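> A hedged sketch of what a richer TASK_DATA payload could look like (the 
> field names here are invented, and Jackson is just one possible JSON 
> library):
> {code:java}
> import java.io.PrintWriter;
> import java.io.StringWriter;
>
> import com.fasterxml.jackson.databind.ObjectMapper;
> import com.fasterxml.jackson.databind.node.ObjectNode;
>
> class TaskDataBuilder {
>     static String buildTaskData(String status, String taskType,
>             long durationMs, Throwable error) throws Exception {
>         ObjectMapper mapper = new ObjectMapper();
>         ObjectNode node = mapper.createObjectNode();
>         node.put("TaskStatus", status);
>         node.put("TaskType", taskType);   // human-readable type, not the int
>         node.put("DurationMs", durationMs);
>         if (error != null) {
>             StringWriter trace = new StringWriter();
>             error.printStackTrace(new PrintWriter(trace));
>             node.put("Stacktrace", trace.toString());
>         }
>         return mapper.writeValueAsString(node);
>     }
> }
> {code}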



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-6085) Remove duplicate calls to getSysMutexPhysicalTableNameBytes() during the upgrade path

2022-06-20 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6085:
-
Fix Version/s: 5.2.1
   (was: 4.17.0)
   (was: 5.2.0)

> Remove duplicate calls to getSysMutexPhysicalTableNameBytes() during the 
> upgrade path
> -
>
> Key: PHOENIX-6085
> URL: https://issues.apache.org/jira/browse/PHOENIX-6085
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Assignee: Richárd Antal
>Priority: Minor
>  Labels: phoenix-hardening, quality-improvement
> Fix For: 5.2.1
>
> Attachments: PHOENIX-6085.4.x.v1.patch, PHOENIX-6085.master.v1.patch
>
>
> We already make this call inside 
> [CQSI.acquireUpgradeMutex()|https://github.com/apache/phoenix/blob/1922895dfe5960dc025709b04acfaf974d3959dc/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L4220]
>  and then call writeMutexCell() which calls this again 
> [here|https://github.com/apache/phoenix/blob/1922895dfe5960dc025709b04acfaf974d3959dc/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L4244].
>  
> We should move this to inside writeMutexCell() itself and throw 
> UpgradeInProgressException if required there to avoid unnecessary expensive 
> HBase admin API calls.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (PHOENIX-5574) Disallow creating index when index.region.observer.enabled flag is false and base table is loaded with IRO coproc

2022-06-20 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-5574.
--
Fix Version/s: (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)
   Resolution: Won't Fix

Unfortunately, this requires knowledge of the TableDescriptor, which means a 
round trip to meta (either on the client or the MetadataEndpoint server side), 
so this check wouldn't be efficient. 

> Disallow creating index when index.region.observer.enabled flag is false and 
> base table is loaded with IRO coproc
> -
>
> Key: PHOENIX-5574
> URL: https://issues.apache.org/jira/browse/PHOENIX-5574
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0, 4.14.3
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Major
>
> Disallow creating index when index.region.observer.enabled flag is false and 
> base table is loaded with IRO coproc



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-6492) Validate SQL with Minicluster before Synthesizing with SchemaTool

2022-06-20 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6492:
-
Fix Version/s: 5.3.0
   (was: 4.17.0)
   (was: 5.2.0)

> Validate SQL with Minicluster before Synthesizing with SchemaTool
> -
>
> Key: PHOENIX-6492
> URL: https://issues.apache.org/jira/browse/PHOENIX-6492
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Minor
> Fix For: 5.3.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (PHOENIX-6732) PherfMainIT and DataIngestIT have failing tests

2022-06-20 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6732.
--
Resolution: Duplicate

> PherfMainIT and DataIngestIT have failing tests
> ---
>
> Key: PHOENIX-6732
> URL: https://issues.apache.org/jira/browse/PHOENIX-6732
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.2.0
>Reporter: Geoffrey Jacoby
>Assignee: Jacob Isaac
>Priority: Blocker
> Fix For: 5.2.0
>
>
> PherfMainIT and DriverIngestIT have consistently failing IT tests, which can 
> be reproduced both locally and in Yetus. (This was shown recently in the test 
> run for PHOENIX-6554, which is a pherf improvement.)
> [ERROR] Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 69.393 s <<< FAILURE! - in org.apache.phoenix.pherf.DataIngestIT
> [ERROR] org.apache.phoenix.pherf.DataIngestIT.testColumnRulesApplied  Time 
> elapsed: 0.369 s  <<< FAILURE!
> java.lang.AssertionError: Expected 100 rows to have been inserted 
> expected:<30> but was:<31>
> [ERROR] org.apache.phoenix.pherf.PherfMainIT.testQueryTimeout  Time elapsed: 
> 15.531 s  <<< ERROR!
> java.io.FileNotFoundException: 
> /tmp/RESULTS/RESULT_COMBINED_2022-06-15_05-12-32_detail.csv (No such file or 
> directory)
> [ERROR] org.apache.phoenix.pherf.PherfMainIT.testNoQueryTimeout  Time 
> elapsed: 9.339 s  <<< ERROR!
> java.io.FileNotFoundException: 
> /tmp/RESULTS/RESULT_COMBINED_2022-06-15_05-12-23_detail.csv (No such file or 
> directory)



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (PHOENIX-6733) Ref count leaked test failures

2022-06-15 Thread Geoffrey Jacoby (Jira)
Geoffrey Jacoby created PHOENIX-6733:


 Summary: Ref count leaked test failures
 Key: PHOENIX-6733
 URL: https://issues.apache.org/jira/browse/PHOENIX-6733
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.2.0
Reporter: Geoffrey Jacoby
 Fix For: 5.2.0


In pretty much every recent Yetus test run, some tests have flapped in the 
AfterClass teardown logic, which checks for HBase Store reference leaks. The 
error message is "Ref count leaked", and some common suites this happens to 
are:
DateTimeIT
InListIT
SequenceIT
IndexToolForDeleteBeforeRebuildIT
SpooledTmpFileDeleteIT

I haven't had much luck trying to reproduce this locally. It's also not clear 
yet whether the root cause is an HBase error or a Phoenix one. (And if it's a 
Phoenix one, is the bug with something in Phoenix or with the resource check?) 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-6732) PherfMainIT and DataIngestIT have failing tests

2022-06-15 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-6732:
-
Summary: PherfMainIT and DataIngestIT have failing tests  (was: PherfMainIT 
and DriverIngestIT have failing tests)

> PherfMainIT and DataIngestIT have failing tests
> ---
>
> Key: PHOENIX-6732
> URL: https://issues.apache.org/jira/browse/PHOENIX-6732
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.2.0
>Reporter: Geoffrey Jacoby
>Assignee: Jacob Isaac
>Priority: Blocker
> Fix For: 5.2.0
>
>
> PherfMainIT and DriverIngestIT have consistently failing IT tests, which can 
> be reproduced both locally and in Yetus. (This was shown recently in the test 
> run for PHOENIX-6554, which is a pherf improvement.)
> [ERROR] Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 69.393 s <<< FAILURE! - in org.apache.phoenix.pherf.DataIngestIT
> [ERROR] org.apache.phoenix.pherf.DataIngestIT.testColumnRulesApplied  Time 
> elapsed: 0.369 s  <<< FAILURE!
> java.lang.AssertionError: Expected 100 rows to have been inserted 
> expected:<30> but was:<31>
> [ERROR] org.apache.phoenix.pherf.PherfMainIT.testQueryTimeout  Time elapsed: 
> 15.531 s  <<< ERROR!
> java.io.FileNotFoundException: 
> /tmp/RESULTS/RESULT_COMBINED_2022-06-15_05-12-32_detail.csv (No such file or 
> directory)
> [ERROR] org.apache.phoenix.pherf.PherfMainIT.testNoQueryTimeout  Time 
> elapsed: 9.339 s  <<< ERROR!
> java.io.FileNotFoundException: 
> /tmp/RESULTS/RESULT_COMBINED_2022-06-15_05-12-23_detail.csv (No such file or 
> directory)



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Created] (PHOENIX-6732) PherfMainIT and DriverIngestIT have failing tests

2022-06-15 Thread Geoffrey Jacoby (Jira)
Geoffrey Jacoby created PHOENIX-6732:


 Summary: PherfMainIT and DriverIngestIT have failing tests
 Key: PHOENIX-6732
 URL: https://issues.apache.org/jira/browse/PHOENIX-6732
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.2.0
Reporter: Geoffrey Jacoby
Assignee: Jacob Isaac
 Fix For: 5.2.0


PherfMainIT and DriverIngestIT have consistently failing IT tests, which can be 
reproduced both locally and in Yetus. (This was shown recently in the test run 
for PHOENIX-6554, which is a pherf improvement.)

[ERROR] Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 69.393 
s <<< FAILURE! - in org.apache.phoenix.pherf.DataIngestIT

[ERROR] org.apache.phoenix.pherf.DataIngestIT.testColumnRulesApplied  Time 
elapsed: 0.369 s  <<< FAILURE!
java.lang.AssertionError: Expected 100 rows to have been inserted expected:<30> 
but was:<31>

[ERROR] org.apache.phoenix.pherf.PherfMainIT.testQueryTimeout  Time elapsed: 
15.531 s  <<< ERROR!
java.io.FileNotFoundException: 
/tmp/RESULTS/RESULT_COMBINED_2022-06-15_05-12-32_detail.csv (No such file or 
directory)

[ERROR] org.apache.phoenix.pherf.PherfMainIT.testNoQueryTimeout  Time elapsed: 
9.339 s  <<< ERROR!
java.io.FileNotFoundException: 
/tmp/RESULTS/RESULT_COMBINED_2022-06-15_05-12-23_detail.csv (No such file or 
directory)




--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (PHOENIX-6554) Pherf CLI option long/short option names do not follow conventions

2022-06-15 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6554.
--
Fix Version/s: 5.2.0
 Release Note: Pherf CLI long option names can now be given with the 
conventional Unix double dash (--) in addition to a single dash (-). The 
single-dash form for long options does not follow the Unix convention, but it 
is still accepted for backward compatibility. 
   Resolution: Fixed

Merged to master. Thanks [~ankurmahe]!

> Pherf CLI option long/short option names do not follow conventions
> --
>
> Key: PHOENIX-6554
> URL: https://issues.apache.org/jira/browse/PHOENIX-6554
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.2.0
>Reporter: Istvan Toth
>Assignee: Ankur Maheshwari
>Priority: Minor
> Fix For: 5.2.0
>
>
> The Pherf script does not use long and short option names consistently.
> For example:
> -t and --thin are for specifying the thin PQS URL, 
> and 
> -z and --zookeeper are for the ZK quorum, 
> but 
> -schemaFile is used to specify the schema file, and 
> --schemaFile does not work.
> IMO, options that look like long options should also be accepted with a double 
> dash, or we could just invent new short options for them (which would break 
> backwards compatibility).
> i.e. instead of 
> {code:java}
> options.addOption("schemaFile", true,
> "Regex or file name for the Test phoenix table schema .sql to 
> use.");
> {code}
> we could have one of the following:
> {code:java}
> options.addOption("sf", "schemaFile", true,
> "Regex or file name for the Test phoenix table schema .sql to 
> use.");
> options.addOption("schemaFile", "schemaFile", true,
> "Regex or file name for the Test phoenix table schema .sql to 
> use.");
> {code}
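> For illustration, with both a short and a long name registered, commons-cli 
> accepts the conventional double-dash form as well as the short form (a 
> standalone example, not the actual Pherf code):
> {code:java}
> import org.apache.commons.cli.CommandLine;
> import org.apache.commons.cli.DefaultParser;
> import org.apache.commons.cli.Options;
>
> public class PherfOptionDemo {
>     public static void main(String[] args) throws Exception {
>         Options options = new Options();
>         options.addOption("sf", "schemaFile", true,
>                 "Regex or file name for the Test phoenix table schema .sql to use.");
>         // Both "-sf foo.sql" and "--schemaFile foo.sql" parse successfully.
>         CommandLine cmd = new DefaultParser().parse(options, args);
>         System.out.println(cmd.getOptionValue("schemaFile"));
>     }
> }
> {code}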



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (PHOENIX-6341) Enable running IT tests from PHERF module during builds and patch checkins

2022-06-15 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6341.
--
Resolution: Cannot Reproduce

> Enable running IT tests from PHERF module during builds and patch checkins
> --
>
> Key: PHOENIX-6341
> URL: https://issues.apache.org/jira/browse/PHOENIX-6341
> Project: Phoenix
>  Issue Type: Test
>Reporter: Jacob Isaac
>Assignee: Kiran Kumar Maturi
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (PHOENIX-5587) Update documentation for secondary indexes

2022-06-15 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-5587.
--
Fix Version/s: 5.1.3
   (was: 4.17.0)
   (was: 4.16.2)
 Assignee: Kadir Ozdemir
   Resolution: Implemented

Checked the website and the documentation has been updated -- I think Kadir did 
this. 

> Update documentation for secondary indexes
> --
>
> Key: PHOENIX-5587
> URL: https://issues.apache.org/jira/browse/PHOENIX-5587
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.1.0, 4.14.3
>Reporter: Geoffrey Jacoby
>Assignee: Kadir Ozdemir
>Priority: Major
> Fix For: 5.2.0, 5.1.3
>
>
> Phoenix 4.14.3 and 4.15 (and the forthcoming 5.1) have a major revamp of the 
> secondary index framework, which requires manual upgrade steps on the part of 
> operators. These need to be documented in the Phoenix website docs. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (PHOENIX-5649) IndexScrutinyTool is very slow on view-indexes

2022-06-15 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-5649.
--
Fix Version/s: (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)
   Resolution: Not A Bug

This was because of the particular schemas being used at the time. 

> IndexScrutinyTool is very slow on view-indexes
> --
>
> Key: PHOENIX-5649
> URL: https://issues.apache.org/jira/browse/PHOENIX-5649
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.3
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Major
>
> From view-index to view, it scrutinizes about 7 rows per minute with batch 
> size 1. 
> From view to view-index, it is about 1000 rows per minute with batch size 1, 
> which is also very slow. 



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-5518) Unittests for global index read repair count

2022-06-15 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5518:
-
Fix Version/s: 5.2.1
   (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)

> Unittests for global index read repair count
> 
>
> Key: PHOENIX-5518
> URL: https://issues.apache.org/jira/browse/PHOENIX-5518
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>Assignee: Kadir OZDEMIR
>Priority: Major
> Fix For: 5.2.1
>
>
> [~kadir] and I were tracking down a scenario where the read repair kept 
> increasing.
> It turned out not to be a bug, but we realized that there is no test that 
> checks whether the read repair count is as expected, since correctness is 
> guaranteed either way.
> So let's add a test case based on the read repairs metric we added some time 
> back.
> I will not have time to work on this; just filing in case somebody does.
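> A rough sketch of what such a test could assert (the metric accessor below 
> is hypothetical; the real counter is the read-repair metric referenced 
> above):
> {code:java}
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.ResultSet;
> import java.sql.Statement;
>
> class ReadRepairCountSketch {
>     // Hypothetical accessor; the real metric name/API may differ.
>     static long getReadRepairCount() {
>         return 0L;
>     }
>
>     static void assertRepairCount(String url, long expectedRepairs)
>             throws Exception {
>         long before = getReadRepairCount();
>         try (Connection conn = DriverManager.getConnection(url);
>                 Statement stmt = conn.createStatement();
>                 ResultSet rs = stmt.executeQuery(
>                     "SELECT v FROM T WHERE v = 'a'")) { // indexed read
>             while (rs.next()) {
>                 // drain the results so any repair actually runs
>             }
>         }
>         long repairs = getReadRepairCount() - before;
>         if (repairs != expectedRepairs) {
>             throw new AssertionError("Expected " + expectedRepairs
>                 + " read repairs but observed " + repairs);
>         }
>     }
> }
> {code}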



--
This message was sent by Atlassian Jira
(v8.20.7#820007)

