[jira] [Updated] (PHOENIX-5586) Add documentation for Splittable SYSTEM.CATALOG

2022-06-07 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5586:
-
Fix Version/s: (was: 4.17.0)

> Add documentation for Splittable SYSTEM.CATALOG
> ---
>
> Key: PHOENIX-5586
> URL: https://issues.apache.org/jira/browse/PHOENIX-5586
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Chinmay Kulkarni
>Priority: Major
> Fix For: 5.2.0
>
>
> There are many changes after PHOENIX-3534, especially for backwards 
> compatibility. There are also additional configurations, such as 
> "phoenix.allow.system.catalog.rollback", which allows rollback of a 
> splittable SYSTEM.CATALOG. We should document these changes.
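A minimal sketch of how such a configuration might be supplied on the client, assuming the property is read from client-side Properties (the property name comes from this issue; the class and helper names below are illustrative only, and whether the flag must also be set server-side should be confirmed against the Phoenix documentation):

```java
import java.util.Properties;

public class CatalogRollbackConfig {
    // Illustrative helper only: builds client properties with the rollback
    // flag from this issue set to true.
    public static Properties rollbackSafeProps() {
        Properties props = new Properties();
        props.setProperty("phoenix.allow.system.catalog.rollback", "true");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(
            rollbackSafeProps().getProperty("phoenix.allow.system.catalog.rollback"));
    }
}
```

These properties would then be passed to `DriverManager.getConnection(jdbcUrl, props)` when opening the Phoenix connection.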



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-5586) Add documentation for Splittable SYSTEM.CATALOG

2022-06-07 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5586:
-
Priority: Major  (was: Blocker)

> Add documentation for Splittable SYSTEM.CATALOG
> ---
>
> Key: PHOENIX-5586
> URL: https://issues.apache.org/jira/browse/PHOENIX-5586
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.17.0, 5.2.0
>
>
> There are many changes after PHOENIX-3534, especially for backwards 
> compatibility. There are also additional configurations, such as 
> "phoenix.allow.system.catalog.rollback", which allows rollback of a 
> splittable SYSTEM.CATALOG. We should document these changes.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (PHOENIX-5682) IndexTool can just update empty_column with verified if rest of index row matches

2022-06-07 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-5682.
--
Fix Version/s: (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)
   Resolution: Won't Do

We rely on the full rewrite of index rows, since it causes all of their 
timestamps to match, which is a useful property. 

> IndexTool can just update empty_column with verified if rest of index row 
> matches
> -
>
> Key: PHOENIX-5682
> URL: https://issues.apache.org/jira/browse/PHOENIX-5682
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.1, 4.14.3
>Reporter: Priyank Porwal
>Priority: Minor
>
> When upgrading from the old indexing design to the new consistent indexing, 
> IndexUpgradeTool kicks off IndexTool to rebuild the index. This rebuild 
> rewrites all index rows: even if an index row was already consistent, it is 
> rewritten and its empty_column is updated with the verified flag. 
> IndexTool could potentially update only the empty_column if the rest of the 
> index row matches the data row. This would save massive writes to the 
> underlying DFS, and avoid the side effects those writes have on replication.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-5497) When dropping a view, use the PTable for generating delete mutations for links rather than scanning SYSTEM.CATALOG

2022-06-07 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5497:
-
Fix Version/s: 5.2.1
   (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)

> When dropping a view, use the PTable for generating delete mutations for 
> links rather than scanning SYSTEM.CATALOG
> --
>
> Key: PHOENIX-5497
> URL: https://issues.apache.org/jira/browse/PHOENIX-5497
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Chinmay Kulkarni
>Priority: Major
> Fix For: 5.2.1
>
>
> When dropping a view, we should generate the delete markers for the 
> parent->child links using the view and parent's PTable rather than by issuing 
> a scan on SYSTEM.CATALOG (see 
> [this|https://github.com/apache/phoenix/blob/207ab526ee511a19ac287f61fbd2cef268c5038d/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L2310])



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-5498) When dropping a view, send delete mutations for parent->child links from client to server rather than doing server-server RPCs

2022-06-07 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5498:
-
Fix Version/s: 5.2.1
   (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)

> When dropping a view, send delete mutations for parent->child links from 
> client to server rather than doing server-server RPCs
> --
>
> Key: PHOENIX-5498
> URL: https://issues.apache.org/jira/browse/PHOENIX-5498
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Chinmay Kulkarni
>Priority: Major
> Fix For: 5.2.1
>
>
> Once we are able to generate delete mutations using the child view and parent 
> PTable, we should send the mutations directly from the client to the endpoint 
> coprocessor on SYSTEM.CHILD_LINK rather than doing a server-server RPC from 
> the SYSTEM.CATALOG region to the SYSTEM.CHILD_LINK region.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (PHOENIX-4868) Create a column attribute IS_EXCLUDED to denote a dropped derived column and remove LinkType.EXCLUDED_COLUMN

2022-06-07 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-4868.
--
Fix Version/s: (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)
   Resolution: Won't Do

This wasn't done as part of the splittable syscat project, which is pretty much 
complete at this point. If anyone wants to take this up, please feel free, but 
in the meantime, there doesn't seem to be any demand for it. 

> Create a column attribute IS_EXCLUDED to denote a dropped derived column and 
> remove LinkType.EXCLUDED_COLUMN
> 
>
> Key: PHOENIX-4868
> URL: https://issues.apache.org/jira/browse/PHOENIX-4868
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Thomas D'Silva
>Priority: Minor
>




--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (PHOENIX-5170) Update meta timestamp of parent table when dropping index

2022-06-07 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-5170.
--
Fix Version/s: (was: 4.17.0)
   (was: 4.16.2)
   Resolution: Cannot Reproduce

Closing this issue since the indexing framework has been largely rewritten 
since it was first filed. If the problem with Flume can be recreated with the 
current indexes and phoenix-flume version, feel free to re-open or file a new 
JIRA. 

> Update meta timestamp of parent table when dropping index
> -
>
> Key: PHOENIX-5170
> URL: https://issues.apache.org/jira/browse/PHOENIX-5170
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: gabry
>Priority: Major
>  Labels: phoenix
> Fix For: 5.2.0
>
> Attachments: updateParentTableMetaWhenDroppingIndex.patch
>
>
> I have a Flume client that inserts values into a Phoenix table with an 
> index named idx_abc.
> When idx_abc is dropped, Flume logs the following WARN message forever: 
> 28 Feb 2019 10:25:55,774 WARN  [hconnection-0x6fb2e162-shared--pool1-t883] 
> (org.apache.hadoop.hbase.client.AsyncProcess$AsyncRequestFutureImpl.logNoResubmit:1263)
>   - #1, table=PHOENIX:TABLE_ABC, attempt=1/3 failed=6ops, last exception: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 1121 (XCL21): Write to 
> the index failed.  disableIndexOnFailure=true, Failed to write to multiple 
> index tables: [PHOENIX:IDX_ABC] ,serverTimestamp=1551320754540,
> at 
> org.apache.phoenix.util.ServerUtil.wrapInDoNotRetryIOException(ServerUtil.java:265)
> at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:163)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:161)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:145)
> at 
> org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:623)
> at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:583)
> at 
> org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:566)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$37.call(RegionCoprocessorHost.java:1034)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1705)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1030)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3394)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2944)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:2886)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:753)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:715)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2129)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33656)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2191)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
> Caused by: java.sql.SQLException: ERROR 1121 (XCL21): Write to the index 
> failed.  disableIndexOnFailure=true, Failed to write to multiple index 
> tables: [PHOENIX:IDX_ABC]
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
> at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
> at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:162)
> ... 21 more
> Caused by: 
> org.apache.phoenix.hbase.index.exception.MultiIndexWriteFailureException:  
> disableIndexOnFailure=true, Failed to write to multiple index tables: 
> [PHOENIX:IDX_ABC]
> at 
> 

[jira] [Updated] (PHOENIX-5117) Return the count of rows scanned in HBase

2022-06-07 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5117:
-
Fix Version/s: 5.3.0
   (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)

> Return the count of rows scanned in HBase
> -
>
> Key: PHOENIX-5117
> URL: https://issues.apache.org/jira/browse/PHOENIX-5117
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.14.1
>Reporter: Chen Feng
>Assignee: Chen Feng
>Priority: Minor
> Fix For: 5.3.0
>
> Attachments: PHOENIX-5117-4.x-HBase-1.4-v1.patch, 
> PHOENIX-5117-4.x-HBase-1.4-v2.patch, PHOENIX-5117-4.x-HBase-1.4-v3.patch, 
> PHOENIX-5117-4.x-HBase-1.4-v4.patch, PHOENIX-5117-4.x-HBase-1.4-v5.patch, 
> PHOENIX-5117-4.x-HBase-1.4-v6.patch, PHOENIX-5117-v1.patch
>
>
> HBASE-5980 provides the ability to return the number of rows scanned. Such 
> metrics should also be returned by Phoenix.
> HBASE-21815 is also required.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-5653) Documentation updates for Update Cache Frequency

2022-06-07 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5653:
-
Fix Version/s: 5.2.1
   (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)

> Documentation updates for Update Cache Frequency
> 
>
> Key: PHOENIX-5653
> URL: https://issues.apache.org/jira/browse/PHOENIX-5653
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0
>Reporter: Nitesh Maheshwari
>Assignee: Nitesh Maheshwari
>Priority: Major
>  Labels: Documentation
> Fix For: 5.2.1
>
>
> The documentation for the config parameter 
> phoenix.default.update.cache.frequency is not available on the Configuration 
> page of the website. Also, the existing documentation for 'Update Cache 
> Frequency' on the Grammar page should be updated; specifically, the 
> precedence order it follows should be mentioned, i.e.:
> Table-level property > Connection-level property > Default value.
>  
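The stated precedence can be captured in a small sketch (the helper and its built-in default value are illustrative only; the actual default should be taken from the Phoenix documentation):

```java
public class UpdateCacheFrequencyPrecedence {
    // Illustrative only: resolves the effective UPDATE_CACHE_FREQUENCY using
    // the precedence described in this issue:
    // table-level property > connection-level property > default value.
    public static long effectiveFrequencyMs(Long tableLevel, Long connectionLevel,
                                            long builtInDefault) {
        if (tableLevel != null) {
            return tableLevel; // e.g. CREATE TABLE ... UPDATE_CACHE_FREQUENCY=30000
        }
        if (connectionLevel != null) {
            return connectionLevel; // e.g. phoenix.default.update.cache.frequency
        }
        return builtInDefault;
    }
}
```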



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-5274) ConnectionQueryServiceImpl#ensureNamespaceCreated and ensureTableCreated should use HBase APIs that do not require ADMIN permissions for existence checks

2022-06-07 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5274:
-
Fix Version/s: (was: 4.17.0)
   (was: 4.16.2)

> ConnectionQueryServiceImpl#ensureNamespaceCreated and ensureTableCreated 
> should use HBase APIs that do not require ADMIN permissions for existence 
> checks
> -
>
> Key: PHOENIX-5274
> URL: https://issues.apache.org/jira/browse/PHOENIX-5274
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0, 4.14.2
>Reporter: Chinmay Kulkarni
>Assignee: Ankit Jain
>Priority: Major
> Fix For: 5.2.0
>
> Attachments: PHOENIX-5274.4.x-HBase-1.5.v1.patch, 
> PHOENIX-5274.4.x-HBase-1.5.v2.patch, PHOENIX-5274.4.x-HBase-1.5.v3.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> [HBASE-22377|https://issues.apache.org/jira/browse/HBASE-22377] will 
> introduce a new API that does not require ADMIN permissions to check the 
> existence of a namespace.
> Currently, CQSI#ensureNamespaceCreated calls 
> HBaseAdmin#getNamespaceDescriptor which eventually on the server causes a 
> call to AccessController#preGetNamespaceDescriptor. This tries to acquire 
> ADMIN permissions on the namespace. We should ideally use the new API 
> provided by HBASE-22377 which does not require the phoenix client to get 
> ADMIN permissions on the namespace. We should acquire ADMIN permissions only 
> in case we need to create the namespace if it doesn't already exist.
> Similarly, CQSI#ensureTableCreated should first check the existence of a 
> table before trying to do HBaseAdmin#getTableDescriptor since this requires 
> CREATE and ADMIN permissions.
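The intended check order might look like the sketch below, assuming the HBASE-22377 API returns a plain list of namespace names (the helper operates on that list so it stays independent of a live cluster; it is not Phoenix's actual implementation):

```java
public class NamespaceExistenceCheck {
    // Sketch of the proposed flow: first consult the namespace list that a
    // non-ADMIN API such as HBASE-22377's listNamespaces would return; only
    // fall back to an ADMIN-requiring create when the namespace is absent.
    public static boolean namespaceExists(String[] namespaces, String target) {
        for (String ns : namespaces) {
            if (ns.equals(target)) {
                return true; // no ADMIN permission needed on this path
            }
        }
        return false; // caller would now create the namespace (requires ADMIN)
    }
}
```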



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (PHOENIX-5274) ConnectionQueryServiceImpl#ensureNamespaceCreated and ensureTableCreated should use HBase APIs that do not require ADMIN permissions for existence checks

2022-06-07 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby reassigned PHOENIX-5274:


Assignee: Geoffrey Jacoby  (was: Ankit Jain)

> ConnectionQueryServiceImpl#ensureNamespaceCreated and ensureTableCreated 
> should use HBase APIs that do not require ADMIN permissions for existence 
> checks
> -
>
> Key: PHOENIX-5274
> URL: https://issues.apache.org/jira/browse/PHOENIX-5274
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0, 4.14.2
>Reporter: Chinmay Kulkarni
>Assignee: Geoffrey Jacoby
>Priority: Major
> Fix For: 5.2.0
>
> Attachments: PHOENIX-5274.4.x-HBase-1.5.v1.patch, 
> PHOENIX-5274.4.x-HBase-1.5.v2.patch, PHOENIX-5274.4.x-HBase-1.5.v3.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> [HBASE-22377|https://issues.apache.org/jira/browse/HBASE-22377] will 
> introduce a new API that does not require ADMIN permissions to check the 
> existence of a namespace.
> Currently, CQSI#ensureNamespaceCreated calls 
> HBaseAdmin#getNamespaceDescriptor which eventually on the server causes a 
> call to AccessController#preGetNamespaceDescriptor. This tries to acquire 
> ADMIN permissions on the namespace. We should ideally use the new API 
> provided by HBASE-22377 which does not require the phoenix client to get 
> ADMIN permissions on the namespace. We should acquire ADMIN permissions only 
> in case we need to create the namespace if it doesn't already exist.
> Similarly, CQSI#ensureTableCreated should first check the existence of a 
> table before trying to do HBaseAdmin#getTableDescriptor since this requires 
> CREATE and ADMIN permissions.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-5322) Upsert on a view of an indexed table fails with ArrayIndexOutOfBound Exception

2022-06-07 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5322:
-
Fix Version/s: 5.2.1

> Upsert on a view of an indexed table fails with ArrayIndexOutOfBound Exception
> --
>
> Key: PHOENIX-5322
> URL: https://issues.apache.org/jira/browse/PHOENIX-5322
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.2
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Major
> Fix For: 5.2.1
>
>
> {code:java}
> // code placeholder
> public void testUpsertOnViewWithIndexedTable() throws SQLException {
>Properties prop = new Properties();
>Connection conn = DriverManager.getConnection(getUrl(), prop);
>conn.setAutoCommit(true);
>conn.createStatement().execute("CREATE TABLE IF NOT EXISTS us_population 
> (\n" +
>  "  state CHAR(2) NOT NULL,\n" +
>  "  city VARCHAR NOT NULL,\n" +
>  "  population BIGINT,\n" +
>  "  CONSTRAINT my_pk PRIMARY KEY (state, city)) 
> COLUMN_ENCODED_BYTES=0");
>PreparedStatement ps = conn.prepareStatement("UPSERT INTO us_population 
> VALUES('NY','New York',8143197)");
>ps.executeUpdate();
>ps = conn.prepareStatement("UPSERT INTO us_population VALUES('CA','Los 
> Angeles',3844829)");
>ps.executeUpdate();
>ps = conn.prepareStatement("UPSERT INTO us_population 
> VALUES('IL','Chicago',2842518)");
>ps.executeUpdate();
>ps = conn.prepareStatement("UPSERT INTO us_population 
> VALUES('TX','Houston',2016582)");
>ps.executeUpdate();
>ps = conn.prepareStatement("UPSERT INTO us_population 
> VALUES('PA','Philadelphia',1463281)");
>ps.executeUpdate();
>ps = conn.prepareStatement("UPSERT INTO us_population 
> VALUES('AZ','Phoenix',1461575)");
>ps.executeUpdate();
>ps = conn.prepareStatement("UPSERT INTO us_population VALUES('TX','San 
> Antonio',1256509)");
>ps.executeUpdate();
>ps = conn.prepareStatement("UPSERT INTO us_population VALUES('CA','San 
> Diego',1255540)");
>ps.executeUpdate();
>ps = conn.prepareStatement("UPSERT INTO us_population 
> VALUES('TX','Dallas',1213825)");
>ps.executeUpdate();
>ps = conn.prepareStatement("UPSERT INTO us_population VALUES('CA','San 
> Jose',912332)");
>ps.executeUpdate();
>conn.createStatement().execute("CREATE VIEW IF NOT EXISTS 
> us_population_gv" +
>  "(city_area INTEGER, avg_fam_size INTEGER) AS " +
>  "SELECT * FROM us_population WHERE state = 'CA'");
>conn.createStatement().execute("CREATE INDEX IF NOT EXISTS 
> us_population_gv_gi ON " +
>  "us_population_gv (city_area) INCLUDE (population)");
>conn.createStatement().execute("CREATE INDEX IF NOT EXISTS 
> us_population_gi ON " +
>  "us_population (population)");
>ps = conn.prepareStatement("UPSERT INTO 
> us_population_gv(state,city,population,city_area,avg_fam_size) " +
> "VALUES('CA','Santa Barbara',912332,1300,4)");
>ps.executeUpdate();
> }
> {code}
> Exception: 
> java.lang.ArrayIndexOutOfBoundsException: -1
>   at java.util.ArrayList.elementData(ArrayList.java:422)
>   at java.util.ArrayList.get(ArrayList.java:435)
>   at 
> org.apache.phoenix.index.IndexMaintainer.initCachedState(IndexMaintainer.java:1631)
>   at 
> org.apache.phoenix.index.IndexMaintainer.<init>(IndexMaintainer.java:564)
>   at 
> org.apache.phoenix.index.IndexMaintainer.create(IndexMaintainer.java:144)
>   at 
> org.apache.phoenix.schema.PTableImpl.getIndexMaintainer(PTableImpl.java:1499)
>   at 
> org.apache.phoenix.index.IndexMaintainer.serialize(IndexMaintainer.java:226)
>   at 
> org.apache.phoenix.index.IndexMaintainer.serializeServerMaintainedIndexes(IndexMaintainer.java:203)
>   at 
> org.apache.phoenix.index.IndexMaintainer.serialize(IndexMaintainer.java:187)
>   at 
> org.apache.phoenix.schema.PTableImpl.getIndexMaintainers(PTableImpl.java:1511)
>   at org.apache.phoenix.execute.MutationState.send(MutationState.java:963)
>   at 
> org.apache.phoenix.execute.MutationState.send(MutationState.java:1432)
>   at 
> org.apache.phoenix.execute.MutationState.commit(MutationState.java:1255)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:673)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection$3.call(PhoenixConnection.java:669)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixConnection.commit(PhoenixConnection.java:669)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:412)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:392)
>   at 

[jira] [Resolved] (PHOENIX-5020) PhoenixMRJobSubmitter should use a long timeout when getting candidate jobs

2022-06-07 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-5020.
--
Resolution: Won't Fix

> PhoenixMRJobSubmitter should use a long timeout when getting candidate jobs
> ---
>
> Key: PHOENIX-5020
> URL: https://issues.apache.org/jira/browse/PHOENIX-5020
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Minor
>  Labels: SFDC
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> If an environment has a huge SYSTEM.CATALOG (such as one with many views), 
> the query in getCandidateJobs can time out. Because of PHOENIX-4936, this 
> makes it look like there are no indexes that need an async rebuild. In 
> addition to fixing PHOENIX-4936, we should extend the timeout. 
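One way to extend the timeout for that one query, sketched under the assumption that the standard client property phoenix.query.timeoutMs governs it (illustrative only; the ten-minute figure is arbitrary, and this issue was ultimately closed Won't Fix):

```java
import java.util.Properties;

public class LongTimeoutProps {
    // Illustrative: client properties with a generous timeout for the
    // getCandidateJobs catalog query.
    public static Properties withLongTimeout() {
        Properties props = new Properties();
        props.setProperty("phoenix.query.timeoutMs",
                String.valueOf(10L * 60L * 1000L)); // 600000 ms
        return props;
    }
}
```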



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (PHOENIX-4216) Figure out why tests randomly fail with master not able to initialize in 200 seconds

2022-06-07 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-4216.
--
Fix Version/s: (was: 4.17.0)
   (was: 4.16.2)
   Resolution: Done

The associated HBase JIRA has been worked, and I don't think this error 
happens anymore.

> Figure out why tests randomly fail with master not able to initialize in 200 
> seconds
> 
>
> Key: PHOENIX-4216
> URL: https://issues.apache.org/jira/browse/PHOENIX-4216
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0, 4.14.3
>Reporter: Samarth Jain
>Priority: Major
>  Labels: phoenix-hardening, precommit, quality-improvement
> Fix For: 5.2.0
>
> Attachments: Precommit-3849.log
>
>
> Sample failure:
>  [https://builds.apache.org/job/PreCommit-PHOENIX-Build/1450//testReport/]
> [~apurtell] - Looking at the thread dump in the above link, do you see why 
> master startup failed? I couldn't see any obvious deadlocks.
>  
> Exception stacktrace:
> org.apache.hadoop.hbase.regionserver.HRegionServer(2414): Master rejected 
> startup because clock is out of sync
> org.apache.hadoop.hbase.ClockOutOfSyncException: 
> org.apache.hadoop.hbase.ClockOutOfSyncException: Server 
> 2a3b1691db3a,42899,1590685404919 has been rejected; Reported time is too far 
> out of sync with master.  Time difference of 1590685396313ms > max allowed of 
> 3ms at 
> org.apache.hadoop.hbase.master.ServerManager.checkClockSkew(ServerManager.java:411)
>  at 
> org.apache.hadoop.hbase.master.ServerManager.regionServerStartup(ServerManager.java:277)
>  at 
> org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:368)
>  at 
> org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$2.callBlockingMethod(RegionServerStatusProtos.java:8615)
>  at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2417) at 
> org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124) at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:186) at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:166)
>  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>  at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>  at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.instantiateException(RemoteWithExtrasException.java:95)
>  at 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException.unwrapRemoteException(RemoteWithExtrasException.java:85)
>  at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.makeIOExceptionOfException(ProtobufUtil.java:372)
>  at 
> org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:331)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.reportForDuty(HRegionServer.java:2412)
>  at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:960)
>  at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:158)
>  at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:110)
>  at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:142)
>  at java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:360) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1744)
>  at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:334) 
> at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:139)
>  at java.lang.Thread.run(Thread.java:748)Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.ClockOutOfSyncException):
>  org.apache.hadoop.hbase.ClockOutOfSyncException: Server 
> 2a3b1691db3a,42899,1590685404919 has been rejected; Reported time is too far 
> out of sync with master.  Time difference of 1590685396313ms > max allowed of 
> 3ms at 
> org.apache.hadoop.hbase.master.ServerManager.checkClockSkew(ServerManager.java:411)
>  at 
> org.apache.hadoop.hbase.master.ServerManager.regionServerStartup(ServerManager.java:277)
>  at 
> org.apache.hadoop.hbase.master.MasterRpcServices.regionServerStartup(MasterRpcServices.java:368)
>  at 
> 

[jira] [Updated] (PHOENIX-4195) Deleting view rows with extended PKs through the base table silently fails

2022-06-07 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-4195:
-
Fix Version/s: 5.2.1
   (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)

> Deleting view rows with extended PKs through the base table silently fails
> ---
>
> Key: PHOENIX-4195
> URL: https://issues.apache.org/jira/browse/PHOENIX-4195
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Geoffrey Jacoby
>Priority: Major
> Fix For: 5.2.1
>
> Attachments: test.diff
>
>
> The attached test fails.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-3165) System table integrity check and repair tool

2022-06-07 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-3165:
-
Fix Version/s: 5.2.1
   (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)

> System table integrity check and repair tool
> 
>
> Key: PHOENIX-3165
> URL: https://issues.apache.org/jira/browse/PHOENIX-3165
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Andrew Kyle Purtell
>Assignee: Lokesh Khurana
>Priority: Critical
>  Labels: phoenix-hardening
> Fix For: 5.2.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When the Phoenix system tables become corrupt, recovery is a painstaking 
> process of low-level examination of table contents and manipulation of same 
> with the HBase shell. This is very difficult work that provides no margin of 
> safety, and it is a critical gap in terms of usability.
> At the OS level, we have fsck.
> At the HDFS level, we have fsck (integrity checking only, though)
> At the HBase level, we have hbck. 
> At the Phoenix level, we lack a system table repair tool. 
> Implement a tool that:
> - Does not depend on the Phoenix client.
> - Supports integrity checking of SYSTEM tables. Check for the existence of 
> all required columns in entries. Check that entries exist for all Phoenix 
> managed tables (implies Phoenix should add supporting advisory-only metadata 
> to the HBase table schemas). Check that serializations are valid. 
> - Supports complete repair of SYSTEM.CATALOG and recreation, if necessary, of 
> other tables like SYSTEM.STATS which can be dropped to recover from an 
> emergency. We should be able to drop SYSTEM.CATALOG (or any other SYSTEM 
> table), run the tool, and have a completely correct recreation of 
> SYSTEM.CATALOG available at the end of its execution.
> - To the extent we have or introduce cross-system-table invariants, check 
> them and offer a repair or reconstruction option.
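The column-existence check in the requirements above could start from a predicate like this (hypothetical names; the real set of required column qualifiers would come from SYSTEM.CATALOG's schema):

```java
import java.util.Set;
import java.util.TreeSet;

public class CatalogIntegrity {
    // Hypothetical integrity predicate: given the column qualifiers actually
    // present in a catalog row, report which required qualifiers are missing.
    public static Set<String> missingRequiredColumns(Set<String> present,
                                                     Set<String> required) {
        Set<String> missing = new TreeSet<>(required);
        missing.removeAll(present);
        return missing; // empty set means the row passes this check
    }
}
```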



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Assigned] (PHOENIX-3165) System table integrity check and repair tool

2022-06-07 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby reassigned PHOENIX-3165:


Assignee: Lokesh Khurana  (was: Xinyi Yan)

> System table integrity check and repair tool
> 
>
> Key: PHOENIX-3165
> URL: https://issues.apache.org/jira/browse/PHOENIX-3165
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Andrew Kyle Purtell
>Assignee: Lokesh Khurana
>Priority: Critical
>  Labels: phoenix-hardening
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When the Phoenix system tables become corrupt, recovery is a painstaking 
> process of low-level examination of table contents and manipulation of same 
> with the HBase shell. This is very difficult work that provides no margin of 
> safety, and it is a critical gap in terms of usability.
> At the OS level, we have fsck.
> At the HDFS level, we have fsck (integrity checking only, though)
> At the HBase level, we have hbck. 
> At the Phoenix level, we lack a system table repair tool. 
> Implement a tool that:
> - Does not depend on the Phoenix client.
> - Supports integrity checking of SYSTEM tables. Check for the existence of 
> all required columns in entries. Check that entries exist for all Phoenix 
> managed tables (implies Phoenix should add supporting advisory-only metadata 
> to the HBase table schemas). Check that serializations are valid. 
> - Supports complete repair of SYSTEM.CATALOG and recreation, if necessary, of 
> other tables like SYSTEM.STATS which can be dropped to recover from an 
> emergency. We should be able to drop SYSTEM.CATALOG (or any other SYSTEM 
> table), run the tool, and have a completely correct recreation of 
> SYSTEM.CATALOG available at the end of its execution.
> - To the extent we have or introduce cross-system-table invariants, check 
> them and offer a repair or reconstruction option.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-3817) VerifyReplication using SQL

2022-06-07 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-3817:
-
Fix Version/s: 5.3.0
   (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)

> VerifyReplication using SQL
> ---
>
> Key: PHOENIX-3817
> URL: https://issues.apache.org/jira/browse/PHOENIX-3817
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Akshita Malhotra
>Priority: Minor
> Fix For: 5.3.0
>
> Attachments: PHOENIX-3817-final.patch, PHOENIX-3817-final2.patch, 
> PHOENIX-3817.v1.patch, PHOENIX-3817.v2.patch, PHOENIX-3817.v3.patch, 
> PHOENIX-3817.v4.patch, PHOENIX-3817.v5.patch, PHOENIX-3817.v6.patch, 
> PHOENIX-3817.v7.patch
>
>
> Certain use cases may copy or replicate a subset of a table to a different 
> table or cluster. For example, application topologies may map data for 
> specific tenants to different peer clusters.
> It would be useful to have a Phoenix VerifyReplication tool that accepts an 
> SQL query, a target table, and an optional target cluster. The tool would 
> compare data returned by the query on the different tables and update various 
> result counters (similar to HBase's VerifyReplication).
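As a minimal, hypothetical sketch (generic Java, not the actual patch; counter names are illustrative), the comparison loop such a tool could run walks the sorted row sets from both sides and updates counters:

```java
import java.util.Set;
import java.util.SortedMap;
import java.util.TreeSet;

// Hypothetical sketch: compare rows returned by the SQL query on the
// source against the target table, updating counters similar in spirit
// to HBase's VerifyReplication.
class VerifyCounters {
    int goodRows, onlyInSource, onlyInTarget, contentMismatch;

    void compare(SortedMap<String, String> source,
                 SortedMap<String, String> target) {
        // Union of row keys from both sides, in sorted order.
        Set<String> keys = new TreeSet<>();
        keys.addAll(source.keySet());
        keys.addAll(target.keySet());
        for (String k : keys) {
            String s = source.get(k), t = target.get(k);
            if (s == null)          onlyInTarget++;    // row missing on source
            else if (t == null)     onlyInSource++;    // row missing on target
            else if (s.equals(t))   goodRows++;        // identical content
            else                    contentMismatch++; // differing content
        }
    }
}
```

In practice the two sides would be streamed scanner results rather than in-memory maps, but the counter semantics are the same.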



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (PHOENIX-6412) Consider batching uncovered column merge for local indexes

2022-06-07 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6412.
--
Resolution: Duplicate

> Consider batching uncovered column merge for local indexes
> --
>
> Key: PHOENIX-6412
> URL: https://issues.apache.org/jira/browse/PHOENIX-6412
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>Priority: Minor
> Fix For: 5.2.0
>
> Attachments: 6412-hack.txt
>
>
> Currently uncovered columns are merged row-by-row, performing a Get to the 
> data region for each matching row in the index region.
> Each Get needs to seek all the store scanners, and doing this per row is 
> quite expensive.
> Instead we could batch inside the RegionScannerFactory.getWrappedScanner() -> 
> RegionScanner.nextRaw() method. Collect N index rows and then execute a 
> single skip scan on the data region. 
> I might be able to get to that, but if there's someone who is interested in 
> taking this up I would not mind :)
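The batching idea above can be sketched generically (a hypothetical stand-in, not actual Phoenix/HBase code): buffer N index row keys, then resolve their uncovered columns with one batched lookup per batch instead of one Get per row.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of batched uncovered-column merge. "dataRegion"
// stands in for a single batched Get / skip scan against the data region.
class BatchedUncoveredMerge {
    static List<String> merge(List<String> indexRowKeys, int batchSize,
            Function<List<String>, Map<String, String>> dataRegion) {
        List<String> merged = new ArrayList<>();
        for (int i = 0; i < indexRowKeys.size(); i += batchSize) {
            List<String> batch = indexRowKeys.subList(i,
                    Math.min(i + batchSize, indexRowKeys.size()));
            // One round trip per batch instead of one per index row.
            Map<String, String> uncovered = dataRegion.apply(batch);
            for (String key : batch) {
                merged.add(key + "=" + uncovered.get(key));
            }
        }
        return merged;
    }
}
```

The batch size N trades memory and latency of the first result against the per-row seek cost the description mentions.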



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-5685) PDataTypeFactory Singleton is not thread safe

2022-06-07 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5685:
-
Fix Version/s: 5.2.1
   (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)

> PDataTypeFactory Singleton is not thread safe
> -
>
> Key: PHOENIX-5685
> URL: https://issues.apache.org/jira/browse/PHOENIX-5685
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Minor
> Fix For: 5.2.1
>
>
> The singleton class uses lazy initialization of the INSTANCE variable; 
> however, the PDataTypeFactory#getInstance method is not synchronized, so it 
> is not thread-safe. It would be good to make use of double-checked locking. 
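As a minimal sketch (an illustrative class, not the actual PDataTypeFactory code), double-checked locking with a volatile field looks like:

```java
// Hypothetical sketch of a thread-safe, lazily initialized singleton
// using double-checked locking.
class LazyFactory {
    // volatile is essential: without it, another thread could observe a
    // partially constructed instance through the unsynchronized fast path.
    private static volatile LazyFactory instance;

    private LazyFactory() {}

    static LazyFactory getInstance() {
        LazyFactory local = instance;   // single volatile read on the fast path
        if (local == null) {
            synchronized (LazyFactory.class) {
                local = instance;       // re-check under the lock
                if (local == null) {
                    instance = local = new LazyFactory();
                }
            }
        }
        return local;
    }
}
```

After the first call, callers take only the unsynchronized volatile-read path, so the lock is never contended in steady state.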



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (PHOENIX-5803) Add unit testing for classes changed in PHOENIX-5801 and PHOENIX-5802

2022-06-07 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5803:
-
Fix Version/s: 5.3.0
   (was: 4.17.0)
   (was: 5.2.0)
   (was: 4.16.2)

> Add unit testing for classes changed in PHOENIX-5801 and PHOENIX-5802
> -
>
> Key: PHOENIX-5803
> URL: https://issues.apache.org/jira/browse/PHOENIX-5803
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Chinmay Kulkarni
>Priority: Major
>  Labels: phoenix-hardening
> Fix For: 5.3.0
>
>
> WhereConstantParser should be in the util package rather than coprocessor.
> We should also refactor, removing anonymous classes and the like, in 
> BaseResultIterators, MutatingResultIteratorFactory, UpsertCompiler, etc.
> Also need to add unit tests for all these classes.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (PHOENIX-6404) Support Hadoop 2 in master

2022-06-07 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6404.
--
Resolution: Won't Do

> Support Hadoop 2 in master
> --
>
> Key: PHOENIX-6404
> URL: https://issues.apache.org/jira/browse/PHOENIX-6404
> Project: Phoenix
>  Issue Type: Wish
>  Components: core
>Affects Versions: 5.1.0
>Reporter: Istvan Toth
>Priority: Major
> Fix For: 5.2.0
>
>
> As discussed on the dev list, being able to build and use Phoenix on 
> HBase 2.x on Hadoop 2.x would be desirable.
> This probably would require adding additional maven profiles to handle the 
> different Hadoop versions during the build and test phases, modelled after 
> the HBase hadoop profiles.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (PHOENIX-5400) Table name while selecting index state is case sensitive

2022-06-07 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-5400.
--
Resolution: Not A Bug

> Table name while selecting index state is case sensitive
> 
>
> Key: PHOENIX-5400
> URL: https://issues.apache.org/jira/browse/PHOENIX-5400
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0, 4.14.2
>Reporter: Ashutosh Parekh
>Assignee: Swaroopa Kadam
>Priority: Minor
> Fix For: 4.17.0, 5.2.0, 4.16.2
>
>
> Initially, the following query is executed:
>  
> {code:java}
> CREATE TABLE IF NOT EXISTS us_population (
>  state CHAR(2) NOT NULL,
>  city VARCHAR NOT NULL,
>  population BIGINT,
>  CONSTRAINT my_pk PRIMARY KEY (state, city)) COLUMN_ENCODED_BYTES=0;
> UPSERT INTO us_population VALUES('NY','New York',8143197);
> UPSERT INTO us_population VALUES('CA','Los Angeles',3844829);
> UPSERT INTO us_population VALUES('IL','Chicago',2842518);
> UPSERT INTO us_population VALUES('TX','Houston',2016582);
> UPSERT INTO us_population VALUES('PA','Philadelphia',1463281);
> UPSERT INTO us_population VALUES('AZ','Phoenix',1461575);
> UPSERT INTO us_population VALUES('TX','San Antonio',1256509);
> UPSERT INTO us_population VALUES('CA','San Diego',1255540);
> UPSERT INTO us_population VALUES('TX','Dallas',1213825);
> UPSERT INTO us_population VALUES('CA','San Jose',912332);
> CREATE VIEW us_population_global_view (name VARCHAR,
>  age BIGINT) AS
> SELECT * FROM us_population
> WHERE state = 'CA';
> CREATE INDEX us_population_gv_gi_1 ON us_population_global_view(age) include 
> (city) async;
> {code}
>  
> Then,
> {code:java}
> org.apache.phoenix.mapreduce.index.automation.PhoenixMRJobSubmitter{code}
> is run.
> After that, the following queries lead to different output:
> {code:java}
> SELECT INDEX_STATE FROM SYSTEM.CATALOG WHERE 
> TABLE_NAME='us_population_gv_gi_1';{code}
> Output:
> {code:java}
> +-------------+
> | INDEX_STATE |
> +-------------+
> +-------------+
> No rows selected (0.076 seconds){code}
> and
> {code:java}
> SELECT INDEX_STATE FROM SYSTEM.CATALOG WHERE 
> TABLE_NAME='US_POPULATION_GV_GI_1';{code}
> Output:
> {code:java}
> +-------------+
> | INDEX_STATE |
> +-------------+
> | b           |
> |             |
> |             |
> |             |
> |             |
> +-------------+
> 5 rows selected (0.063 seconds){code}
> Only the case of the table name differs between the above queries.
> Need an appropriate resolution for this.



--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Resolved] (PHOENIX-6723) Phoenix client unable to connect on JDK14+

2022-06-07 Thread Geoffrey Jacoby (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby resolved PHOENIX-6723.
--
Release Note: Phoenix now internally depends on Apache ZooKeeper 3.5.7 and 
Apache Curator 4.2.
  Resolution: Fixed

Merged to master. Thanks for the patch, and welcome, [~wendigo]!

> Phoenix client unable to connect on JDK14+
> --
>
> Key: PHOENIX-6723
> URL: https://issues.apache.org/jira/browse/PHOENIX-6723
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Mateusz Gajewski
>Assignee: Mateusz Gajewski
>Priority: Major
> Fix For: 5.2.0
>
>
> Zookeeper versions prior to 3.5.7 are broken on JDK 14+ due to the incorrect 
> usage of InetSocketAddress.toString.
>  
> For reference, see: 
> [https://bugs.java.com/bugdatabase/view_bug.do?bug_id=8232369]



--
This message was sent by Atlassian Jira
(v8.20.7#820007)