[jira] [Updated] (PHOENIX-5265) [UMBRELLA] Phoenix Test should use object-based Plan for result comparison instead of using hard-coded comparison

2021-01-04 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-5265:
--
Fix Version/s: 4.16.0
   5.1.0

> [UMBRELLA] Phoenix Test should use object-based Plan for result comparison 
> instead of using hard-coded comparison
> --
>
> Key: PHOENIX-5265
> URL: https://issues.apache.org/jira/browse/PHOENIX-5265
> Project: Phoenix
>  Issue Type: Improvement
> Environment: {code:java}
> // code placeholder
> @Test
> public void testWithMultiCF() throws Exception {
>     int nRows = 20;
>     Connection conn = getConnection(0);
>     PreparedStatement stmt;
>     conn.createStatement().execute(
>             "CREATE TABLE " + fullTableName
>                     + "(k VARCHAR PRIMARY KEY, a.v INTEGER, b.v INTEGER, c.v INTEGER NULL, d.v INTEGER NULL) "
>                     + tableDDLOptions );
>     stmt = conn.prepareStatement("UPSERT INTO " + fullTableName + " VALUES(?,?, ?, ?, ?)");
>     byte[] val = new byte[250];
>     for (int i = 0; i < nRows; i++) {
>         stmt.setString(1, Character.toString((char)('a' + i)) + Bytes.toString(val));
>         stmt.setInt(2, i);
>         stmt.setInt(3, i);
>         stmt.setInt(4, i);
>         stmt.setInt(5, i);
>         stmt.executeUpdate();
>     }
>     conn.commit();
>     stmt = conn.prepareStatement("UPSERT INTO " + fullTableName + "(k, c.v, d.v) VALUES(?,?,?)");
>     for (int i = 0; i < 5; i++) {
>         stmt.setString(1, Character.toString((char)('a' + 'z' + i)) + Bytes.toString(val));
>         stmt.setInt(2, i);
>         stmt.setInt(3, i);
>         stmt.executeUpdate();
>     }
>     conn.commit();
>     ResultSet rs;
>     String actualExplainPlan;
>     collectStatistics(conn, fullTableName);
>     List keyRanges = getAllSplits(conn, fullTableName);
>     assertEquals(26, keyRanges.size());
>     rs = conn.createStatement().executeQuery("EXPLAIN SELECT * FROM " + fullTableName);
>     actualExplainPlan = QueryUtil.getExplainPlan(rs);
>     assertEquals(
>             "CLIENT 26-CHUNK 25 ROWS " + (columnEncoded ? ( mutable ? "12530" : "14190" )
>                     : (TransactionFactory.Provider.OMID.name().equals(transactionProvider)) ? "25320" : "12420")
>                     + " BYTES PARALLEL 1-WAY FULL SCAN OVER " + physicalTableName,
>             actualExplainPlan);
>     ConnectionQueryServices services = conn.unwrap(PhoenixConnection.class).getQueryServices();
>     List regions = services.getAllTableRegions(Bytes.toBytes(physicalTableName));
>     assertEquals(1, regions.size());
>     collectStatistics(conn, fullTableName, Long.toString(1000));
>     keyRanges = getAllSplits(conn, fullTableName);
>     boolean oneCellPerColFamilyStorageScheme = !mutable && columnEncoded;
>     boolean hasShadowCells = TransactionFactory.Provider.OMID.name().equals(transactionProvider);
>     assertEquals(oneCellPerColFamilyStorageScheme ? 14 : hasShadowCells ? 24 : 13, keyRanges.size());
>     rs = conn
>             .createStatement()
>             .executeQuery(
>                     "SELECT COLUMN_FAMILY,SUM(GUIDE_POSTS_ROW_COUNT),SUM(GUIDE_POSTS_WIDTH),COUNT(*) from \"SYSTEM\".STATS where PHYSICAL_NAME = '"
>                             + physicalTableName + "' GROUP BY COLUMN_FAMILY ORDER BY COLUMN_FAMILY");
>     assertTrue(rs.next());
>     assertEquals("A", rs.getString(1));
>     assertEquals(25, rs.getInt(2));
>     assertEquals(columnEncoded ? ( mutable ? 12530 : 14190 ) : hasShadowCells ? 25320 : 12420, rs.getInt(3));
>     assertEquals(oneCellPerColFamilyStorageScheme ? 13 : hasShadowCells ? 23 : 12, rs.getInt(4));
>     assertTrue(rs.next());
>     assertEquals("B", rs.getString(1));
>     assertEquals(oneCellPerColFamilyStorageScheme ? 25 : 20, rs.getInt(2));
>     assertEquals(columnEncoded ? ( mutable ? 5600 : 7260 ) : hasShadowCells ? 11260 : 5540, rs.getInt(3));
>     assertEquals(oneCellPerColFamilyStorageScheme ? 7 : hasShadowCells ? 10 : 5, rs.getInt(4));
>     assertTrue(rs.next());
>     assertEquals("C", rs.getString(1));
>     assertEquals(25, rs.getInt(2));
>     assertEquals(columnEncoded ? ( mutable ? 7005 : 7280 ) : hasShadowCells ? 14085 : 6930, rs.getInt(3));
>     assertEquals(hasShadowCells ? 13 : 7, rs.getInt(4));
>     assertTrue(rs.next());
>     assertEquals("D", rs.getString(1));
>     assertEquals(25, rs.getInt(2));
>     assertEquals(columnEncoded ? ( mutable ? 7005 : 7280 ) : hasShadowCells ? 14085 : 6930, rs.getInt(3));
>     assertEquals(hasShadowCells ? 13 : 7, rs.getInt(4));
>     assertFalse(rs.next());
>     // Disable stats
>     conn.createStatement().execute("ALTER TABLE " + fullTableName + " SET " + PhoenixDatabaseMetaData.GUIDE_

[jira] [Resolved] (PHOENIX-5265) [UMBRELLA] Phoenix Test should use object-based Plan for result comparison instead of using hard-coded comparison

2021-01-04 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani resolved PHOENIX-5265.
---
Release Note: New API for Explain plan queries that can be used for 
comparison of individual plan attributes.
  Resolution: Fixed
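
For context, here is a hedged sketch (not the committed test code) of how a test 
might compare individual plan attributes with the new object-based explain plan 
API instead of asserting a single hard-coded EXPLAIN string. The accessor names 
used below (optimizeQuery, getPlanStepsAsAttributes, getExplainScanType, 
getTableName) and the table name are illustrative assumptions, not a confirmed 
API surface.

{code:java}
// Hedged sketch: assert on structured plan attributes rather than a formatted
// EXPLAIN string. Accessor names are assumed for illustration only.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

import org.apache.phoenix.compile.ExplainPlan;
import org.apache.phoenix.compile.ExplainPlanAttributes;
import org.apache.phoenix.jdbc.PhoenixStatement;

public class ExplainPlanAttributesExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
                Statement stmt = conn.createStatement()) {
            PhoenixStatement pstmt = stmt.unwrap(PhoenixStatement.class);
            ExplainPlan plan = pstmt.optimizeQuery("SELECT * FROM MY_TABLE").getExplainPlan();
            ExplainPlanAttributes attrs = plan.getPlanStepsAsAttributes(); // assumed accessor
            System.out.println(attrs.getExplainScanType()); // e.g. "FULL SCAN" (assumed getter)
            System.out.println(attrs.getTableName());       // physical table name (assumed getter)
        }
    }
}
{code}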

> [UMBRELLA] Phoenix Test should use object-based Plan for result comparison 
> instead of using hard-coded comparison
> --
>
> Key: PHOENIX-5265
> URL: https://issues.apache.org/jira/browse/PHOENIX-5265
> Project: Phoenix
>  Issue Type: Improvement

[jira] [Updated] (PHOENIX-5265) [UMBRELLA] Phoenix Test should use object-based Plan for result comparison instead of using hard-coded comparison

2021-01-04 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-5265:
--
Release Note: 
New API for Explain plan queries that can be used for comparison of individual 
plan attributes.


  was:New API for Explain plan queries that can be used for comparison of 
individual plan attributes.


> [UMBRELLA] Phoenix Test should use object-based Plan for result comparison 
> instead of using hard-coded comparison
> --
>
> Key: PHOENIX-5265
> URL: https://issues.apache.org/jira/browse/PHOENIX-5265
> Project: Phoenix
>  Issue Type: Improvement

[jira] [Updated] (PHOENIX-6282) Generate PB files inline with build and remove checked in files

2021-01-04 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6282?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-6282:
--
Release Note: Generated protobuf Java files are no longer checked into the 
source code; they are now generated inline with the mvn build. We have also 
used an optimization with the plugin to ensure protoc is not invoked by the 
mvn build if no .proto file has been updated between two consecutive builds.

> Generate PB files inline with build and remove checked in files
> ---
>
> Key: PHOENIX-6282
> URL: https://issues.apache.org/jira/browse/PHOENIX-6282
> Project: Phoenix
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
>
> We can use a new PB maven plugin to generate PB files during the build rather 
> than checking the generated PB files (~2 MB as of now) into source code. The 
> plugin also provides an optimization for protoc invocation: with the 
> checkStaleness property, protoc will not be invoked if no .proto file has 
> changed, so only a fresh build, or one with changed .proto files, will invoke 
> protoc and generate the PB files inline.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6293) PHOENIX-6193 breaks projects depending on the phoenix-client artifact

2021-01-04 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-6293:


 Summary: PHOENIX-6193 breaks projects depending on the 
phoenix-client artifact
 Key: PHOENIX-6293
 URL: https://issues.apache.org/jira/browse/PHOENIX-6293
 Project: Phoenix
  Issue Type: Bug
  Components: core
Affects Versions: 5.1.0, 4.16.0
Reporter: Josh Elser
Assignee: Istvan Toth


PHOENIX-6193 has added the phoenix-client-parent module to consolidate the 
common parameters for the phoenix-client variants.
However, phoenix-client-parent is not explicitly listed as a module in the root 
pom, so it doesn't get installed/deployed, and dependent projects cannot 
resolve the phoenix-client because of this.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6293) PHOENIX-6193 breaks projects depending on the phoenix-client artifact

2021-01-04 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-6293:
-
Description: 
PHOENIX-6193 has added the phoenix-client-parent module to consolidate the 
common parameters for the phoenix-client variants.
However, phoenix-client-parent is not explicitly listed as a module in the root 
pom, so it doesn't get installed/deployed, and dependent projects cannot 
resolve the phoenix-client artifact because of this.

  was:
PHOENIX-6193 has added the phoenix-client-parent module to consolidate the 
common parameters for the phoenix-client variants.
However, phoenix-client-parent is not explicitly listed as a module in the root 
pom, so it doesn't get installed/deployed, and dependent projects cannot 
resolve the phoenix-client because of this.


> PHOENIX-6193 breaks projects depending on the phoenix-client artifact
> -
>
> Key: PHOENIX-6293
> URL: https://issues.apache.org/jira/browse/PHOENIX-6293
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.0, 4.16.0
>Reporter: Josh Elser
>Assignee: Istvan Toth
>Priority: Blocker
>
> PHOENIX-6193 has added the phoenix-client-parent module to consolidate the 
> common parameters for the phoenix-client variants.
> However, phoenix-client-parent is not explicitly listed as a module in the 
> root pom, so it doesn't get installed/deployed, and dependent projects cannot 
> resolve the phoenix-client artifact because of this.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6294) javax.servlet relocation added by PHOENIX-6151 breaks PQS

2021-01-04 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-6294:


 Summary: javax.servlet relocation added by PHOENIX-6151 breaks PQS
 Key: PHOENIX-6294
 URL: https://issues.apache.org/jira/browse/PHOENIX-6294
 Project: Phoenix
  Issue Type: Bug
  Components: queryserver
Affects Versions: queryserver-6.0.0
Reporter: Josh Elser
Assignee: Istvan Toth


PHOENIX-6151 relocates the javax.servlet package, which Avatica does not relocate.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6294) javax.servlet relocation added by PHOENIX-6151 breaks PQS

2021-01-04 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-6294:
-
Priority: Blocker  (was: Major)

> javax.servlet relocation added by PHOENIX-6151 breaks PQS
> -
>
> Key: PHOENIX-6294
> URL: https://issues.apache.org/jira/browse/PHOENIX-6294
> Project: Phoenix
>  Issue Type: Bug
>  Components: queryserver
>Affects Versions: queryserver-6.0.0
>Reporter: Josh Elser
>Assignee: Istvan Toth
>Priority: Blocker
>
> PHOENIX-6151 relocates the javax.servlet package, which Avatica does not relocate.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-6295) Fix non-static inner classes for better memory management

2021-01-04 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned PHOENIX-6295:
-

Assignee: Viraj Jasani

> Fix non-static inner classes for better memory management
> -
>
> Key: PHOENIX-6295
> URL: https://issues.apache.org/jira/browse/PHOENIX-6295
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>
> If an inner class does not need to reference its enclosing instance, it can 
> be static. This prevents a common cause of memory leaks and uses less memory 
> per instance of the class.
> Came across StatsDeleteHandler, a non-static inner class defined in 
> MetaDataEndpointImpl that does not use any state of its enclosing 
> MetaDataEndpointImpl instance. Taking this opportunity to find other 
> non-static inner classes that do not need a reference to their respective 
> enclosing instances.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6295) Fix non-static inner classes for better memory management

2021-01-04 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-6295:
--
Fix Version/s: 4.16.0
   5.1.0

> Fix non-static inner classes for better memory management
> -
>
> Key: PHOENIX-6295
> URL: https://issues.apache.org/jira/browse/PHOENIX-6295
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
>
> If an inner class does not need to reference its enclosing instance, it can 
> be static. This prevents a common cause of memory leaks and uses less memory 
> per instance of the class.
> Came across StatsDeleteHandler, a non-static inner class defined in 
> MetaDataEndpointImpl that does not use any state of its enclosing 
> MetaDataEndpointImpl instance. Taking this opportunity to find other 
> non-static inner classes that do not need a reference to their respective 
> enclosing instances.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6295) Fix non-static inner classes for better memory management

2021-01-04 Thread Viraj Jasani (Jira)
Viraj Jasani created PHOENIX-6295:
-

 Summary: Fix non-static inner classes for better memory management
 Key: PHOENIX-6295
 URL: https://issues.apache.org/jira/browse/PHOENIX-6295
 Project: Phoenix
  Issue Type: Bug
Reporter: Viraj Jasani


If an inner class does not need to reference its enclosing instance, it can be 
static. This prevents a common cause of memory leaks and uses less memory per 
instance of the class.

Came across StatsDeleteHandler, a non-static inner class defined in 
MetaDataEndpointImpl that does not use any state of its enclosing 
MetaDataEndpointImpl instance. Taking this opportunity to find other non-static 
inner classes that do not need a reference to their respective enclosing 
instances.
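
As a hedged illustration of the refactor described above (the class names are 
made up and are not the actual Phoenix classes):

{code:java}
// Illustrative names only; not the actual Phoenix code.
public class Enclosing {

    // Before: a non-static inner class keeps an implicit reference to the
    // Enclosing instance even though it never uses it, which can keep the
    // enclosing object alive longer than necessary and adds a hidden field
    // to every Handler instance.
    class Handler {
        void handle() {
            // never touches Enclosing.this
        }
    }

    // After: a static nested class carries no hidden reference to Enclosing.
    static class StaticHandler {
        void handle() {
        }
    }
}
{code}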



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6295) Fix non-static inner classes for better memory management

2021-01-04 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated PHOENIX-6295:
--
Affects Version/s: 5.0.0
   4.15.0

> Fix non-static inner classes for better memory management
> -
>
> Key: PHOENIX-6295
> URL: https://issues.apache.org/jira/browse/PHOENIX-6295
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Viraj Jasani
>Priority: Major
>
> If an inner class does not need to reference its enclosing instance, it can 
> be static. This prevents a common cause of memory leaks and uses less memory 
> per instance of the class.
> Came across StatsDeleteHandler, a non-static inner class defined in 
> MetaDataEndpointImpl that does not use any state of its enclosing 
> MetaDataEndpointImpl instance. Taking this opportunity to find other 
> non-static inner classes that do not need a reference to their respective 
> enclosing instances.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6296) Synchronize @Parameters, @BeforeClass and @AfterClass methods take 2

2021-01-04 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-6296:


 Summary: Synchronize @Parameters, @BeforeClass and @AfterClass 
methods take 2
 Key: PHOENIX-6296
 URL: https://issues.apache.org/jira/browse/PHOENIX-6296
 Project: Phoenix
  Issue Type: Bug
  Components: core
Affects Versions: 5.1.0, 4.16.0
Reporter: Istvan Toth
Assignee: Istvan Toth


The work done for PHOENIX-5554 has been undone; we have a lot of unsynchronized 
methods again.
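
As a hedged sketch of the pattern being restored (the test class name below is 
illustrative, not an actual Phoenix test): making these static JUnit hooks 
synchronized serializes them when test classes run concurrently against shared 
mini-cluster state.

{code:java}
// Illustrative test class; only the synchronized modifiers on the static
// @Parameters/@BeforeClass/@AfterClass hooks are the point of this sketch.
import java.util.Arrays;
import java.util.Collection;

import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

@RunWith(Parameterized.class)
public class ExampleIT {

    public ExampleIT(boolean columnEncoded) {
    }

    @Parameterized.Parameters(name = "columnEncoded={0}")
    public static synchronized Collection<Object[]> data() {
        return Arrays.asList(new Object[][] { { true }, { false } });
    }

    @BeforeClass
    public static synchronized void doSetup() throws Exception {
        // bring up shared test resources
    }

    @AfterClass
    public static synchronized void doTeardown() throws Exception {
        // tear down shared test resources
    }

    @Test
    public void testSomething() {
        // test body
    }
}
{code}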



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-6294) javax.servlet relocation added by PHOENIX-6151 breaks PQS

2021-01-04 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth resolved PHOENIX-6294.
--
Fix Version/s: queryserver-6.0.0
   Resolution: Fixed

Committed.
Thanks for the patch [~elserj].

> javax.servlet relocation added by PHOENIX-6151 breaks PQS
> -
>
> Key: PHOENIX-6294
> URL: https://issues.apache.org/jira/browse/PHOENIX-6294
> Project: Phoenix
>  Issue Type: Bug
>  Components: queryserver
>Affects Versions: queryserver-6.0.0
>Reporter: Josh Elser
>Assignee: Istvan Toth
>Priority: Blocker
> Fix For: queryserver-6.0.0
>
>
> PHOENIX-6151 relocates the javax.servlet package, which Avatica does not relocate.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-6253) Change shaded connector jar naming convention

2021-01-04 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth resolved PHOENIX-6253.
--
Fix Version/s: connectors-6.0.0
   Resolution: Fixed

Committed.
Thanks for the review [~elserj].

> Change shaded connector jar naming convention
> -
>
> Key: PHOENIX-6253
> URL: https://issues.apache.org/jira/browse/PHOENIX-6253
> Project: Phoenix
>  Issue Type: Improvement
>  Components: connectors
>Affects Versions: connectors-6.0.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Fix For: connectors-6.0.0
>
>
> Currently we distinguish the shaded and unshaded connector JARs by moving the 
> version around, i.e. phoenix5-hive-6.0.0.jar is the unshaded JAR, while 
> phoenix5-6.0.0-hive.jar is the shaded one.
> This is unintuitive, and we have already dropped this convention in core.
> We could use something like:
> -phoenix5-spark-6.0.0.jar: the unshaded connector JAR-
> -phoenix5-spark-connector-6.0.0.jar : the default shaded connector JAR-
> -phoenix5-spark-connector-byo-hbase-6.0.0.jar : alternative shaded connector 
> JAR.-
> phoenix5-spark-6.0.0.jar: the unshaded connector JAR
> phoenix5-spark-6.0.0-shaded.jar : the default shaded connector JAR
> phoenix5-spark-6.0.0-shaded-byo-hbase.jar : alternative shaded connector JAR.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-6296) Synchronize @Parameters, @BeforeClass and @AfterClass methods take 2

2021-01-04 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth resolved PHOENIX-6296.
--
Fix Version/s: 4.16.0
   5.1.0
   Resolution: Fixed

Committed to master and 4.x
Thanks for the reviews [~vjasani] and [~gjacoby].

Unfortunately, as the Yetus run for master had a startup error, it seems that 
this doesn't fix the miniCluster startup issue.

> Synchronize @Parameters, @BeforeClass and @AfterClass methods take 2
> 
>
> Key: PHOENIX-6296
> URL: https://issues.apache.org/jira/browse/PHOENIX-6296
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.0, 4.16.0
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Critical
> Fix For: 5.1.0, 4.16.0
>
>
> The work done for PHOENIX-5554 has been undone; we have a lot of 
> unsynchronized methods again.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[ANNOUNCE] New Phoenix committer Richárd Antal

2021-01-04 Thread Ankit Singhal
On behalf of the Apache Phoenix PMC, I'm pleased to announce that Richárd
Antal
has accepted the PMC's invitation to become a committer on Apache Phoenix.

We appreciate all of the great contributions Richárd has made to the
community thus far and we look forward to his continued involvement.

Congratulations and welcome, Richárd Antal!


Re: [ANNOUNCE] New Phoenix committer Richárd Antal

2021-01-04 Thread Geoffrey Jacoby
Welcome, Richard, and congratulations!

Geoffrey

On Mon, Jan 4, 2021 at 2:31 PM Ankit Singhal  wrote:

> On behalf of the Apache Phoenix PMC, I'm pleased to announce that Richárd
> Antal
> has accepted the PMC's invitation to become a committer on Apache Phoenix.
>
> We appreciate all of the great contributions Richárd has made to the
> community thus far and we look forward to his continued involvement.
>
> Congratulations and welcome, Richárd Antal!
>


[jira] [Created] (PHOENIX-6297) Fix IndexMetadataIT.testAsyncRebuildAll test flapper

2021-01-04 Thread Xinyi Yan (Jira)
Xinyi Yan created PHOENIX-6297:
--

 Summary: Fix IndexMetadataIT.testAsyncRebuildAll test flapper
 Key: PHOENIX-6297
 URL: https://issues.apache.org/jira/browse/PHOENIX-6297
 Project: Phoenix
  Issue Type: Test
Reporter: Xinyi Yan
Assignee: Xinyi Yan
 Fix For: 4.16.0


Based on the Jenkins log, we need to fix it before the 4.16 release.
{code:java}
Error Message: expected:<[COMPLE]TED> but was:<[STAR]TED>

Stacktrace:
org.junit.ComparisonFailure: expected:<[COMPLE]TED> but was:<[STAR]TED>
    at org.junit.Assert.assertEquals(Assert.java:117)
    at org.junit.Assert.assertEquals(Assert.java:146)
    at org.apache.phoenix.end2end.DropTableWithViewsIT.assertTaskColumns(DropTableWithViewsIT.java:184)
    at org.apache.phoenix.end2end.index.IndexMetadataIT.testAsyncRebuildAll(IndexMetadataIT.java:691)
{code}
 
{code:java}
Error Message: Ran out of time waiting for index state to become ACTIVE last seen actual state is BUILDING

Stacktrace:
java.lang.AssertionError: Ran out of time waiting for index state to become ACTIVE last seen actual state is BUILDING
    at org.junit.Assert.fail(Assert.java:89)
    at org.apache.phoenix.util.TestUtil.waitForIndexState(TestUtil.java:1081)
    at org.apache.phoenix.end2end.index.IndexMetadataIT.testAsyncRebuildAll(IndexMetadataIT.java:688)
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6298) Use timestamp of PENDING_DISABLE_COUNT to calculate elapse time for PENDING_DISABLE state

2021-01-04 Thread Ankit Singhal (Jira)
Ankit Singhal created PHOENIX-6298:
--

 Summary: Use timestamp of PENDING_DISABLE_COUNT to calculate 
elapse time for PENDING_DISABLE state
 Key: PHOENIX-6298
 URL: https://issues.apache.org/jira/browse/PHOENIX-6298
 Project: Phoenix
  Issue Type: Bug
Reporter: Ankit Singhal


Instead of taking indexDisableTimestamp to calculate the elapsed time, we should 
consider the last time we incremented/decremented the PENDING_DISABLE_COUNT 
counter. Otherwise, if the application's write failures span more than the 
default threshold of 30 seconds, the index gets disabled unnecessarily even 
though the client could have retried and made it active.
{code}
long elapsedSinceDisable = EnvironmentEdgeManager.currentTimeMillis() - Math.abs(indexDisableTimestamp);

// on an index write failure, the server side transitions to PENDING_DISABLE, then the client
// retries, and after retries are exhausted, disables the index
if (indexState == PIndexState.PENDING_DISABLE) {
    if (elapsedSinceDisable > pendingDisableThreshold) {
        // too long in PENDING_DISABLE - client didn't disable the index, so we do it here
        IndexUtil.updateIndexState(conn, indexTableFullName, PIndexState.DISABLE, indexDisableTimestamp);
    }
    continue;
}
{code}
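
A hedged sketch of the proposed direction (not a committed patch): derive the 
elapsed time from the last PENDING_DISABLE_COUNT increment/decrement instead of 
from indexDisableTimestamp. The helper name used below is a hypothetical 
placeholder, not an existing Phoenix method.

{code:java}
// Hypothetical helper name; the point is only which timestamp the elapsed
// time is measured from.
long pendingDisableCountLastUpdated =
        getPendingDisableCountLastUpdatedTimestamp(conn, indexTableFullName); // hypothetical
long elapsedSinceDisable =
        EnvironmentEdgeManager.currentTimeMillis() - Math.abs(pendingDisableCountLastUpdated);

if (indexState == PIndexState.PENDING_DISABLE
        && elapsedSinceDisable > pendingDisableThreshold) {
    // The client has not touched the counter for too long, so disable the index here.
    IndexUtil.updateIndexState(conn, indexTableFullName, PIndexState.DISABLE, indexDisableTimestamp);
}
{code}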



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6297) Fix IndexMetadataIT.testAsyncRebuildAll test flapper

2021-01-04 Thread Xinyi Yan (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-6297:
---
Attachment: PHOENIX-6297.master.patch

> Fix IndexMetadataIT.testAsyncRebuildAll test flapper
> 
>
> Key: PHOENIX-6297
> URL: https://issues.apache.org/jira/browse/PHOENIX-6297
> Project: Phoenix
>  Issue Type: Test
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Major
> Fix For: 4.16.0
>
> Attachments: PHOENIX-6297.master.patch
>
>
> Based on the Jenkins log, we need to fix it before the 4.16 release.
> {code:java}
> Error Message: expected:<[COMPLE]TED> but was:<[STAR]TED>
>
> Stacktrace:
> org.junit.ComparisonFailure: expected:<[COMPLE]TED> but was:<[STAR]TED>
>     at org.junit.Assert.assertEquals(Assert.java:117)
>     at org.junit.Assert.assertEquals(Assert.java:146)
>     at org.apache.phoenix.end2end.DropTableWithViewsIT.assertTaskColumns(DropTableWithViewsIT.java:184)
>     at org.apache.phoenix.end2end.index.IndexMetadataIT.testAsyncRebuildAll(IndexMetadataIT.java:691)
> {code}
>  
> {code:java}
> Error Message: Ran out of time waiting for index state to become ACTIVE last seen actual state is BUILDING
>
> Stacktrace:
> java.lang.AssertionError: Ran out of time waiting for index state to become ACTIVE last seen actual state is BUILDING
>     at org.junit.Assert.fail(Assert.java:89)
>     at org.apache.phoenix.util.TestUtil.waitForIndexState(TestUtil.java:1081)
>     at org.apache.phoenix.end2end.index.IndexMetadataIT.testAsyncRebuildAll(IndexMetadataIT.java:688)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6297) Fix IndexMetadataIT.testAsyncRebuildAll test flapper

2021-01-04 Thread Xinyi Yan (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xinyi Yan updated PHOENIX-6297:
---
Attachment: PHOENIX-6297.4.x.patch

> Fix IndexMetadataIT.testAsyncRebuildAll test flapper
> 
>
> Key: PHOENIX-6297
> URL: https://issues.apache.org/jira/browse/PHOENIX-6297
> Project: Phoenix
>  Issue Type: Test
>Reporter: Xinyi Yan
>Assignee: Xinyi Yan
>Priority: Major
> Fix For: 4.16.0
>
> Attachments: PHOENIX-6297.4.x.patch, PHOENIX-6297.master.patch
>
>
> Based on the Jenkins log, we need to fix it before the 4.16 release.
> {code:java}
> Error Message: expected:<[COMPLE]TED> but was:<[STAR]TED>
>
> Stacktrace:
> org.junit.ComparisonFailure: expected:<[COMPLE]TED> but was:<[STAR]TED>
>     at org.junit.Assert.assertEquals(Assert.java:117)
>     at org.junit.Assert.assertEquals(Assert.java:146)
>     at org.apache.phoenix.end2end.DropTableWithViewsIT.assertTaskColumns(DropTableWithViewsIT.java:184)
>     at org.apache.phoenix.end2end.index.IndexMetadataIT.testAsyncRebuildAll(IndexMetadataIT.java:691)
> {code}
>  
> {code:java}
> Error Message: Ran out of time waiting for index state to become ACTIVE last seen actual state is BUILDING
>
> Stacktrace:
> java.lang.AssertionError: Ran out of time waiting for index state to become ACTIVE last seen actual state is BUILDING
>     at org.junit.Assert.fail(Assert.java:89)
>     at org.apache.phoenix.util.TestUtil.waitForIndexState(TestUtil.java:1081)
>     at org.apache.phoenix.end2end.index.IndexMetadataIT.testAsyncRebuildAll(IndexMetadataIT.java:688)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-6293) PHOENIX-6193 breaks projects depending on the phoenix-client artifact

2021-01-04 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth resolved PHOENIX-6293.
--
Fix Version/s: 4.16.0
   5.1.0
   Resolution: Fixed

Committed to master and 4.x.
Thanks for the review [~vjasani] and [~elserj]

> PHOENIX-6193 breaks projects depending on the phoenix-client artifact
> -
>
> Key: PHOENIX-6293
> URL: https://issues.apache.org/jira/browse/PHOENIX-6293
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.1.0, 4.16.0
>Reporter: Josh Elser
>Assignee: Istvan Toth
>Priority: Blocker
> Fix For: 5.1.0, 4.16.0
>
>
> PHOENIX-6193 has added the phoenix-client-parent module to consolidate the 
> common parameters for the phoenix-client variants.
> However, phoenix-client-parent is not explicitly listed as a module in the 
> root pom, so it doesn't get installed/deployed, and dependent projects cannot 
> resolve the phoenix-client artifact because of this.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6050) Set properties is invalid in client

2021-01-04 Thread Chao Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Wang updated PHOENIX-6050:
---
Attachment: PHOENIX-6050.master.002.patch

> Set properties is invalid in client
> ---
>
> Key: PHOENIX-6050
> URL: https://issues.apache.org/jira/browse/PHOENIX-6050
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 4.13.1, 5.0.0
> Environment: phoenix 4.13.1
> hbase 1.3.1
>Reporter: Chao Wang
>Assignee: Chao Wang
>Priority: Major
> Fix For: 5.1.0
>
> Attachments: PHOENIX-6050-4.13-HBase-1.3.patch, 
> PHOENIX-6050.master.001.patch, PHOENIX-6050.master.002.patch
>
>
> I set the property "phoenix.query.threadPoolSize" on the client, but it has no 
> effect: the thread pool always uses the default size (128).
> The code is:
> Properties properties = new Properties();
>  properties.setProperty("phoenix.query.threadPoolSize", "300");
>  PropertiesResolve phoenixpr = new PropertiesResolve();
>  String phoenixdriver = phoenixpr.readMapByKey("com/main/SyncData.properties", "phoenix_driver");
>  String phoenixjdbc = phoenixpr.readMapByKey("com/main/SyncData.properties", "phoenix_jdbc");
>  Class.forName(phoenixdriver);
>  return DriverManager.getConnection(phoenixjdbc, properties);
> The exception thrown is:
> Error: Task org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 rejected 
> from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 128, 
> active threads = 128, queued tasks = 5000, completed tasks = 36647] (state=08000,code=101)
> org.apache.phoenix.exception.PhoenixIOException: Task 
> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask@6e91893 rejected 
> from org.apache.phoenix.job.JobManager$1@26ae880a[Running, pool size = 128, 
> active threads = 128, queued tasks = 5000, completed tasks = 36647]
>     at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:120)
>     at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:1024)
>     at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:916)
> Reason:
> I found that PhoenixDriver creates the thread pool before initializing the 
> configuration from the supplied properties, so when the thread pool is created 
> the config still has the default values.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)