[jira] [Created] (PHOENIX-6492) Validate SQL with Minicluster before Synthesizing with SchemaTool
Swaroopa Kadam created PHOENIX-6492:
---------------------------------------

             Summary: Validate SQL with Minicluster before Synthesizing with SchemaTool
                 Key: PHOENIX-6492
                 URL: https://issues.apache.org/jira/browse/PHOENIX-6492
             Project: Phoenix
          Issue Type: Improvement
            Reporter: Swaroopa Kadam
            Assignee: Swaroopa Kadam
             Fix For: 4.17.0, 5.2.0


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
[jira] [Resolved] (PHOENIX-6454) Add feature to SchemaTool to get the DDL in specification mode
[ https://issues.apache.org/jira/browse/PHOENIX-6454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swaroopa Kadam resolved PHOENIX-6454.
-------------------------------------
    Resolution: Fixed

> Add feature to SchemaTool to get the DDL in specification mode
> --------------------------------------------------------------
>
>                 Key: PHOENIX-6454
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6454
>             Project: Phoenix
>          Issue Type: Improvement
>            Reporter: Swaroopa Kadam
>            Assignee: Swaroopa Kadam
>            Priority: Major
>             Fix For: 4.17.0, 5.2.0
>
> Currently, SchemaExtractionTool uses the PTable representation to get the
> effective DDL on the cluster.
> Rename SchemaExtractionTool to SchemaTool and add a feature that accepts
> CREATE DDL and ALTER DDL and gives the effective DDL without using the
> PTable implementation.
[jira] [Updated] (PHOENIX-6467) Support extract/synth indexes on expression and function in SchemaTool
[ https://issues.apache.org/jira/browse/PHOENIX-6467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swaroopa Kadam updated PHOENIX-6467:
------------------------------------
    Summary: Support extract/synth indexes on expression and function in SchemaTool  (was: Support indexes on expression and function in SchemaTool)

> Support extract/synth indexes on expression and function in SchemaTool
> ----------------------------------------------------------------------
>
>                 Key: PHOENIX-6467
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6467
>             Project: Phoenix
>          Issue Type: Improvement
>            Reporter: Swaroopa Kadam
>            Assignee: Swaroopa Kadam
>            Priority: Major
>             Fix For: 5.1.1, 4.17.0, 4.16.2
>
> Currently, SchemaTool supports basic indexes in SYNTH and EXTRACT mode.
> It should also support indexes where the PK is on a column expression or
> function.
[jira] [Created] (PHOENIX-6469) Support CREATE SEQUENCE and CREATE FUNCTION DDL in SchemaTool
Swaroopa Kadam created PHOENIX-6469:
---------------------------------------

             Summary: Support CREATE SEQUENCE and CREATE FUNCTION DDL in SchemaTool
                 Key: PHOENIX-6469
                 URL: https://issues.apache.org/jira/browse/PHOENIX-6469
             Project: Phoenix
          Issue Type: Improvement
            Reporter: Swaroopa Kadam
            Assignee: Swaroopa Kadam

Support CREATE SEQUENCE and CREATE FUNCTION DDL in SchemaTool.
[jira] [Created] (PHOENIX-6468) Provide verify mode in SchemaTool to compare SQL on cluster with provided SQL
Swaroopa Kadam created PHOENIX-6468:
---------------------------------------

             Summary: Provide verify mode in SchemaTool to compare SQL on cluster with provided SQL
                 Key: PHOENIX-6468
                 URL: https://issues.apache.org/jira/browse/PHOENIX-6468
             Project: Phoenix
          Issue Type: Improvement
            Reporter: Swaroopa Kadam
            Assignee: Swaroopa Kadam

SchemaTool should accept a DDL statement to compare against one extracted from
the cluster. This comparison will help monitor the schema on the cluster
effectively.
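At its core, a verify mode like the one PHOENIX-6468 describes reduces to normalizing the provided DDL and the extracted DDL and comparing the results. A minimal, self-contained sketch of that idea; the `DdlVerifier` class and its methods are hypothetical illustrations, not SchemaTool's actual API:

```java
import java.util.Locale;

// Hypothetical sketch: treat two DDL strings as equivalent when they differ
// only in case and whitespace. Real SQL comparison would need to handle
// quoted identifiers and string literals separately.
public class DdlVerifier {

    // Collapse runs of whitespace and upper-case the statement.
    static String normalize(String ddl) {
        return ddl.trim().replaceAll("\\s+", " ").toUpperCase(Locale.ROOT);
    }

    // True when the provided DDL matches the DDL extracted from the cluster.
    static boolean matches(String providedDdl, String extractedDdl) {
        return normalize(providedDdl).equals(normalize(extractedDdl));
    }

    public static void main(String[] args) {
        String provided  = "create table test.t (k varchar primary key,  v1 varchar)";
        String extracted = "CREATE TABLE TEST.T (K VARCHAR PRIMARY KEY, V1 VARCHAR)";
        System.out.println(matches(provided, extracted)); // prints "true"
    }
}
```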
[jira] [Updated] (PHOENIX-6467) Support indexes on expression and function in SchemaTool
[ https://issues.apache.org/jira/browse/PHOENIX-6467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swaroopa Kadam updated PHOENIX-6467:
------------------------------------
    Fix Version/s: 4.16.2
                   4.17.0
                   5.1.1

> Support indexes on expression and function in SchemaTool
> --------------------------------------------------------
>
>                 Key: PHOENIX-6467
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6467
>
[jira] [Created] (PHOENIX-6467) Support indexes on expression and function in SchemaTool
Swaroopa Kadam created PHOENIX-6467:
---------------------------------------

             Summary: Support indexes on expression and function in SchemaTool
                 Key: PHOENIX-6467
                 URL: https://issues.apache.org/jira/browse/PHOENIX-6467
             Project: Phoenix
          Issue Type: Improvement
            Reporter: Swaroopa Kadam
            Assignee: Swaroopa Kadam

Currently, SchemaTool supports basic indexes in SYNTH and EXTRACT mode. It
should also support indexes where the PK is on a column expression or function.
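The distinction PHOENIX-6467 cares about is between an index key that is a bare column reference and one that is an expression or function call (e.g. `CREATE INDEX idx ON t (UPPER(v1)) INCLUDE (v2)`). A toy sketch of that classification; the regex heuristic is illustrative only, since SchemaTool would rely on Phoenix's real SQL parser:

```java
// Sketch: classify an indexed key as a plain column or an expression/function,
// the case EXTRACT/SYNTH mode needs to handle. Heuristic only: a bare column
// reference is a single (optionally schema-qualified) identifier; anything
// with parentheses or operators is an expression.
public class IndexKeyKind {

    static boolean isExpression(String indexedKey) {
        return !indexedKey.trim()
            .matches("[A-Za-z_][A-Za-z0-9_]*(\\.[A-Za-z_][A-Za-z0-9_]*)?");
    }

    public static void main(String[] args) {
        System.out.println(isExpression("v1"));        // false: plain column
        System.out.println(isExpression("UPPER(v1)")); // true: function call
        System.out.println(isExpression("v1 || v2"));  // true: expression
    }
}
```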
[jira] [Resolved] (PHOENIX-6271) Effective DDL generated by SchemaExtractionTool should maintain the order of PK and other columns
[ https://issues.apache.org/jira/browse/PHOENIX-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swaroopa Kadam resolved PHOENIX-6271.
-------------------------------------
    Resolution: Fixed

> Effective DDL generated by SchemaExtractionTool should maintain the order of
> PK and other columns
> ----------------------------------------------------------------------------
>
>                 Key: PHOENIX-6271
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6271
>             Project: Phoenix
>          Issue Type: Improvement
>            Reporter: Swaroopa Kadam
>            Assignee: Swaroopa Kadam
>            Priority: Minor
>             Fix For: 5.1.1, 4.16.1
>
> SchemaExtractionTool is used to generate effective DDL which can then be
> compared with the DDL on the cluster to perform schema monitoring.
> This won't affect the monitoring part, but it would be good to have the PK
> order in place so that the effective DDL can be used for creating the entity
> for the first time in a new environment.
[jira] [Updated] (PHOENIX-6460) Improve the optimizer to consider all plans and prefer regular over reverse scans
[ https://issues.apache.org/jira/browse/PHOENIX-6460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swaroopa Kadam updated PHOENIX-6460:
------------------------------------
    Description:
In QueryOptimizerTest, the following test fails, as it uses IDX over the view:

{code:java}
@Test
public void testChooseIndexOverTable_diffOrder() throws Exception {
    Connection conn = DriverManager.getConnection(getUrl());
    conn.createStatement().execute("CREATE TABLE t (k INTEGER NOT NULL, v1 VARCHAR NOT NULL, v2 VARCHAR CONSTRAINT"
        + " PK PRIMARY KEY (k, v1)) IMMUTABLE_ROWS=true");
    conn.createStatement().execute("CREATE VIEW v(v4 VARCHAR) AS SELECT * FROM t");
    conn.createStatement().execute("CREATE INDEX idx ON v(v1 DESC) INCLUDE (v4)");
    PhoenixStatement stmt = conn.createStatement().unwrap(PhoenixStatement.class);
    QueryPlan plan = stmt.optimizeQuery("SELECT v1 FROM v ORDER BY v1");
    assertEquals("V", plan.getTableRef().getTable().getTableName().getString());
}
{code}

> Improve the optimizer to consider all plans and prefer regular over reverse
> scans
> ---------------------------------------------------------------------------
>
>                 Key: PHOENIX-6460
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6460
>             Project: Phoenix
>          Issue Type: Improvement
>            Reporter: Swaroopa Kadam
>            Priority: Major
>             Fix For: 5.1.1, 4.17.0
>
[jira] [Created] (PHOENIX-6460) Improve the optimizer to consider all plans and prefer regular over reverse scans
Swaroopa Kadam created PHOENIX-6460:
---------------------------------------

             Summary: Improve the optimizer to consider all plans and prefer regular over reverse scans
                 Key: PHOENIX-6460
                 URL: https://issues.apache.org/jira/browse/PHOENIX-6460
             Project: Phoenix
          Issue Type: Improvement
            Reporter: Swaroopa Kadam
             Fix For: 5.1.1, 4.17.0

In QueryOptimizerTest, the following test fails, as it uses IDX over the view:

{code:java}
@Test
public void testChooseIndexOverTable_diffOrder() throws Exception {
    Connection conn = DriverManager.getConnection(getUrl());
    conn.createStatement().execute("CREATE TABLE t (k INTEGER NOT NULL, v1 VARCHAR NOT NULL, v2 VARCHAR CONSTRAINT"
        + " PK PRIMARY KEY (k, v1)) IMMUTABLE_ROWS=true");
    conn.createStatement().execute("CREATE VIEW v(v4 VARCHAR) AS SELECT * FROM t");
    conn.createStatement().execute("CREATE INDEX idx ON v(v1 DESC) INCLUDE (v4)");
    PhoenixStatement stmt = conn.createStatement().unwrap(PhoenixStatement.class);
    QueryPlan plan = stmt.optimizeQuery("SELECT v1 FROM v ORDER BY v1");
    assertEquals("V", plan.getTableRef().getTable().getTableName().getString());
}
{code}
[jira] [Updated] (PHOENIX-6454) Add feature to SchemaTool to get the DDL in specification mode
[ https://issues.apache.org/jira/browse/PHOENIX-6454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swaroopa Kadam updated PHOENIX-6454:
------------------------------------
    Fix Version/s: 5.2.0
                   4.17.0

> Add feature to SchemaTool to get the DDL in specification mode
> --------------------------------------------------------------
>
>                 Key: PHOENIX-6454
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6454
>
[jira] [Created] (PHOENIX-6454) Add feature to SchemaTool to get the DDL in specification mode
Swaroopa Kadam created PHOENIX-6454:
---------------------------------------

             Summary: Add feature to SchemaTool to get the DDL in specification mode
                 Key: PHOENIX-6454
                 URL: https://issues.apache.org/jira/browse/PHOENIX-6454
             Project: Phoenix
          Issue Type: Improvement
            Reporter: Swaroopa Kadam
            Assignee: Swaroopa Kadam

Currently, SchemaExtractionTool uses the PTable representation to get the
effective DDL on the cluster.
Rename SchemaExtractionTool to SchemaTool and add a feature that accepts
CREATE DDL and ALTER DDL and gives the effective DDL without using the
PTable implementation.
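Conceptually, "specification mode" folds a CREATE definition and a sequence of ALTER changes into one effective schema, without consulting the cluster's PTable metadata. A minimal sketch of that folding, assuming columns have already been split out of the statements; the `EffectiveSchema` class and its method names are hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical sketch: accumulate the columns of a CREATE statement, apply
// ALTER ... ADD / DROP COLUMN changes in order, and emit the effective DDL.
// No cluster metadata (PTable) is involved.
public class EffectiveSchema {
    private final String table;
    private final Map<String, String> columns = new LinkedHashMap<>(); // keeps declaration order

    EffectiveSchema(String table) {
        this.table = table;
    }

    EffectiveSchema add(String name, String type) {   // CREATE column or ALTER ... ADD
        columns.put(name, type);
        return this;
    }

    EffectiveSchema drop(String name) {               // ALTER ... DROP COLUMN
        columns.remove(name);
        return this;
    }

    String effectiveDdl() {
        String cols = columns.entrySet().stream()
            .map(e -> e.getKey() + " " + e.getValue())
            .collect(Collectors.joining(", "));
        return "CREATE TABLE " + table + " (" + cols + ")";
    }

    public static void main(String[] args) {
        String ddl = new EffectiveSchema("TEST.T")
            .add("K", "VARCHAR PRIMARY KEY")  // from the original CREATE TABLE
            .add("V1", "VARCHAR")
            .add("V2", "INTEGER")             // from ALTER TABLE TEST.T ADD V2 INTEGER
            .drop("V1")                       // from ALTER TABLE TEST.T DROP COLUMN V1
            .effectiveDdl();
        System.out.println(ddl); // CREATE TABLE TEST.T (K VARCHAR PRIMARY KEY, V2 INTEGER)
    }
}
```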
[jira] [Updated] (PHOENIX-6271) Effective DDL generated by SchemaExtractionTool should maintain the order of PK and other columns
[ https://issues.apache.org/jira/browse/PHOENIX-6271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swaroopa Kadam updated PHOENIX-6271:
------------------------------------
    Fix Version/s: 4.16.1
                   5.1.1

> Effective DDL generated by SchemaExtractionTool should maintain the order of
> PK and other columns
> ----------------------------------------------------------------------------
>
>                 Key: PHOENIX-6271
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6271
>
[jira] [Resolved] (PHOENIX-6344) CASCADE on ALTER should NOOP when there are no secondary indexes
[ https://issues.apache.org/jira/browse/PHOENIX-6344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swaroopa Kadam resolved PHOENIX-6344.
-------------------------------------
    Resolution: Implemented

> CASCADE on ALTER should NOOP when there are no secondary indexes
> ----------------------------------------------------------------
>
>                 Key: PHOENIX-6344
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6344
>             Project: Phoenix
>          Issue Type: Improvement
>            Reporter: Swaroopa Kadam
>            Assignee: Swaroopa Kadam
>            Priority: Minor
>             Fix For: 5.1.0, 4.16.1, 4.17.0
>
> When a table/view does not have a secondary index, using CASCADE in the
> ALTER TABLE/VIEW ... ADD statement should continue with the default behavior
> (only adding the column to the table/view, like a regular ALTER statement).
[jira] [Created] (PHOENIX-6345) Effective DDL of Indexes from SchemaExtractionTool should include TTL, replication_scope, KEEP_DELETED_CELLS
Swaroopa Kadam created PHOENIX-6345:
---------------------------------------

             Summary: Effective DDL of Indexes from SchemaExtractionTool should include TTL, replication_scope, KEEP_DELETED_CELLS
                 Key: PHOENIX-6345
                 URL: https://issues.apache.org/jira/browse/PHOENIX-6345
             Project: Phoenix
          Issue Type: Improvement
            Reporter: Swaroopa Kadam
            Assignee: Swaroopa Kadam
             Fix For: 4.16.1, 4.17.0
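The gist of this request is that a generated CREATE INDEX statement should carry the table properties set on the physical index table, such as TTL, REPLICATION_SCOPE and KEEP_DELETED_CELLS. A small sketch of appending such a properties clause; the `IndexDdlProps` helper is hypothetical, not part of SchemaExtractionTool:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical sketch: append HBase/Phoenix table properties to a generated
// CREATE INDEX statement, so the extracted DDL reproduces them on replay.
public class IndexDdlProps {

    static String withProps(String createIndexDdl, Map<String, String> props) {
        if (props.isEmpty()) {
            return createIndexDdl; // nothing to append
        }
        String clause = props.entrySet().stream()
            .map(e -> e.getKey() + "=" + e.getValue())
            .collect(Collectors.joining(", "));
        return createIndexDdl + " " + clause;
    }

    public static void main(String[] args) {
        Map<String, String> props = new LinkedHashMap<>(); // preserves property order
        props.put("TTL", "86400");
        props.put("REPLICATION_SCOPE", "1");
        props.put("KEEP_DELETED_CELLS", "TRUE");
        System.out.println(withProps("CREATE INDEX IDX ON T (V1)", props));
        // CREATE INDEX IDX ON T (V1) TTL=86400, REPLICATION_SCOPE=1, KEEP_DELETED_CELLS=TRUE
    }
}
```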
[jira] [Created] (PHOENIX-6344) CASCADE on ALTER should NOOP when there are no secondary indexes
Swaroopa Kadam created PHOENIX-6344:
---------------------------------------

             Summary: CASCADE on ALTER should NOOP when there are no secondary indexes
                 Key: PHOENIX-6344
                 URL: https://issues.apache.org/jira/browse/PHOENIX-6344
             Project: Phoenix
          Issue Type: Improvement
            Reporter: Swaroopa Kadam
            Assignee: Swaroopa Kadam
             Fix For: 5.1.0, 4.16.1, 4.17.0

When a table/view does not have a secondary index, using CASCADE in the
ALTER TABLE/VIEW ... ADD statement should continue with the default behavior
(only adding the column to the table/view, like a regular ALTER statement).
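The intended behavior can be stated as: with `CASCADE INDEX ALL`, an added column propagates to the secondary indexes, but with no indexes present the statement degrades to a plain ALTER on the base table. A toy sketch of that decision; the `CascadeAlter` class and method are illustrative, not Phoenix's implementation:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch of the NOOP rule: which entities receive the new column
// when ALTER ... ADD runs with or without CASCADE INDEX ALL.
public class CascadeAlter {

    static List<String> targetsForAddColumn(String table, List<String> indexes, boolean cascade) {
        if (!cascade || indexes.isEmpty()) {
            // Regular ALTER semantics: only the base table/view gets the column.
            return Collections.singletonList(table);
        }
        List<String> targets = new ArrayList<>(indexes);
        targets.add(0, table); // base table first, then every secondary index
        return targets;
    }

    public static void main(String[] args) {
        // ALTER TABLE T ADD V3 VARCHAR CASCADE INDEX ALL, with no secondary indexes:
        System.out.println(targetsForAddColumn("T", Collections.emptyList(), true)); // [T]
        // Same statement when an index exists:
        System.out.println(targetsForAddColumn("T", List.of("IDX1"), true)); // [T, IDX1]
    }
}
```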
[jira] [Resolved] (PHOENIX-6148) [SchemaExtractionTool]DDL parsing exception in Phoenix in view name
[ https://issues.apache.org/jira/browse/PHOENIX-6148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swaroopa Kadam resolved PHOENIX-6148.
-------------------------------------
    Resolution: Fixed

> [SchemaExtractionTool]DDL parsing exception in Phoenix in view name
> -------------------------------------------------------------------
>
>                 Key: PHOENIX-6148
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6148
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: Swaroopa Kadam
>            Assignee: Swaroopa Kadam
>            Priority: Major
>             Fix For: 4.16.0
>
>         Attachments: PHOENIX-6148.4.x.add.patch, PHOENIX-6148.4.x.add2.patch,
>                      PHOENIX-6148.4.x.patch, PHOENIX-6148.master.add.patch,
>                      PHOENIX-6148.patch
>
>          Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00):
> Syntax error. Encountered ".0" at line 1, column 28.
>     at org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
>     at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> Seeing this when parsing a CREATE VIEW statement generated by
> SchemaExtractionTool, such as:
> CREATE VIEW TEST.04KA AS SELECT * FROM TEST.TABLE_NAME;
[jira] [Updated] (PHOENIX-6148) [SchemaExtractionTool]DDL parsing exception in Phoenix in view name
[ https://issues.apache.org/jira/browse/PHOENIX-6148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swaroopa Kadam updated PHOENIX-6148:
------------------------------------
    Attachment: (was: PHOENIX-6148.4.x.add1.patch)

> [SchemaExtractionTool]DDL parsing exception in Phoenix in view name
> -------------------------------------------------------------------
>
>                 Key: PHOENIX-6148
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6148
>
[jira] [Updated] (PHOENIX-6148) [SchemaExtractionTool]DDL parsing exception in Phoenix in view name
[ https://issues.apache.org/jira/browse/PHOENIX-6148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swaroopa Kadam updated PHOENIX-6148:
------------------------------------
    Attachment: PHOENIX-6148.master.add.patch
                PHOENIX-6148.4.x.add2.patch

> [SchemaExtractionTool]DDL parsing exception in Phoenix in view name
> -------------------------------------------------------------------
>
>                 Key: PHOENIX-6148
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6148
>
[jira] [Updated] (PHOENIX-6148) [SchemaExtractionTool]DDL parsing exception in Phoenix in view name
[ https://issues.apache.org/jira/browse/PHOENIX-6148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swaroopa Kadam updated PHOENIX-6148:
------------------------------------
    Attachment: (was: PHOENIX-6148.4.x.add1.patch)

> [SchemaExtractionTool]DDL parsing exception in Phoenix in view name
> -------------------------------------------------------------------
>
>                 Key: PHOENIX-6148
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6148
>
[jira] [Updated] (PHOENIX-6148) [SchemaExtractionTool]DDL parsing exception in Phoenix in view name
[ https://issues.apache.org/jira/browse/PHOENIX-6148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swaroopa Kadam updated PHOENIX-6148:
------------------------------------
    Fix Version/s: (was: 4.17.0)
                   (was: 4.16.1)
                   4.16.0

> [SchemaExtractionTool]DDL parsing exception in Phoenix in view name
> -------------------------------------------------------------------
>
>                 Key: PHOENIX-6148
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6148
>
[jira] [Created] (PHOENIX-6271) Effective DDL generated by SchemaExtractionTool should maintain the order of PK and other columns
Swaroopa Kadam created PHOENIX-6271:
---------------------------------------

             Summary: Effective DDL generated by SchemaExtractionTool should maintain the order of PK and other columns
                 Key: PHOENIX-6271
                 URL: https://issues.apache.org/jira/browse/PHOENIX-6271
             Project: Phoenix
          Issue Type: Improvement
            Reporter: Swaroopa Kadam
            Assignee: Swaroopa Kadam
             Fix For: 4.16.0

SchemaExtractionTool is used to generate effective DDL which can then be
compared with the DDL on the cluster to perform schema monitoring.
This won't affect the monitoring part, but it would be good to have the PK
order in place so that the effective DDL can be used for creating the entity
for the first time in a new environment.
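Keeping declaration order in generated DDL usually comes down to accumulating columns in an insertion-ordered collection: a `LinkedHashMap` preserves the order columns were added, while a plain `HashMap` makes the emitted order arbitrary. A minimal sketch of emitting a PK constraint in declaration order; the `PkOrder` helper and column names are illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: emit the PRIMARY KEY constraint in the order the PK columns were
// declared. LinkedHashMap iterates in insertion order, so the generated DDL
// matches the original declaration order.
public class PkOrder {

    static String pkConstraint(Map<String, String> pkColumns) {
        return "CONSTRAINT PK PRIMARY KEY (" + String.join(", ", pkColumns.keySet()) + ")";
    }

    public static void main(String[] args) {
        Map<String, String> pk = new LinkedHashMap<>(); // insertion order == declaration order
        pk.put("TENANT_ID", "VARCHAR");
        pk.put("ENTITY_ID", "VARCHAR");
        pk.put("CREATED_DATE", "DATE");
        System.out.println(pkConstraint(pk));
        // CONSTRAINT PK PRIMARY KEY (TENANT_ID, ENTITY_ID, CREATED_DATE)
    }
}
```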
[jira] [Updated] (PHOENIX-6239) NullPointerException when index table does not use COLUMN_ENCODED_BYTES
[ https://issues.apache.org/jira/browse/PHOENIX-6239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swaroopa Kadam updated PHOENIX-6239:
------------------------------------
    Fix Version/s: 4.16.0

> NullPointerException when index table does not use COLUMN_ENCODED_BYTES
> -----------------------------------------------------------------------
>
>                 Key: PHOENIX-6239
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-6239
>             Project: Phoenix
>          Issue Type: Bug
>            Reporter: Swaroopa Kadam
>            Assignee: Swaroopa Kadam
>            Priority: Major
>             Fix For: 4.16.0
>
>         Attachments: Phoenix-6239.4.x.patch
>
> MetaDataClient#getPTablePColumnHashMapForCascade() can throw an NPE because
> cqCounterToUse.getNextQualifier(familyName) can return null.
[jira] [Created] (PHOENIX-6239) NullPointerException when index table does not use COLUMN_ENCODED_BYTES
Swaroopa Kadam created PHOENIX-6239:
---------------------------------------

             Summary: NullPointerException when index table does not use COLUMN_ENCODED_BYTES
                 Key: PHOENIX-6239
                 URL: https://issues.apache.org/jira/browse/PHOENIX-6239
             Project: Phoenix
          Issue Type: Bug
            Reporter: Swaroopa Kadam
            Assignee: Swaroopa Kadam

MetaDataClient#getPTablePColumnHashMapForCascade() can throw an NPE because
cqCounterToUse.getNextQualifier(familyName) can return null.
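The shape of the defensive fix is a null check before dereferencing: when the index table does not use COLUMN_ENCODED_BYTES, the qualifier counter lookup returns null, so the caller must fall back instead of unboxing. A self-contained sketch where a plain map stands in for Phoenix's encoded-qualifier counter; the class and the fallback value are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: getNextQualifier(familyName) may return null when the
// column family has no encoded-qualifier counter (no COLUMN_ENCODED_BYTES),
// so callers must null-check rather than unbox directly.
public class QualifierGuard {
    private final Map<String, Integer> counters = new HashMap<>();

    void setCounter(String familyName, int next) {
        counters.put(familyName, next);
    }

    Integer getNextQualifier(String familyName) {
        return counters.get(familyName); // null when no counter exists for the family
    }

    int nextQualifierOrDefault(String familyName, int firstQualifier) {
        Integer next = getNextQualifier(familyName);
        return next != null ? next : firstQualifier; // avoids the NPE
    }

    public static void main(String[] args) {
        QualifierGuard g = new QualifierGuard();
        System.out.println(g.nextQualifierOrDefault("0", 11)); // 11: no counter present
        g.setCounter("0", 42);
        System.out.println(g.nextQualifierOrDefault("0", 11)); // 42: counter present
    }
}
```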
[jira] [Updated] (PHOENIX-6202) New column in index gets added as PK with CASCADE INDEX
[ https://issues.apache.org/jira/browse/PHOENIX-6202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Swaroopa Kadam updated PHOENIX-6202:
------------------------------------
    Description:
{code:java}
@Test
public void testGlobalAddColumns() throws Exception {
    Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
    try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
        conn.setAutoCommit(true);
        String tableName = "TBL_" + generateUniqueName();
        String idxName = "IND_" + generateUniqueName();
        String baseTableDDL = "CREATE TABLE " + tableName
            + " (PK1 VARCHAR NOT NULL, V1 VARCHAR, V2 CHAR(15) CONSTRAINT NAME_PK PRIMARY KEY(PK1)) ";
        conn.createStatement().execute(baseTableDDL);
        String indexDDL = "CREATE INDEX " + idxName + " ON " + tableName + " (PK1) include (V1, V2) ";
        conn.createStatement().execute(indexDDL);
        String upsert = "UPSERT INTO " + tableName + " (PK1, V1, V2) VALUES ('PK1', 'V1', 'V2')";
        conn.createStatement().executeUpdate(upsert);
        dumpTable(idxName);
        String alterTable = "ALTER TABLE " + tableName + " ADD V3 VARCHAR CASCADE INDEX ALL";
        conn.createStatement().execute(alterTable);
        String upsert2 = "UPSERT INTO " + tableName + " (PK1, V1, V2,V3) VALUES ('PK2', 'V1', 'V2', 'V3')";
        conn.createStatement().executeUpdate(upsert2);
        dumpTable(idxName);
        String selectFromIndex = "SELECT PK1, V3, V1, V2 FROM " + tableName + " where V1='V1' AND V2='V2'";
        ResultSet rs = conn.createStatement().executeQuery("EXPLAIN " + selectFromIndex);
    }
}

public static void dumpTable(Table hTable) {
    try {
        Scan scan = new Scan();
        scan.setRaw(true);
        scan.setMaxVersions();
        System.out.println("Table Name : " + hTable.getName().getNameAsString());
        ResultScanner scanner = hTable.getScanner(scan);
        for (Result result = scanner.next(); result != null; result = scanner.next()) {
            for (Cell cell : result.rawCells()) {
                String cellString = cell.toString();
                System.out.println(cellString + " value : "
                    + Bytes.toStringBinary(CellUtil.cloneValue(cell)));
            }
        }
    } catch (Exception e) {
        // ignore
    }
}
{code}

output:

PK1/0:\x00\x00\x00\x00/1603217119002/Put/vlen=1/seqid=0 value : \x01
PK1/0:\x80\x0B/1603217119002/Put/vlen=2/seqid=0 value : V1
PK1/0:\x80\x0C/1603217119002/Put/vlen=15/seqid=0 value : V2
PK2\x00V3/0:\x00\x00\x00\x00/1603217125595/Put/vlen=1/seqid=0 value : \x01
PK2\x00V3/0:\x80\x0B/1603217125595/Put/vlen=2/seqid=0 value : V1
PK2\x00V3/0:\x80\x0C/1603217125595/Put/vlen=15/seqid=0 value : V2
[jira] [Updated] (PHOENIX-6202) New column in index gets added as PK with CASCADE INDEX
[ https://issues.apache.org/jira/browse/PHOENIX-6202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-6202:

Description:
{code:java}
@Test
public void testGlobalAddColumns() throws Exception {
    Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
    try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
        conn.setAutoCommit(true);
        String tableName = "TBL_" + generateUniqueName();
        String idxName = "IND_" + generateUniqueName();
        String baseTableDDL = "CREATE TABLE " + tableName
                + " (PK1 VARCHAR NOT NULL, V1 VARCHAR, V2 CHAR(15) CONSTRAINT NAME_PK PRIMARY KEY(PK1)) ";
        conn.createStatement().execute(baseTableDDL);
        String indexDDL = "CREATE INDEX " + idxName + " ON " + tableName
                + " (PK1) include (V1, V2) "; // IMMUTABLE_STORAGE_SCHEME=SINGLE_CELL_ARRAY_WITH_OFFSETS, COLUMN_ENCODED_BYTES=2
        conn.createStatement().execute(indexDDL);
        String upsert = "UPSERT INTO " + tableName + " (PK1, V1, V2) VALUES ('PK1', 'V1', 'V2')";
        conn.createStatement().executeUpdate(upsert);
        conn.commit();
        dumpTable(idxName);
        String alterTable = "ALTER TABLE " + tableName + " ADD V3 VARCHAR CASCADE INDEX ALL";
        conn.createStatement().execute(alterTable);
        String upsert2 = "UPSERT INTO " + tableName + " (PK1, V1, V2, V3) VALUES ('PK2', 'V1', 'V2', 'V3')";
        conn.createStatement().executeUpdate(upsert2);
        conn.commit();
        dumpTable(idxName);
        String selectFromIndex = "SELECT PK1, V3, V1, V2 FROM " + tableName + " where V1='V1' AND V2='V2'";
        ResultSet rs = conn.createStatement().executeQuery("EXPLAIN " + selectFromIndex);
        String actualExplainPlan = QueryUtil.getExplainPlan(rs);
        assertTrue(actualExplainPlan.contains(idxName));
        rs = conn.createStatement().executeQuery(selectFromIndex);
        assertTrue(rs.next());
        assertEquals("PK1", rs.getString(1));
        assertTrue(rs.next());
        assertEquals("PK2", rs.getString(1));
        assertEquals("V3", rs.getString(2));
    }
}

public static void dumpTable(Table hTable) {
    try {
        Scan scan = new Scan();
        scan.setRaw(true);
        scan.setMaxVersions();
        System.out.println("Table Name : " + hTable.getName().getNameAsString());
        ResultScanner scanner = hTable.getScanner(scan);
        for (Result result = scanner.next(); result != null; result = scanner.next()) {
            for (Cell cell : result.rawCells()) {
                String cellString = cell.toString();
                System.out.println(cellString + " value : " + Bytes.toStringBinary(CellUtil.cloneValue(cell)));
            }
        }
    } catch (Exception e) {
        // ignore
    }
}
{code}
output:
PK1/0:\x00\x00\x00\x00/1603217119002/Put/vlen=1/seqid=0 ** value : \x01
PK1/0:\x80\x0B/1603217119002/Put/vlen=2/seqid=0 ** value : V1
PK1/0:\x80\x0C/1603217119002/Put/vlen=15/seqid=0 ** value : V2
PK2\x00V3/0:\x00\x00\x00\x00/1603217125595/Put/vlen=1/seqid=0 ** value : \x01
PK2\x00V3/0:\x80\x0B/1603217125595/Put/vlen=2/seqid=0 ** value : V1
PK2\x00V3/0:\x80\x0C/1603217125595/Put/vlen=15/seqid=0 ** value : V2

was:
@Test
public void testGlobalAddColumns() throws Exception {
    Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
    try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
        conn.setAutoCommit(true);
        String tableName = "TBL_" + generateUniqueName();
        String idxName = "IND_" + generateUniqueName();
        String baseTableDDL = "CREATE TABLE " + tableName
                + " (PK1 VARCHAR NOT NULL, V1 VARCHAR, V2 CHAR(15) CONSTRAINT NAME_PK PRIMARY KEY(PK1)) ";
        conn.createStatement().execute(baseTableDDL);
        String indexDDL = "CREATE INDEX " + idxName + " ON " + tableName
                + " (PK1) include (V1, V2) "; // IMMUTABLE_STORAGE_SCHEME=SINGLE_CELL_ARRAY_WITH_OFFSETS, COLUMN_ENCODED_BYTES=2
        conn.createStatement().execute(indexDDL);
        String upsert = "UPSERT INTO " + tableName + " (PK1, V1, V2) VALUES ('PK1', 'V1', 'V2')";
        conn.createStatement().executeUpdate(upsert);
        conn.commit();
        dumpTable(idxName);
        String alterTable = "ALTER TABLE " + tableName + " ADD V3 VARCHAR CASCADE INDEX ALL";
        conn.createStatement().execute(alterTable);
        String upsert2 = "UPSERT INTO " + tableName + " (PK1, V1, V2, V3) VALUES ('PK2', 'V1', 'V2', 'V3')";
        conn.createStatement().executeUpdate(upsert2);
        conn.commit();
        dumpTable(idxName);
        String selectFromIndex = "SELECT PK1, V3, V1, V2 FROM " + tableName + " where V1='V1' AND V2='V2'";
        ResultSet rs = conn.createStatement().executeQuery("EXPLAIN " + selectFromIndex);
        String actualExplainPlan = QueryUtil.getExplainPlan
[jira] [Created] (PHOENIX-6202) New column in index gets added as PK with CASCADE INDEX
Swaroopa Kadam created PHOENIX-6202: --- Summary: New column in index gets added as PK with CASCADE INDEX Key: PHOENIX-6202 URL: https://issues.apache.org/jira/browse/PHOENIX-6202 Project: Phoenix Issue Type: Improvement Reporter: Swaroopa Kadam Assignee: Swaroopa Kadam

{code:java}
@Test
public void testGlobalAddColumns() throws Exception {
    Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
    try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
        conn.setAutoCommit(true);
        String tableName = "TBL_" + generateUniqueName();
        String idxName = "IND_" + generateUniqueName();
        String baseTableDDL = "CREATE TABLE " + tableName
                + " (PK1 VARCHAR NOT NULL, V1 VARCHAR, V2 CHAR(15) CONSTRAINT NAME_PK PRIMARY KEY(PK1)) ";
        conn.createStatement().execute(baseTableDDL);
        String indexDDL = "CREATE INDEX " + idxName + " ON " + tableName
                + " (PK1) include (V1, V2) "; // IMMUTABLE_STORAGE_SCHEME=SINGLE_CELL_ARRAY_WITH_OFFSETS, COLUMN_ENCODED_BYTES=2
        conn.createStatement().execute(indexDDL);
        String upsert = "UPSERT INTO " + tableName + " (PK1, V1, V2) VALUES ('PK1', 'V1', 'V2')";
        conn.createStatement().executeUpdate(upsert);
        conn.commit();
        dumpTable(idxName);
        String alterTable = "ALTER TABLE " + tableName + " ADD V3 VARCHAR CASCADE INDEX ALL";
        conn.createStatement().execute(alterTable);
        String upsert2 = "UPSERT INTO " + tableName + " (PK1, V1, V2, V3) VALUES ('PK2', 'V1', 'V2', 'V3')";
        conn.createStatement().executeUpdate(upsert2);
        conn.commit();
        dumpTable(idxName);
        String selectFromIndex = "SELECT PK1, V3, V1, V2 FROM " + tableName + " where V1='V1' AND V2='V2'";
        ResultSet rs = conn.createStatement().executeQuery("EXPLAIN " + selectFromIndex);
        String actualExplainPlan = QueryUtil.getExplainPlan(rs);
        assertTrue(actualExplainPlan.contains(idxName));
        rs = conn.createStatement().executeQuery(selectFromIndex);
        assertTrue(rs.next());
        assertEquals("PK1", rs.getString(1));
        assertTrue(rs.next());
        assertEquals("PK2", rs.getString(1));
        assertEquals("V3", rs.getString(2));
    }
}

public static void dumpTable(Table hTable) {
    try {
        Scan scan = new Scan();
        scan.setRaw(true);
        scan.setMaxVersions();
        System.out.println("Table Name : " + hTable.getName().getNameAsString());
        ResultScanner scanner = hTable.getScanner(scan);
        for (Result result = scanner.next(); result != null; result = scanner.next()) {
            for (Cell cell : result.rawCells()) {
                String cellString = cell.toString();
                System.out.println(cellString + " value : " + Bytes.toStringBinary(CellUtil.cloneValue(cell)));
            }
        }
    } catch (Exception e) {
        // ignore
    }
}
{code}
output:
PK1/0:\x00\x00\x00\x00/1603217119002/Put/vlen=1/seqid=0 ** value : \x01
PK1/0:\x80\x0B/1603217119002/Put/vlen=2/seqid=0 ** value : V1
PK1/0:\x80\x0C/1603217119002/Put/vlen=15/seqid=0 ** value : V2
PK2\x00V3/0:\x00\x00\x00\x00/1603217125595/Put/vlen=1/seqid=0 ** value : \x01
PK2\x00V3/0:\x80\x0B/1603217125595/Put/vlen=2/seqid=0 ** value : V1
PK2\x00V3/0:\x80\x0C/1603217125595/Put/vlen=15/seqid=0 ** value : V2

-- This message was sent by Atlassian Jira (v8.3.4#803005)
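The raw dump above is the evidence for the bug: Phoenix joins variable-length VARCHAR row-key columns with a single 0x00 separator byte, so the second row's key `PK2\x00V3` shows V3 appended to the index PRIMARY KEY instead of being stored as an included-column cell. A minimal decoding sketch (the `IndexRowKey` helper below is hypothetical, not Phoenix code):

```java
// Sketch, assuming Phoenix's 0x00 separator between variable-length
// VARCHAR row-key columns: split a raw index row key into its PK parts.
class IndexRowKey {
    static java.util.List<String> pkParts(byte[] rowKey) {
        java.util.List<String> parts = new java.util.ArrayList<>();
        int start = 0;
        for (int i = 0; i <= rowKey.length; i++) {
            // a 0x00 byte (or the end of the key) terminates one PK column
            if (i == rowKey.length || rowKey[i] == 0) {
                parts.add(new String(rowKey, start, i - start,
                        java.nio.charset.StandardCharsets.UTF_8));
                start = i + 1;
            }
        }
        return parts;
    }
}
```

Decoding the bytes of `PK2\x00V3` this way yields two PK columns, `PK2` and `V3`, where only `PK2` was expected.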
[jira] [Created] (PHOENIX-6149) [SchemaExtractionTool] DDL parsing exception in Phoenix - "ROW"
Swaroopa Kadam created PHOENIX-6149: --- Summary: [SchemaExtractionTool] DDL parsing exception in Phoenix - "ROW" Key: PHOENIX-6149 URL: https://issues.apache.org/jira/browse/PHOENIX-6149 Project: Phoenix Issue Type: Bug Reporter: Swaroopa Kadam Assignee: Swaroopa Kadam

org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): Syntax error. Encountered "ROW" at line 1, column 96.
	at org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
	at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)

The above exception is thrown when parsing the following DDL generated by SchemaExtractionTool:

CREATE TABLE TEST_TABLE(COLUMN_1 VARCHAR NOT NULL PRIMARY KEY, COLUMN_2 VARCHAR) BLOOMFILTER=ROW, DATA_BLOCK_ENCODING=NONE, IMMUTABLE_STORAGE_SCHEME=ONE_CELL_PER_COLUMN;

-- This message was sent by Atlassian Jira (v8.3.4#803005)
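One hypothetical pre-parse workaround sketch (not the actual Phoenix fix, and whether the quoted form round-trips through the parser is an assumption here): rewrite the bare property value so it no longer collides with the grammar keyword ROW before re-parsing the generated DDL.

```java
// Hypothetical helper, not part of SchemaExtractionTool: quote the
// BLOOMFILTER value that trips the Phoenix parser.
class DdlPropertyQuoter {
    static String quoteBloomFilter(String ddl) {
        // Only the unquoted form is rewritten; an already-quoted value
        // contains ='ROW' and is left untouched, so this is idempotent.
        return ddl.replace("BLOOMFILTER=ROW", "BLOOMFILTER='ROW'");
    }
}
```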
[jira] [Created] (PHOENIX-6148) [SchemaExtractionTool]DDL parsing exception in Phoenix in view name
Swaroopa Kadam created PHOENIX-6148: --- Summary: [SchemaExtractionTool]DDL parsing exception in Phoenix in view name Key: PHOENIX-6148 URL: https://issues.apache.org/jira/browse/PHOENIX-6148 Project: Phoenix Issue Type: Bug Reporter: Swaroopa Kadam Assignee: Swaroopa Kadam

org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): Syntax error. Encountered ".0" at line 1, column 28.
	at org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
	at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)

Seeing this when parsing the following CREATE VIEW statement generated by SchemaExtractionTool:

CREATE VIEW TEST.04KA AS SELECT * FROM TEST.TABLE_NAME;

-- This message was sent by Atlassian Jira (v8.3.4#803005)
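The parser chokes because the view name 04KA begins with a digit, so the tokenizer reads ".0" as a numeric token. A sketch of the kind of quoting the tool's output would need; the `IdentifierQuoter` helper below is hypothetical, and assumes the usual SQL rule that a double-quoted identifier may start with a digit.

```java
// Hypothetical helper: double-quote names that are not valid unquoted
// Phoenix identifiers, e.g. names beginning with a digit such as 04KA.
class IdentifierQuoter {
    static String quoteIfNeeded(String name) {
        if (!name.isEmpty() && Character.isDigit(name.charAt(0))) {
            return "\"" + name + "\"";
        }
        return name;
    }
}
```

With such a helper the emitted statement would read CREATE VIEW TEST."04KA" AS SELECT * FROM TEST.TABLE_NAME instead of the unparseable form above.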
[jira] [Created] (PHOENIX-6080) Add a check to Index Rebuild jobs to check region closing before every inner batch
Swaroopa Kadam created PHOENIX-6080: --- Summary: Add a check to Index Rebuild jobs to check region closing before every inner batch Key: PHOENIX-6080 URL: https://issues.apache.org/jira/browse/PHOENIX-6080 Project: Phoenix Issue Type: Bug Affects Versions: 4.15.0 Reporter: Swaroopa Kadam Assignee: Swaroopa Kadam Fix For: 5.1.0, 4.16.0 Add a check to Index Rebuild jobs to verify whether the region is closing before every inner batch, so that a pending region close can acquire its write lock. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5946) Implement SchemaExtractionTool utility to get effective DDL from cluster
[ https://issues.apache.org/jira/browse/PHOENIX-5946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5946: Fix Version/s: 4.16.0 5.1.0 > Implement SchemaExtractionTool utility to get effective DDL from cluster > > > Key: PHOENIX-5946 > URL: https://issues.apache.org/jira/browse/PHOENIX-5946 > Project: Phoenix > Issue Type: Improvement > Reporter: Swaroopa Kadam > Assignee: Swaroopa Kadam >Priority: Major > Fix For: 5.1.0, 4.16.0 > > Attachments: PHOENIX-5946.4.x.patch, PHOENIX-5946.master.patch > > Time Spent: 2h 40m > Remaining Estimate: 0h > > The utility will take table/view/index and schema as a parameter to generate > effective DDL for those entities in a cluster. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (PHOENIX-5948) Add initial support to read table and schema from CommandLine
[ https://issues.apache.org/jira/browse/PHOENIX-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam resolved PHOENIX-5948. - Resolution: Implemented > Add initial support to read table and schema from CommandLine > - > > Key: PHOENIX-5948 > URL: https://issues.apache.org/jira/browse/PHOENIX-5948 > Project: Phoenix > Issue Type: Sub-task > Reporter: Swaroopa Kadam > Assignee: Swaroopa Kadam >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (PHOENIX-5948) Add initial support to read table and schema from CommandLine
[ https://issues.apache.org/jira/browse/PHOENIX-5948?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam reassigned PHOENIX-5948: --- Assignee: Swaroopa Kadam > Add initial support to read table and schema from CommandLine > - > > Key: PHOENIX-5948 > URL: https://issues.apache.org/jira/browse/PHOENIX-5948 > Project: Phoenix > Issue Type: Sub-task > Reporter: Swaroopa Kadam > Assignee: Swaroopa Kadam >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (PHOENIX-5947) Add Unit tests and Integration tests for utility
[ https://issues.apache.org/jira/browse/PHOENIX-5947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam resolved PHOENIX-5947. - Resolution: Implemented > Add Unit tests and Integration tests for utility > > > Key: PHOENIX-5947 > URL: https://issues.apache.org/jira/browse/PHOENIX-5947 > Project: Phoenix > Issue Type: Sub-task > Reporter: Swaroopa Kadam >Assignee: Qinrui Li >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (PHOENIX-5949) Add support for "all" keyword in the utility
[ https://issues.apache.org/jira/browse/PHOENIX-5949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam resolved PHOENIX-5949. - Resolution: Later > Add support for "all" keyword in the utility > > > Key: PHOENIX-5949 > URL: https://issues.apache.org/jira/browse/PHOENIX-5949 > Project: Phoenix > Issue Type: Sub-task > Reporter: Swaroopa Kadam >Priority: Major > > when passed an -all keyword to utility, it will query the sys cat and emit > the effective DDL for all views, indexes, and tables on the cluster. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5946) Implement SchemaExtractionTool utility to get effective DDL from cluster
[ https://issues.apache.org/jira/browse/PHOENIX-5946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5946: Attachment: PHOENIX-5946.master.patch > Implement SchemaExtractionTool utility to get effective DDL from cluster > > > Key: PHOENIX-5946 > URL: https://issues.apache.org/jira/browse/PHOENIX-5946 > Project: Phoenix > Issue Type: Improvement > Reporter: Swaroopa Kadam > Assignee: Swaroopa Kadam >Priority: Major > Attachments: PHOENIX-5946.4.x.patch, PHOENIX-5946.master.patch > > Time Spent: 2h 40m > Remaining Estimate: 0h > > The utility will take table/view/index and schema as a parameter to generate > effective DDL for those entities in a cluster. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (PHOENIX-5947) Add Unit tests and Integration tests for utility
[ https://issues.apache.org/jira/browse/PHOENIX-5947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam reassigned PHOENIX-5947: --- Assignee: Swaroopa Kadam > Add Unit tests and Integration tests for utility > > > Key: PHOENIX-5947 > URL: https://issues.apache.org/jira/browse/PHOENIX-5947 > Project: Phoenix > Issue Type: Sub-task > Reporter: Swaroopa Kadam > Assignee: Swaroopa Kadam >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (PHOENIX-5947) Add Unit tests and Integration tests for utility
[ https://issues.apache.org/jira/browse/PHOENIX-5947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam reassigned PHOENIX-5947: --- Assignee: (was: Swaroopa Kadam) > Add Unit tests and Integration tests for utility > > > Key: PHOENIX-5947 > URL: https://issues.apache.org/jira/browse/PHOENIX-5947 > Project: Phoenix > Issue Type: Sub-task > Reporter: Swaroopa Kadam >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (PHOENIX-5989) Time based Incremental verification does not work for view with non-PK column in where clause
Swaroopa Kadam created PHOENIX-5989: --- Summary: Time based Incremental verification does not work for view with non-PK column in where clause Key: PHOENIX-5989 URL: https://issues.apache.org/jira/browse/PHOENIX-5989 Project: Phoenix Issue Type: Bug Reporter: Swaroopa Kadam For views with a non-PK column in the where clause, when only a delete marker falls within the incremental time range for verification, the row is not read during the scan even though the scan is raw and set to return all versions. Reason: to qualify the row for the view, the column referenced in the where clause must be present, but it exists neither in the PK nor in a cell value, since the row carries only a delete marker and the PK does not contain the where-clause column. -- This message was sent by Atlassian Jira (v8.3.4#803005)
Re: Master branch does not compile
yes, that’s a good idea. On Tue, Jun 16, 2020 at 9:29 AM Josh Elser wrote: > Sounds like we should try to update precommit to at least compile > against _a version_ in each line 2.1/2.2/2.3 for master. Thoughts? > > On 6/16/20 11:57 AM, swaroopa kadam wrote: > > Thank you for the replies everyone! > > > > It puts me at ease by knowing the issue has been identified and being > > fixed. > > > > Thanks! > > > > On Tue, Jun 16, 2020 at 4:11 AM rajeshb...@apache.org < > > chrajeshbab...@gmail.com> wrote: > > > >> I am on the PHOENIX-5905 compilation issue and will fix it today. > >> > >> On Tue, Jun 16, 2020 at 3:07 PM cheng...@apache.org < > cheng...@apache.org> > >> wrote: > >> > >>> > >>> > >>> > >>> PHOENIX-5905 caused the master branch compile broken, because > >>> org.apache.hadoop.hbase.security.access.GetUserPermissionsRequest is > only > >>> available in hbase 2.2.x, > >>> so when the hbase.profile=2.0 or hbase.profile=2.1, the compiler > boken, > >>> Should we revert PHOENIX-5905 for the moment? 
> >>> > >>> > >>> The error messages are : > >>> > >>> > >>> [ERROR] COMPILATION ERROR : > >>> [INFO] - > >>> [ERROR] < > >>> > >> > https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/ws/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java > >>> :[39,47] > >>> cannot find symbol > >>>symbol: class GetUserPermissionsRequest > >>>location: package org.apache.hadoop.hbase.security.access > >>> [ERROR] < > >>> > >> > https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/ws/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java > >>> :[1452,48] > >>> cannot find symbol > >>>symbol: method hasUserName() > >>>location: variable request of type > >>> > >> > org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.GetUserPermissionsRequest > >>> [ERROR] < > >>> > >> > https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/ws/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java > >>> :[1452,72] > >>> cannot find symbol > >>>symbol: method getUserName() > >>>location: variable request of type > >>> > >> > org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.GetUserPermissionsRequest > >>> [ERROR] < > >>> > >> > https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/ws/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java > >>> :[1458,32] > >>> cannot find symbol > >>>symbol: method hasColumnFamily() > >>>location: variable request of type > >>> > >> > org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.GetUserPermissionsRequest > >>> [ERROR] < > >>> > >> > https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/ws/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java > >>> :[1458,60] > >>> cannot find symbol > >>>symbol: method getColumnFamily() > >>>location: variable request of type > >>> > >> > 
org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.GetUserPermissionsRequest > >>> [ERROR] < > >>> > >> > https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/ws/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java > >>> :[1460,32] > >>> cannot find symbol > >>>symbol: method hasColumnQualifier() > >>>location: variable request of type > >>> > >> > org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.GetUserPermissionsRequest > >>> [ERROR] < > >>> > >> > https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/ws/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java > >>> :[1460,63] > >>> cannot find symbol > >>>symbol: method getColumnQualifier() > >>>location: variable request of type > >>> > >> > org.apache.hadoop.hbase.protobu
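The root cause named in this thread is compile-time linkage against a class that exists only from HBase 2.2.x on. As a hedged illustration (not how PHOENIX-5905 was actually resolved): code that must build against several HBase lines can probe for such a class reflectively instead of importing it directly.

```java
// Sketch: detect at runtime whether the HBase 2.2+ class is present,
// instead of importing it (a direct import is what breaks compilation
// under the hbase.profile=2.0 and 2.1 builds described above).
class HBaseApiProbe {
    static boolean hasGetUserPermissionsRequest() {
        try {
            Class.forName(
                "org.apache.hadoop.hbase.security.access.GetUserPermissionsRequest");
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }
}
```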
Re: Re: Master branch does not compile
Thank you for the replies everyone! It puts me at ease by knowing the issue has been identified and being fixed. Thanks! On Tue, Jun 16, 2020 at 4:11 AM rajeshb...@apache.org < chrajeshbab...@gmail.com> wrote: > I am on the PHOENIX-5905 compilation issue and will fix it today. > > On Tue, Jun 16, 2020 at 3:07 PM cheng...@apache.org > wrote: > > > > > > > > > PHOENIX-5905 caused the master branch compile broken, because > > org.apache.hadoop.hbase.security.access.GetUserPermissionsRequest is only > > available in hbase 2.2.x, > > so when the hbase.profile=2.0 or hbase.profile=2.1, the compiler boken, > > Should we revert PHOENIX-5905 for the moment? > > > > > > The error messages are : > > > > > > [ERROR] COMPILATION ERROR : > > [INFO] - > > [ERROR] < > > > https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/ws/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java > >:[39,47] > > cannot find symbol > > symbol: class GetUserPermissionsRequest > > location: package org.apache.hadoop.hbase.security.access > > [ERROR] < > > > https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/ws/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java > >:[1452,48] > > cannot find symbol > > symbol: method hasUserName() > > location: variable request of type > > > org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.GetUserPermissionsRequest > > [ERROR] < > > > https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/ws/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java > >:[1452,72] > > cannot find symbol > > symbol: method getUserName() > > location: variable request of type > > > org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.GetUserPermissionsRequest > > [ERROR] < > > > https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/ws/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java > >:[1458,32] > > cannot find 
symbol > > symbol: method hasColumnFamily() > > location: variable request of type > > > org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.GetUserPermissionsRequest > > [ERROR] < > > > https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/ws/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java > >:[1458,60] > > cannot find symbol > > symbol: method getColumnFamily() > > location: variable request of type > > > org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.GetUserPermissionsRequest > > [ERROR] < > > > https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/ws/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java > >:[1460,32] > > cannot find symbol > > symbol: method hasColumnQualifier() > > location: variable request of type > > > org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.GetUserPermissionsRequest > > [ERROR] < > > > https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/ws/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java > >:[1460,63] > > cannot find symbol > > symbol: method getColumnQualifier() > > location: variable request of type > > > org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.GetUserPermissionsRequest > > [ERROR] < > > > https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/ws/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java > >:[1461,17] > > cannot find symbol > > symbol: class GetUserPermissionsRequest > > location: class > > org.apache.phoenix.end2end.BasePermissionsIT.CustomAccessController > > [ERROR] < > > > https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/ws/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java > >:[1463,49] > > cannot find symbol > > symbol: variable GetUserPermissionsRequest > > location: class > > org.apache.phoenix.end2end.BasePermissionsIT.CustomAccessController > 
> [ERROR] < > > > https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/ws/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java > >:[1467,29] > > cannot find symbol > > symbol: variable GetUserPermissionsRequest > > location: class org.apa
Re: Master branch does not compile
Thank you for the response, Andrew and Geoffrey. Below is the error message I see: [ERROR] Failed to execute goal org.apache.maven.plugins:maven-dependency-plugin:3.1.1:analyze-only (enforce-dependencies) on project phoenix-core: Dependency problems found -> [Help 1] [ERROR] [ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch. [ERROR] Re-run Maven using the -X switch to enable full debug logging. [ERROR] [ERROR] For more information about the errors and possible solutions, please read the following articles: [ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException [ERROR] [ERROR] After correcting the problems, you can resume the build with the command [ERROR] mvn -rf :phoenix-core On Mon, Jun 15, 2020 at 10:45 AM Geoffrey Jacoby wrote: > git checkout master, then git pull, then mvn clean package -DskipTests > passes for me, and I double-checked with git status that I don't have > anything "extra" in my local environment. > > Geoffrey > > On Mon, Jun 15, 2020 at 10:29 AM Andrew Purtell > wrote: > > > Apache mailing list software stripps embedded images. Please post text. > > > > > > On Mon, Jun 15, 2020 at 10:24 AM swaroopa kadam < > > swaroopa.kada...@gmail.com> > > wrote: > > > > > Hi, > > > > > > I am trying to compile the master branch of phoenix and I get the > > > following error message. > > > What has changed recently? Do I need to make additional changes? > > > > > > Thanks. 
> > > > > > [image: Screen Shot 2020-06-12 at 6.02.25 PM.png] > > > > > > -- > > > > > > > > > Swaroopa Kadam > > > [image: https://]about.me/swaroopa_kadam > > > < > > > https://about.me/swaroopa_kadam?promo=email_sig_source=product_medium=email_sig_campaign=gmail_api > > > > > > > > > > > > -- > > Best regards, > > Andrew > > > > Words like orphans lost among the crosstalk, meaning torn from truth's > > decrepit hands > >- A23, Crosstalk > > > -- Swaroopa Kadam [image: https://]about.me/swaroopa_kadam <https://about.me/swaroopa_kadam?promo=email_sig_source=product_medium=email_sig_campaign=gmail_api>
Master branch does not compile
Hi, I am trying to compile the master branch of phoenix and I get the following error message. What has changed recently? Do I need to make additional changes? Thanks. [image: Screen Shot 2020-06-12 at 6.02.25 PM.png] -- Swaroopa Kadam [image: https://]about.me/swaroopa_kadam <https://about.me/swaroopa_kadam?promo=email_sig_source=product_medium=email_sig_campaign=gmail_api>
[jira] [Assigned] (PHOENIX-5946) Implement SchemaExtractionTool utility to get effective DDL from cluster
[ https://issues.apache.org/jira/browse/PHOENIX-5946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam reassigned PHOENIX-5946: --- Assignee: (was: Swaroopa Kadam) > Implement SchemaExtractionTool utility to get effective DDL from cluster > > > Key: PHOENIX-5946 > URL: https://issues.apache.org/jira/browse/PHOENIX-5946 > Project: Phoenix > Issue Type: Improvement > Reporter: Swaroopa Kadam >Priority: Major > > The utility will take table/view/index and schema as a parameter to generate > effective DDL for those entities in a cluster. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (PHOENIX-5946) Implement SchemaExtractionTool utility to get effective DDL from cluster
[ https://issues.apache.org/jira/browse/PHOENIX-5946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam reassigned PHOENIX-5946: --- Assignee: Swaroopa Kadam > Implement SchemaExtractionTool utility to get effective DDL from cluster > > > Key: PHOENIX-5946 > URL: https://issues.apache.org/jira/browse/PHOENIX-5946 > Project: Phoenix > Issue Type: Improvement > Reporter: Swaroopa Kadam > Assignee: Swaroopa Kadam >Priority: Major > > The utility will take table/view/index and schema as a parameter to generate > effective DDL for those entities in a cluster. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (PHOENIX-5949) Add support for "all" keyword in the utility
Swaroopa Kadam created PHOENIX-5949: --- Summary: Add support for "all" keyword in the utility Key: PHOENIX-5949 URL: https://issues.apache.org/jira/browse/PHOENIX-5949 Project: Phoenix Issue Type: Sub-task Reporter: Swaroopa Kadam When the -all keyword is passed, the utility will query SYSTEM.CATALOG and emit the effective DDL for all views, indexes, and tables on the cluster. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (PHOENIX-5948) Add initial support to read table and schema from CommandLine
Swaroopa Kadam created PHOENIX-5948: --- Summary: Add initial support to read table and schema from CommandLine Key: PHOENIX-5948 URL: https://issues.apache.org/jira/browse/PHOENIX-5948 Project: Phoenix Issue Type: Sub-task Reporter: Swaroopa Kadam -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (PHOENIX-5947) Add Unit tests and Integration tests for utility
Swaroopa Kadam created PHOENIX-5947: --- Summary: Add Unit tests and Integration tests for utility Key: PHOENIX-5947 URL: https://issues.apache.org/jira/browse/PHOENIX-5947 Project: Phoenix Issue Type: Sub-task Reporter: Swaroopa Kadam -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (PHOENIX-5946) Implement SchemaExtractionTool utility to get effective DDL from cluster
Swaroopa Kadam created PHOENIX-5946: --- Summary: Implement SchemaExtractionTool utility to get effective DDL from cluster Key: PHOENIX-5946 URL: https://issues.apache.org/jira/browse/PHOENIX-5946 Project: Phoenix Issue Type: Improvement Reporter: Swaroopa Kadam The utility will take a table/view/index name and a schema as parameters and generate the effective DDL for those entities in a cluster. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5783) Implement starttime in IndexTool for rebuild and verification
[ https://issues.apache.org/jira/browse/PHOENIX-5783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5783: Attachment: (was: PHOENIX-5783.4.x.v1.patch) > Implement starttime in IndexTool for rebuild and verification > - > > Key: PHOENIX-5783 > URL: https://issues.apache.org/jira/browse/PHOENIX-5783 > Project: Phoenix > Issue Type: Bug > Reporter: Swaroopa Kadam > Assignee: Swaroopa Kadam >Priority: Major > Attachments: PHOENIX-5783.4.x.001.patch > > Time Spent: 10m > Remaining Estimate: 0h > > IndexTool's inline verification and rebuild should be able to perform the > logic for a specified time range given by starttime parameters. > > This feature is only for indexes on non-transactional tables > . -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (PHOENIX-4286) Create EXPORT SCHEMA command
[ https://issues.apache.org/jira/browse/PHOENIX-4286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam reassigned PHOENIX-4286: --- Assignee: Swaroopa Kadam (was: Rushabh Shah) > Create EXPORT SCHEMA command > > > Key: PHOENIX-4286 > URL: https://issues.apache.org/jira/browse/PHOENIX-4286 > Project: Phoenix > Issue Type: New Feature >Reporter: Geoffrey Jacoby > Assignee: Swaroopa Kadam >Priority: Major > > Phoenix takes in DDL statements and uses it to create metadata in the various > SYSTEM tables. There's currently no supported way to go in the opposite > direction. > This is particularly important in migration use cases. If schemas between two > clusters are already synchronized, migration of data is _relatively_ > straightforward using either Phoenix or HBase's MapReduce integration. > Syncing metadata can much more complicated, particularly if only a subset > needs to be migrated. For example, an operator migrating a single tenant from > one cluster to another would want to also migrate any views or sequences > owned by that tenant. > This can be accomplished by treating SYSTEM tables as data tables and > migrating subsets of them but implementations will be relying on brittle > low-level implementation details that can and do change. > Given an EXPORT command, this could be done at a much higher level -- you > simply select the DDL statements from the source cluster you need, and then > run them on the target cluster. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (PHOENIX-5054) "look up" the `CREATE TABLE` statement used for a table
[ https://issues.apache.org/jira/browse/PHOENIX-5054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam reassigned PHOENIX-5054: --- Assignee: Tanuj Khurana > "look up" the `CREATE TABLE` statement used for a table > --- > > Key: PHOENIX-5054 > URL: https://issues.apache.org/jira/browse/PHOENIX-5054 > Project: Phoenix > Issue Type: New Feature >Reporter: Josh Elser >Assignee: Tanuj Khurana >Priority: Major > Labels: phoenix-hardening > > This is a super common problem we run into: > # User files report/complaint > # We ask for DDLs for table and any indexes > # We receive the output of `describe ` from the HBase shell > Presumably, we have all of the necessary information inside of > {{SYSTEM.CATALOG}}, we could recreate the {{CREATE TABLE}} statement for a > table, no? I think it would be super helpful to, at a given point in time, > obtain the {{CREATE TABLE}} statement to recreate a table as it currently > exists. > Split points might be the only thing we can't explicitly do via Phoenix, but > that's pretty minor compared to everything else. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (PHOENIX-5841) When data columns get TTLed, we need inline index validation to publish a metric for this
[ https://issues.apache.org/jira/browse/PHOENIX-5841?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam reassigned PHOENIX-5841: --- Assignee: Gokcen Iskender > When data columns get TTLed, we need inline index validation to publish a > metric for this > - > > Key: PHOENIX-5841 > URL: https://issues.apache.org/jira/browse/PHOENIX-5841 > Project: Phoenix > Issue Type: Improvement >Reporter: Gokcen Iskender >Assignee: Gokcen Iskender >Priority: Major > Attachments: PHOENIX-5841.4.x.001.patch, PHOENIX-5841.4.x.002.patch, > PHOENIX-5841.4.x.003.patch, PHOENIX-5841.master.001.patch, > PHOENIX-5841.master.002.patch, PHOENIX-5841.master.003.patch > > Time Spent: 2.5h > Remaining Estimate: 0h > > We do index writes as full row writes. This means all columns keep getting > re-written to the index with a current timestamp. > However, if there is a column that did not get updated for a long time (like > Created_By type of columns that don't change) index inline validation marks > these as "Index has extra columns". We need to publish an extra metric to > distinguish these cases since these rows are expected not to match. -- This message was sent by Atlassian Jira (v8.3.4#803005)
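The metric split described in PHOENIX-5841 can be sketched as a small classification step. Everything below is illustrative, not the IndexTool implementation: the method name, category strings, and the boolean TTL-expiry flag are all hypothetical stand-ins.

```java
// Hypothetical sketch: an index cell with no matching data cell is only a
// genuine inconsistency if the data cell did not simply age out under TTL.
// Names and categories here are illustrative, not Phoenix internals.
public class ExtraColumnMetricSketch {
    static String classify(boolean indexHasCell, boolean dataHasCell,
                           boolean dataCellTtlExpired) {
        if (indexHasCell && !dataHasCell) {
            // Expected: the data column expired under TTL, so the index
            // legitimately carries a cell the data row no longer has.
            return dataCellTtlExpired ? "EXPECTED_EXTRA_CELL" : "INVALID_EXTRA_CELL";
        }
        return "MATCH";
    }

    public static void main(String[] args) {
        System.out.println(classify(true, false, true));  // expected mismatch
        System.out.println(classify(true, false, false)); // genuine inconsistency
    }
}
```

Publishing the two cases as separate counters would let operators ignore the expected bucket when judging index health.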
[jira] [Assigned] (PHOENIX-5890) Port PHOENIX-5799 to master
[ https://issues.apache.org/jira/browse/PHOENIX-5890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam reassigned PHOENIX-5890: --- Assignee: Swaroopa Kadam (was: Geoffrey Jacoby) > Port PHOENIX-5799 to master > --- > > Key: PHOENIX-5890 > URL: https://issues.apache.org/jira/browse/PHOENIX-5890 > Project: Phoenix > Issue Type: Improvement >Reporter: Geoffrey Jacoby > Assignee: Swaroopa Kadam >Priority: Major > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5896) Implement incremental rebuild along the failed regions in IndexTool
[ https://issues.apache.org/jira/browse/PHOENIX-5896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5896: Attachment: (was: PHOENIX-5896.4.x.add1.patch) > Implement incremental rebuild along the failed regions in IndexTool > --- > > Key: PHOENIX-5896 > URL: https://issues.apache.org/jira/browse/PHOENIX-5896 > Project: Phoenix > Issue Type: Improvement > Reporter: Swaroopa Kadam > Assignee: Swaroopa Kadam >Priority: Major > Attachments: PHOENIX-5896.4.x.add0.patch, PHOENIX-5896.4.x.v1.patch, > PHOENIX-5896.4.x.v2.patch, PHOENIX-5896.4.x.v3.patch, > PHOENIX-5896.4.x.v4.patch > > Time Spent: 2.5h > Remaining Estimate: 0h > > As we run the index tool on indexes to be rebuilt after the upgrade, it > spends some time in rescanning successful regions from the last rebuild. We > want to make the index tool a little smarter to not rebuild rows from the > regions that were found in the PIT_result table from the last rebuild. > PIT_result logs region info if it was successfully rebuilt in the last run. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5896) Implement incremental rebuild along the failed regions in IndexTool
[ https://issues.apache.org/jira/browse/PHOENIX-5896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5896: Attachment: (was: PHOENIX-5896.4.x.add.patch) > Implement incremental rebuild along the failed regions in IndexTool > --- > > Key: PHOENIX-5896 > URL: https://issues.apache.org/jira/browse/PHOENIX-5896 > Project: Phoenix > Issue Type: Improvement > Reporter: Swaroopa Kadam > Assignee: Swaroopa Kadam >Priority: Major > Attachments: PHOENIX-5896.4.x.add1.patch, PHOENIX-5896.4.x.v1.patch, > PHOENIX-5896.4.x.v2.patch, PHOENIX-5896.4.x.v3.patch, > PHOENIX-5896.4.x.v4.patch > > Time Spent: 2.5h > Remaining Estimate: 0h > > As we run the index tool on indexes to be rebuilt after the upgrade, it > spends some time in rescanning successful regions from the last rebuild. We > want to make the index tool a little smarter to not rebuild rows from the > regions that were found in the PIT_result table from the last rebuild. > PIT_result logs region info if it was successfully rebuilt in the last run. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5896) Implement incremental rebuild along the failed regions in IndexTool
[ https://issues.apache.org/jira/browse/PHOENIX-5896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5896: Attachment: (was: PHOENIX-5896.4.x.add.patch) > Implement incremental rebuild along the failed regions in IndexTool > --- > > Key: PHOENIX-5896 > URL: https://issues.apache.org/jira/browse/PHOENIX-5896 > Project: Phoenix > Issue Type: Improvement > Reporter: Swaroopa Kadam > Assignee: Swaroopa Kadam >Priority: Major > Attachments: PHOENIX-5896.4.x.v1.patch, PHOENIX-5896.4.x.v2.patch, > PHOENIX-5896.4.x.v3.patch, PHOENIX-5896.4.x.v4.patch > > Time Spent: 2.5h > Remaining Estimate: 0h > > As we run the index tool on indexes to be rebuilt after the upgrade, it > spends some time in rescanning successful regions from the last rebuild. We > want to make the index tool a little smarter to not rebuild rows from the > regions that were found in the PIT_result table from the last rebuild. > PIT_result logs region info if it was successfully rebuilt in the last run. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5896) Implement incremental rebuild along the failed regions in IndexTool
[ https://issues.apache.org/jira/browse/PHOENIX-5896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5896: Attachment: (was: PHOENIX-5896.4.x.v1.patch) > Implement incremental rebuild along the failed regions in IndexTool > --- > > Key: PHOENIX-5896 > URL: https://issues.apache.org/jira/browse/PHOENIX-5896 > Project: Phoenix > Issue Type: Improvement > Reporter: Swaroopa Kadam > Assignee: Swaroopa Kadam >Priority: Major > Attachments: PHOENIX-5896.4.x.v1.patch, PHOENIX-5896.4.x.v2.patch, > PHOENIX-5896.4.x.v3.patch, PHOENIX-5896.4.x.v4.patch > > Time Spent: 2h 20m > Remaining Estimate: 0h > > As we run the index tool on indexes to be rebuilt after the upgrade, it > spends some time in rescanning successful regions from the last rebuild. We > want to make the index tool a little smarter to not rebuild rows from the > regions that were found in the PIT_result table from the last rebuild. > PIT_result logs region info if it was successfully rebuilt in the last run. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5896) Implement incremental rebuild along the failed regions in IndexTool
[ https://issues.apache.org/jira/browse/PHOENIX-5896?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5896: Attachment: (was: PHOENIX-5896.4.x.v1.patch) > Implement incremental rebuild along the failed regions in IndexTool > --- > > Key: PHOENIX-5896 > URL: https://issues.apache.org/jira/browse/PHOENIX-5896 > Project: Phoenix > Issue Type: Improvement > Reporter: Swaroopa Kadam > Assignee: Swaroopa Kadam >Priority: Major > Attachments: PHOENIX-5896.4.x.v2.patch, PHOENIX-5896.4.x.v3.patch, > PHOENIX-5896.4.x.v4.patch > > Time Spent: 2h 20m > Remaining Estimate: 0h > > As we run the index tool on indexes to be rebuilt after the upgrade, it > spends some time in rescanning successful regions from the last rebuild. We > want to make the index tool a little smarter to not rebuild rows from the > regions that were found in the PIT_result table from the last rebuild. > PIT_result logs region info if it was successfully rebuilt in the last run. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (PHOENIX-5896) Implement incremental rebuild along the failed regions in IndexTool
Swaroopa Kadam created PHOENIX-5896: --- Summary: Implement incremental rebuild along the failed regions in IndexTool Key: PHOENIX-5896 URL: https://issues.apache.org/jira/browse/PHOENIX-5896 Project: Phoenix Issue Type: Improvement Reporter: Swaroopa Kadam Assignee: Swaroopa Kadam As we run the index tool on indexes to be rebuilt after the upgrade, it spends some time in rescanning successful regions from the last rebuild. We want to make the index tool a little smarter to not rebuild rows from the regions that were found in the PIT_result table from the last rebuild. PIT_result logs region info if it was successfully rebuilt in the last run. -- This message was sent by Atlassian Jira (v8.3.4#803005)
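The skip logic described in PHOENIX-5896 amounts to a set difference over regions. The sketch below is a minimal illustration under assumed names: the region identifiers and the idea of reading the previously successful set back from a PIT_result-style log table are stand-ins, not Phoenix's actual data structures.

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the incremental-rebuild filter: regions recorded as
// successfully rebuilt in the last run (e.g. read from a PIT_result-style log
// table) are skipped, and only the remaining regions are rescanned.
public class IncrementalRebuildSketch {
    static Set<String> regionsToRebuild(List<String> allRegions,
                                        Set<String> previouslySuccessful) {
        Set<String> remaining = new LinkedHashSet<>(allRegions);
        remaining.removeAll(previouslySuccessful); // skip already-rebuilt regions
        return remaining;
    }

    public static void main(String[] args) {
        List<String> regions = List.of("region-a", "region-b", "region-c");
        Set<String> done = Set.of("region-a", "region-c"); // logged as successful
        System.out.println(regionsToRebuild(regions, done)); // only region-b remains
    }
}
```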
[jira] [Updated] (PHOENIX-5256) Remove queryserver related scripts/files as the former has its own repo
[ https://issues.apache.org/jira/browse/PHOENIX-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5256: Priority: Trivial (was: Blocker) > Remove queryserver related scripts/files as the former has its own repo > --- > > Key: PHOENIX-5256 > URL: https://issues.apache.org/jira/browse/PHOENIX-5256 > Project: Phoenix > Issue Type: Improvement >Affects Versions: 5.0.0, 4.14.2 > Reporter: Swaroopa Kadam >Priority: Trivial > Labels: newbie > Fix For: 4.15.1, 5.1.1 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (PHOENIX-5256) Remove queryserver related scripts/files as the former has its own repo
[ https://issues.apache.org/jira/browse/PHOENIX-5256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam reassigned PHOENIX-5256: --- Assignee: (was: Swaroopa Kadam) > Remove queryserver related scripts/files as the former has its own repo > --- > > Key: PHOENIX-5256 > URL: https://issues.apache.org/jira/browse/PHOENIX-5256 > Project: Phoenix > Issue Type: Improvement >Affects Versions: 5.0.0, 4.14.2 > Reporter: Swaroopa Kadam >Priority: Blocker > Labels: newbie > Fix For: 4.15.1, 5.1.1 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5874) IndexTool does not set TTL on its log tables correctly
[ https://issues.apache.org/jira/browse/PHOENIX-5874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5874: Attachment: PHOENIX-5874.4.x.v2.patch > IndexTool does not set TTL on its log tables correctly > -- > > Key: PHOENIX-5874 > URL: https://issues.apache.org/jira/browse/PHOENIX-5874 > Project: Phoenix > Issue Type: Bug >Affects Versions: 5.0.0, 4.14.3 >Reporter: Kadir OZDEMIR > Assignee: Swaroopa Kadam >Priority: Major > Attachments: PHOENIX-5874.4.x.v1.patch, PHOENIX-5874.4.x.v2.patch > > Time Spent: 0.5h > Remaining Estimate: 0h > > IndexTool does not use the correct API to set 7 day TTL on its log tables. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (PHOENIX-5874) IndexTool does not set TTL on its log tables correctly
[ https://issues.apache.org/jira/browse/PHOENIX-5874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam reassigned PHOENIX-5874: --- Assignee: Swaroopa Kadam > IndexTool does not set TTL on its log tables correctly > -- > > Key: PHOENIX-5874 > URL: https://issues.apache.org/jira/browse/PHOENIX-5874 > Project: Phoenix > Issue Type: Bug >Affects Versions: 5.0.0, 4.14.3 >Reporter: Kadir OZDEMIR > Assignee: Swaroopa Kadam >Priority: Major > > IndexTool does not use the correct API to set 7 day TTL on its log tables. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5870) IndexRegionObserver should retry before mappers in case of rebuild
[ https://issues.apache.org/jira/browse/PHOENIX-5870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5870: Description: We noticed that in the case of a synchronous rebuild, there are no retries for rebuilding secondary indexes, whereas retries by mappers in the case of an asynchronous rebuild are not enough to get the rebuild job done. (was: We noticed that in case synchronous rebuild there are not enough retries for rebuilding secondary indexes. Whereas, retries by mappers in case of Asynchronous rebuild are just not enough to get the rebuild job done. ) > IndexRegionObserver should retry before mappers in case of rebuild > -- > > Key: PHOENIX-5870 > URL: https://issues.apache.org/jira/browse/PHOENIX-5870 > Project: Phoenix > Issue Type: Improvement > Reporter: Swaroopa Kadam > Assignee: Swaroopa Kadam >Priority: Major > > We noticed that in the case of a synchronous rebuild, there are no retries > for rebuilding secondary indexes, whereas retries by mappers in the case of > an asynchronous rebuild are not enough to get the rebuild job done. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (PHOENIX-5870) IndexRegionObserver should retry before mappers in case of rebuild
Swaroopa Kadam created PHOENIX-5870: --- Summary: IndexRegionObserver should retry before mappers in case of rebuild Key: PHOENIX-5870 URL: https://issues.apache.org/jira/browse/PHOENIX-5870 Project: Phoenix Issue Type: Improvement Reporter: Swaroopa Kadam Assignee: Swaroopa Kadam We noticed that in case synchronous rebuild there are not enough retries for rebuilding secondary indexes. Whereas, retries by mappers in case of Asynchronous rebuild are just not enough to get the rebuild job done. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5804) Implement strong verification with -v ONLY option for old design of secondary indexes
[ https://issues.apache.org/jira/browse/PHOENIX-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5804: Attachment: (was: PHOENIX-5804.4.x.v4.patch) > Implement strong verification with -v ONLY option for old design of secondary > indexes > - > > Key: PHOENIX-5804 > URL: https://issues.apache.org/jira/browse/PHOENIX-5804 > Project: Phoenix > Issue Type: Improvement > Reporter: Swaroopa Kadam > Assignee: Swaroopa Kadam >Priority: Major > Attachments: PHOENIX-5804.4.x.v1.patch, PHOENIX-5804.4.x.v3.patch > > Time Spent: 0.5h > Remaining Estimate: 0h > > Currently, with the -v ONLY option we get weak verification, i.e. we find out whether the > index row is present or not. It does not indicate whether the values > mismatch when executed with the old design. > We attempt to provide a scrutiny-like implementation, but much faster. The > verification will be done only on the latest version of the row. > This will help us quantify the success of the new self-consistent secondary > indexes design. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (PHOENIX-5840) IndexTool inline verification should not fail with -v ONLY option
Swaroopa Kadam created PHOENIX-5840: --- Summary: IndexTool inline verification should not fail with -v ONLY option Key: PHOENIX-5840 URL: https://issues.apache.org/jira/browse/PHOENIX-5840 Project: Phoenix Issue Type: Improvement Reporter: Swaroopa Kadam Assignee: Swaroopa Kadam IndexTool inline verification should not fail with -v ONLY option even if verification fails. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5804) Implement strong verification with -v ONLY option for old design of secondary indexes
[ https://issues.apache.org/jira/browse/PHOENIX-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5804: Attachment: (was: PHOENIX-5804.4.x.v2.patch) > Implement strong verification with -v ONLY option for old design of secondary > indexes > - > > Key: PHOENIX-5804 > URL: https://issues.apache.org/jira/browse/PHOENIX-5804 > Project: Phoenix > Issue Type: Improvement > Reporter: Swaroopa Kadam > Assignee: Swaroopa Kadam >Priority: Major > Attachments: PHOENIX-5804.4.x.v1.patch, PHOENIX-5804.4.x.v3.patch > > Time Spent: 0.5h > Remaining Estimate: 0h > > Currently, with the -v ONLY option we get weak verification, i.e. we find out whether the > index row is present or not. It does not indicate whether the values > mismatch when executed with the old design. > We attempt to provide a scrutiny-like implementation, but much faster. The > verification will be done only on the latest version of the row. > This will help us quantify the success of the new self-consistent secondary > indexes design. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5804) Implement strong verification with -v ONLY option for old design of secondary indexes
[ https://issues.apache.org/jira/browse/PHOENIX-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5804: Description: Currently, with the -v ONLY option we get weak verification, i.e. we find out whether the index row is present or not. It does not indicate whether the values mismatch when executed with the old design. We attempt to provide a scrutiny-like implementation, but much faster. The verification will be done only on the latest version of the row. This will help us quantify the success of the new self-consistent secondary indexes design. was:In case of the initial rebuild, empty column value will be x and for other unverified rows it will be x02 so we want to make sure that the appropriate counts are reflected when inline verification is run for an index with BEFORE/AFTER/ONLY/BOTH options. > Implement strong verification with -v ONLY option for old design of secondary > indexes > - > > Key: PHOENIX-5804 > URL: https://issues.apache.org/jira/browse/PHOENIX-5804 > Project: Phoenix > Issue Type: Improvement > Reporter: Swaroopa Kadam > Assignee: Swaroopa Kadam >Priority: Major > > Currently, with the -v ONLY option we get weak verification, i.e. we find out whether the > index row is present or not. It does not indicate whether the values > mismatch when executed with the old design. > We attempt to provide a scrutiny-like implementation, but much faster. The > verification will be done only on the latest version of the row. > This will help us quantify the success of the new self-consistent secondary > indexes design. -- This message was sent by Atlassian Jira (v8.3.4#803005)
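The weak-vs-strong distinction in PHOENIX-5804 can be sketched with two checks. This is a minimal illustration, not the IndexTool code: the maps stand in for index rows keyed by row key, and the method names are hypothetical.

```java
import java.util.Map;

// Illustrative sketch: weak verification only checks that an index row
// exists; strong verification also compares the latest-version value against
// what the data row implies. Not the actual IndexTool implementation.
public class VerificationSketch {
    static boolean weakVerify(Map<String, String> indexRows, String indexKey) {
        return indexRows.containsKey(indexKey); // existence only
    }

    static boolean strongVerify(Map<String, String> indexRows, String indexKey,
                                String expectedValue) {
        // Presence alone is not enough: the stored value must also match.
        return expectedValue.equals(indexRows.get(indexKey));
    }

    public static void main(String[] args) {
        Map<String, String> index = Map.of("k1", "v1");
        System.out.println(weakVerify(index, "k1"));            // row exists
        System.out.println(strongVerify(index, "k1", "stale")); // value mismatch
    }
}
```

A row can therefore pass weak verification while failing strong verification, which is exactly the gap the issue aims to close.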
[jira] [Updated] (PHOENIX-5804) Implement strong verification with -v ONLY option for old design of secondary indexes
[ https://issues.apache.org/jira/browse/PHOENIX-5804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5804: Summary: Implement strong verification with -v ONLY option for old design of secondary indexes (was: Categorize the index verification results based on empty column value) > Implement strong verification with -v ONLY option for old design of secondary > indexes > - > > Key: PHOENIX-5804 > URL: https://issues.apache.org/jira/browse/PHOENIX-5804 > Project: Phoenix > Issue Type: Improvement > Reporter: Swaroopa Kadam > Assignee: Swaroopa Kadam >Priority: Major > > In case of the initial rebuild, empty column value will be x and for other > unverified rows it will be x02 so we want to make sure that the appropriate > counts are reflected when inline verification is run for an index with > BEFORE/AFTER/ONLY/BOTH options. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (PHOENIX-5804) Categorize the index verification results based on empty column value
Swaroopa Kadam created PHOENIX-5804: --- Summary: Categorize the index verification results based on empty column value Key: PHOENIX-5804 URL: https://issues.apache.org/jira/browse/PHOENIX-5804 Project: Phoenix Issue Type: Improvement Reporter: Swaroopa Kadam Assignee: Swaroopa Kadam In case of the initial rebuild, empty column value will be x and for other unverified rows it will be x02 so we want to make sure that the appropriate counts are reflected when inline verification is run for an index with BEFORE/AFTER/ONLY/BOTH options. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (PHOENIX-5783) Implement starttime in IndexTool for rebuild and verification
Swaroopa Kadam created PHOENIX-5783: --- Summary: Implement starttime in IndexTool for rebuild and verification Key: PHOENIX-5783 URL: https://issues.apache.org/jira/browse/PHOENIX-5783 Project: Phoenix Issue Type: Bug Reporter: Swaroopa Kadam Assignee: Swaroopa Kadam IndexTool's inline verification and rebuild should be able to perform the logic for a specified time range given by the starttime parameter. This feature is only for indexes on non-transactional tables. -- This message was sent by Atlassian Jira (v8.3.4#803005)
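The time-ranged behavior requested in PHOENIX-5783 and PHOENIX-5732 boils down to restricting rebuild/verification to cells whose timestamps fall inside a half-open window. The sketch below is illustrative only; the record type, field names, and the [startTime, endTime) convention are assumptions, not Phoenix internals.

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch of time-ranged rebuild/verification: only data-table
// cells with timestamps in [startTime, endTime) are considered. The record
// type and its fields are hypothetical stand-ins for HBase cells.
public class TimeRangeRebuildSketch {
    record CellStamp(String rowKey, long timestamp) {}

    static List<CellStamp> inRange(List<CellStamp> cells, long startTime, long endTime) {
        return cells.stream()
            .filter(c -> c.timestamp() >= startTime && c.timestamp() < endTime)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<CellStamp> cells = List.of(
            new CellStamp("r1", 100L), new CellStamp("r2", 250L), new CellStamp("r3", 400L));
        // Only r2 falls inside the requested window [200, 300).
        System.out.println(inRange(cells, 200L, 300L));
    }
}
```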
[jira] [Updated] (PHOENIX-5732) Implement endtime in IndexTool for rebuild and verification
[ https://issues.apache.org/jira/browse/PHOENIX-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5732: Summary: Implement endtime in IndexTool for rebuild and verification (was: Implement starttime, endtime in IndexTool for rebuild and verification) > Implement endtime in IndexTool for rebuild and verification > --- > > Key: PHOENIX-5732 > URL: https://issues.apache.org/jira/browse/PHOENIX-5732 > Project: Phoenix > Issue Type: New Feature > Reporter: Swaroopa Kadam > Assignee: Swaroopa Kadam >Priority: Major > Fix For: 5.1.0, 4.15.1 > > Attachments: PHOENIX-5732.4.x.v1.patch, PHOENIX-5732.4.x.v2.patch, > PHOENIX-5732.4.x.v3.patch, PHOENIX-5732.4.x.v4.patch > > Time Spent: 2h 20m > Remaining Estimate: 0h > > IndexTool's inline verification and rebuild should be able to perform the > logic for a specified time range given by starttime and endtime parameters. > > This feature is only for indexes on non-transactional tables > . -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5732) Implement endtime in IndexTool for rebuild and verification
[ https://issues.apache.org/jira/browse/PHOENIX-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5732: Description: IndexTool's inline verification and rebuild should be able to perform the logic for a specified time range given by endtime parameters. This feature is only for indexes on non-transactional tables . was: IndexTool's inline verification and rebuild should be able to perform the logic for a specified time range given by starttime and endtime parameters. This feature is only for indexes on non-transactional tables . > Implement endtime in IndexTool for rebuild and verification > --- > > Key: PHOENIX-5732 > URL: https://issues.apache.org/jira/browse/PHOENIX-5732 > Project: Phoenix > Issue Type: New Feature > Reporter: Swaroopa Kadam > Assignee: Swaroopa Kadam >Priority: Major > Fix For: 5.1.0, 4.15.1 > > Attachments: PHOENIX-5732.4.x.v1.patch, PHOENIX-5732.4.x.v2.patch, > PHOENIX-5732.4.x.v3.patch, PHOENIX-5732.4.x.v4.patch > > Time Spent: 2h 20m > Remaining Estimate: 0h > > IndexTool's inline verification and rebuild should be able to perform the > logic for a specified time range given by endtime parameters. > > This feature is only for indexes on non-transactional tables > . -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5732) Implement starttime, endtime in IndexTool for rebuild and verification
[ https://issues.apache.org/jira/browse/PHOENIX-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5732: Description: IndexTool's inline verification and rebuild should be able to perform the logic for a specified time range given by starttime and endtime parameters. This feature is only for indexes on non-transactional tables . was: IndexTool's inline verification and rebuild should be able to perform the logic for a specified time range given by starttime and endtime parameters. This feature is only for non-transactional indexes. > Implement starttime, endtime in IndexTool for rebuild and verification > -- > > Key: PHOENIX-5732 > URL: https://issues.apache.org/jira/browse/PHOENIX-5732 > Project: Phoenix > Issue Type: New Feature > Reporter: Swaroopa Kadam > Assignee: Swaroopa Kadam >Priority: Major > Fix For: 5.1.0, 4.15.1 > > Attachments: PHOENIX-5732.4.x.v1.patch, PHOENIX-5732.4.x.v2.patch, > PHOENIX-5732.4.x.v3.patch > > Time Spent: 2h 20m > Remaining Estimate: 0h > > IndexTool's inline verification and rebuild should be able to perform the > logic for a specified time range given by starttime and endtime parameters. > > This feature is only for indexes on non-transactional tables > . -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5732) Implement starttime, endtime in IndexTool for rebuild and verification
[ https://issues.apache.org/jira/browse/PHOENIX-5732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5732: Description: IndexTool's inline verification and rebuild should be able to perform the logic for a specified time range given by starttime and endtime parameters. This feature is only for non-transactional indexes. was:IndexTool's inline verification and rebuild should be able to perform the logic for a specified time range given by starttime and endtime parameters. > Implement starttime, endtime in IndexTool for rebuild and verification > -- > > Key: PHOENIX-5732 > URL: https://issues.apache.org/jira/browse/PHOENIX-5732 > Project: Phoenix > Issue Type: New Feature > Reporter: Swaroopa Kadam > Assignee: Swaroopa Kadam >Priority: Major > Fix For: 5.1.0, 4.15.1 > > Attachments: PHOENIX-5732.4.x.v1.patch, PHOENIX-5732.4.x.v2.patch, > PHOENIX-5732.4.x.v3.patch > > Time Spent: 2h 20m > Remaining Estimate: 0h > > IndexTool's inline verification and rebuild should be able to perform the > logic for a specified time range given by starttime and endtime parameters. > > This feature is only for non-transactional indexes. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5750) Upsert on immutable table fails with AccessDeniedException
[ https://issues.apache.org/jira/browse/PHOENIX-5750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5750: Attachment: (was: PHOENIX-5750.4.x.v1.patch) > Upsert on immutable table fails with AccessDeniedException > -- > > Key: PHOENIX-5750 > URL: https://issues.apache.org/jira/browse/PHOENIX-5750 > Project: Phoenix > Issue Type: Bug >Affects Versions: 4.15.0, 4.14.3 > Reporter: Swaroopa Kadam > Assignee: Swaroopa Kadam >Priority: Major > Fix For: 5.1.0, 4.15.1 > > Attachments: PHOENIX-5750.4.x-HBase-1.3.v1.patch, > PHOENIX-5750.4.x-HBase-1.3.v2.patch > > Time Spent: 20m > Remaining Estimate: 0h > > {code:java}
> // In TableDDLPermissionsIT
> @Test
> public void testUpsertIntoImmutableTable() throws Throwable {
>     startNewMiniCluster();
>     final String schema = "TEST_INDEX_VIEW";
>     final String tableName = "TABLE_DDL_PERMISSION_IT";
>     final String phoenixTableName = schema + "." + tableName;
>     grantSystemTableAccess();
>     try {
>         superUser1.runAs(new PrivilegedExceptionAction<Void>() {
>             @Override
>             public Void run() throws Exception {
>                 try {
>                     verifyAllowed(createSchema(schema), superUser1);
>                     verifyAllowed(onlyCreateTable(phoenixTableName), superUser1);
>                 } catch (Throwable e) {
>                     if (e instanceof Exception) {
>                         throw (Exception) e;
>                     } else {
>                         throw new Exception(e);
>                     }
>                 }
>                 return null;
>             }
>         });
>         if (isNamespaceMapped) {
>             grantPermissions(unprivilegedUser.getShortName(), schema,
>                 Action.WRITE, Action.READ, Action.EXEC);
>         }
>         // we should be able to read the data from another index as well
>         // to which we have not given any access to this user
>         verifyAllowed(upsertRowsIntoTable(phoenixTableName), unprivilegedUser);
>     } finally {
>         revokeAll();
>     }
> }
>
> // In BasePermissionsIT:
> AccessTestAction onlyCreateTable(final String tableName) throws SQLException {
>     return new AccessTestAction() {
>         @Override
>         public Object run() throws Exception {
>             try (Connection conn = getConnection();
>                     Statement stmt = conn.createStatement()) {
>                 assertFalse(stmt.execute("CREATE IMMUTABLE TABLE " + tableName
>                     + "(pk INTEGER not null primary key, data VARCHAR, val integer)"));
>             }
>             return null;
>         }
>     };
> }
>
> AccessTestAction upsertRowsIntoTable(final String tableName) throws SQLException {
>     return new AccessTestAction() {
>         @Override
>         public Object run() throws Exception {
>             try (Connection conn = getConnection()) {
>                 try (PreparedStatement pstmt = conn.prepareStatement(
>                         "UPSERT INTO " + tableName + " values(?, ?, ?)")) {
>                     for (int i = 0; i < NUM_RECORDS; i++) {
>                         pstmt.setInt(1, i);
>                         pstmt.setString(2, Integer.toString(i));
>                         pstmt.setInt(3, i);
>                         assertEquals(1, pstmt.executeUpdate());
>                     }
>                 }
>                 conn.commit();
>             }
>             return null;
>         }
>     };
> }
> {code}
-- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5765) Add unit tests for prepareIndexMutationsForRebuild() of IndexRebuildRegionScanner
[ https://issues.apache.org/jira/browse/PHOENIX-5765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5765: Summary: Add unit tests for prepareIndexMutationsForRebuild() of IndexRebuildRegionScanner (was: Unit-tests for prepareIndexMutationsForRebuild) > Add unit tests for prepareIndexMutationsForRebuild() of > IndexRebuildRegionScanner > - > > Key: PHOENIX-5765 > URL: https://issues.apache.org/jira/browse/PHOENIX-5765 > Project: Phoenix > Issue Type: Sub-task > Reporter: Swaroopa Kadam >Assignee: Weiming Wang >Priority: Major > > Unit-tests for prepareIndexMutationsForRebuild -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (PHOENIX-5765) Unit-tests for prepareIndexMutationsForRebuild
Swaroopa Kadam created PHOENIX-5765: --- Summary: Unit-tests for prepareIndexMutationsForRebuild Key: PHOENIX-5765 URL: https://issues.apache.org/jira/browse/PHOENIX-5765 Project: Phoenix Issue Type: Sub-task Reporter: Swaroopa Kadam Assignee: Weiming Wang Unit-tests for prepareIndexMutationsForRebuild -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5751) Optimize localCache utilization in IndexUtil#isGlobalIndexCheckEnabled()
[ https://issues.apache.org/jira/browse/PHOENIX-5751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5751: Summary: Optimize localCache utilization in IndexUtil#isGlobalIndexCheckEnabled() (was: Optimize localCache utilitzation in IndexUtil#isGlobalIndexCheckEnabled()) > Optimize localCache utilization in IndexUtil#isGlobalIndexCheckEnabled() > > > Key: PHOENIX-5751 > URL: https://issues.apache.org/jira/browse/PHOENIX-5751 > Project: Phoenix > Issue Type: Improvement >Affects Versions: 4.15.0, 4.14.3 > Reporter: Swaroopa Kadam > Assignee: Swaroopa Kadam >Priority: Major > Fix For: 5.1.0, 4.15.1 > > > We don't need to add and check if globalIndexChecker is enabled on data table. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (PHOENIX-5751) Optimize localCache utilitzation in IndexUtil#isGlobalIndexCheckEnabled()
[ https://issues.apache.org/jira/browse/PHOENIX-5751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam reassigned PHOENIX-5751: --- Assignee: Swaroopa Kadam > Optimize localCache utilitzation in IndexUtil#isGlobalIndexCheckEnabled() > - > > Key: PHOENIX-5751 > URL: https://issues.apache.org/jira/browse/PHOENIX-5751 > Project: Phoenix > Issue Type: Improvement >Affects Versions: 4.15.0, 4.14.3 > Reporter: Swaroopa Kadam > Assignee: Swaroopa Kadam >Priority: Major > Fix For: 5.1.0, 4.15.1 > > > We don't need to add and check if globalIndexChecker is enabled on data table. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5751) Optimize localCache utilitzation in IndexUtil#isGlobalIndexCheckEnabled()
[ https://issues.apache.org/jira/browse/PHOENIX-5751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5751: Affects Version/s: 4.15.0 4.14.3 > Optimize localCache utilitzation in IndexUtil#isGlobalIndexCheckEnabled() > - > > Key: PHOENIX-5751 > URL: https://issues.apache.org/jira/browse/PHOENIX-5751 > Project: Phoenix > Issue Type: Improvement >Affects Versions: 4.15.0, 4.14.3 > Reporter: Swaroopa Kadam >Priority: Major > Fix For: 5.1.0, 4.15.1 > > > We don't need to add and check if globalIndexChecker is enabled on data table. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5751) Optimize localCache utilitzation in IndexUtil#isGlobalIndexCheckEnabled()
[ https://issues.apache.org/jira/browse/PHOENIX-5751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5751: Fix Version/s: 4.15.1 5.1.0 > Optimize localCache utilitzation in IndexUtil#isGlobalIndexCheckEnabled() > - > > Key: PHOENIX-5751 > URL: https://issues.apache.org/jira/browse/PHOENIX-5751 > Project: Phoenix > Issue Type: Improvement > Reporter: Swaroopa Kadam >Priority: Major > Fix For: 5.1.0, 4.15.1 > > > We don't need to add and check if globalIndexChecker is enabled on data table. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Assigned] (PHOENIX-5751) Optimize localCache utilitzation in IndexUtil#isGlobalIndexCheckEnabled()
[ https://issues.apache.org/jira/browse/PHOENIX-5751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam reassigned PHOENIX-5751: --- Assignee: (was: Swaroopa Kadam) > Optimize localCache utilitzation in IndexUtil#isGlobalIndexCheckEnabled() > - > > Key: PHOENIX-5751 > URL: https://issues.apache.org/jira/browse/PHOENIX-5751 > Project: Phoenix > Issue Type: Bug > Reporter: Swaroopa Kadam >Priority: Major > > We don't need to add and check if globalIndexChecker is enabled on data table. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5751) Optimize localCache utilitzation in IndexUtil#isGlobalIndexCheckEnabled()
[ https://issues.apache.org/jira/browse/PHOENIX-5751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5751: Issue Type: Improvement (was: Bug) > Optimize localCache utilitzation in IndexUtil#isGlobalIndexCheckEnabled() > - > > Key: PHOENIX-5751 > URL: https://issues.apache.org/jira/browse/PHOENIX-5751 > Project: Phoenix > Issue Type: Improvement > Reporter: Swaroopa Kadam >Priority: Major > > We don't need to add and check if globalIndexChecker is enabled on data table. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (PHOENIX-5751) Optimize localCache utilitzation in IndexUtil#isGlobalIndexCheckEnabled()
Swaroopa Kadam created PHOENIX-5751: --- Summary: Optimize localCache utilitzation in IndexUtil#isGlobalIndexCheckEnabled() Key: PHOENIX-5751 URL: https://issues.apache.org/jira/browse/PHOENIX-5751 Project: Phoenix Issue Type: Bug Reporter: Swaroopa Kadam Assignee: Swaroopa Kadam We don't need to add and check if globalIndexChecker is enabled on data table. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (PHOENIX-5750) Upsert on immutable table fails with AccessDeniedException
Swaroopa Kadam created PHOENIX-5750: --- Summary: Upsert on immutable table fails with AccessDeniedException Key: PHOENIX-5750 URL: https://issues.apache.org/jira/browse/PHOENIX-5750 Project: Phoenix Issue Type: Bug Affects Versions: 4.14.3, 4.15.0 Reporter: Swaroopa Kadam Assignee: Swaroopa Kadam Fix For: 5.1.0, 4.15.1 {code:java}
// In TableDDLPermissionsIT
@Test
public void testUpsertIntoImmutableTable() throws Throwable {
    startNewMiniCluster();
    final String schema = "TEST_INDEX_VIEW";
    final String tableName = "TABLE_DDL_PERMISSION_IT";
    final String phoenixTableName = schema + "." + tableName;
    grantSystemTableAccess();
    try {
        superUser1.runAs(new PrivilegedExceptionAction<Void>() {
            @Override
            public Void run() throws Exception {
                try {
                    verifyAllowed(createSchema(schema), superUser1);
                    verifyAllowed(onlyCreateTable(phoenixTableName), superUser1);
                } catch (Throwable e) {
                    if (e instanceof Exception) {
                        throw (Exception) e;
                    } else {
                        throw new Exception(e);
                    }
                }
                return null;
            }
        });
        if (isNamespaceMapped) {
            grantPermissions(unprivilegedUser.getShortName(), schema,
                Action.WRITE, Action.READ, Action.EXEC);
        }
        // we should be able to read the data from another index as well
        // to which we have not given any access to this user
        verifyAllowed(upsertRowsIntoTable(phoenixTableName), unprivilegedUser);
    } finally {
        revokeAll();
    }
}

// in BasePermissionsIT:
AccessTestAction onlyCreateTable(final String tableName) throws SQLException {
    return new AccessTestAction() {
        @Override
        public Object run() throws Exception {
            try (Connection conn = getConnection(); Statement stmt = conn.createStatement()) {
                assertFalse(stmt.execute("CREATE IMMUTABLE TABLE " + tableName
                    + "(pk INTEGER not null primary key, data VARCHAR, val integer)"));
            }
            return null;
        }
    };
}

AccessTestAction upsertRowsIntoTable(final String tableName) throws SQLException {
    return new AccessTestAction() {
        @Override
        public Object run() throws Exception {
            try (Connection conn = getConnection()) {
                try (PreparedStatement pstmt = conn.prepareStatement(
                        "UPSERT INTO " + tableName + " values(?, ?, ?)")) {
                    for (int i = 0; i < NUM_RECORDS; i++) {
                        pstmt.setInt(1, i);
                        pstmt.setString(2, Integer.toString(i));
                        pstmt.setInt(3, i);
                        assertEquals(1, pstmt.executeUpdate());
                    }
                }
                conn.commit();
            }
            return null;
        }
    };
}
{code} -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5734) IndexScrutinyTool should not report rows beyond maxLookBack age
[ https://issues.apache.org/jira/browse/PHOENIX-5734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5734: Description: Index Scrutiny tool should not report row mismatch if the row gets rewritten during the run and the last version around is beyond max look back age which will then get removed by compaction. Or add another type of counters to separate invalid rows into BEYOND_MAX_LOOKBACK or similar was:Index Scrutiny tool should not report row mismatch if the row gets rewritten during the run and the last version around is beyond max look back age which will then get removed by compaction. > IndexScrutinyTool should not report rows beyond maxLookBack age > --- > > Key: PHOENIX-5734 > URL: https://issues.apache.org/jira/browse/PHOENIX-5734 > Project: Phoenix > Issue Type: Improvement > Reporter: Swaroopa Kadam > Assignee: Swaroopa Kadam >Priority: Major > > Index Scrutiny tool should not report row mismatch if the row gets rewritten > during the run and the last version around is beyond max look back age which > will then get removed by compaction. > > Or add another type of counters to separate invalid rows into > BEYOND_MAX_LOOKBACK or similar -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5734) IndexScrutinyTool should not report rows beyond maxLookBack age
[ https://issues.apache.org/jira/browse/PHOENIX-5734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5734: Fix Version/s: 4.16.0 > IndexScrutinyTool should not report rows beyond maxLookBack age > --- > > Key: PHOENIX-5734 > URL: https://issues.apache.org/jira/browse/PHOENIX-5734 > Project: Phoenix > Issue Type: Improvement > Reporter: Swaroopa Kadam > Assignee: Swaroopa Kadam >Priority: Major > Fix For: 4.16.0 > > > Index Scrutiny tool should not report row mismatch if the row gets rewritten > during the run and the last version around is beyond max look back age which > will then get removed by compaction. > > Or add another type of counters to separate invalid rows into > BEYOND_MAX_LOOKBACK or similar -- This message was sent by Atlassian Jira (v8.3.4#803005)
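The "another type of counter" idea from the description can be sketched as a classification step: a mismatched row whose last version predates the max-lookback window is counted separately, since compaction may legitimately remove it. Counter and method names below are hypothetical, not Phoenix's actual scrutiny counters:

```java
// Hypothetical sketch of splitting scrutiny results so rows older than the
// max-lookback age are reported under their own counter instead of INVALID_ROW_COUNT.
public class ScrutinyCounters {
    enum Counter { VALID_ROW_COUNT, INVALID_ROW_COUNT, BEYOND_MAX_LOOKBACK_COUNT }

    static Counter classify(boolean rowsMatch, long rowTs, long nowTs, long maxLookbackMs) {
        if (rowsMatch) {
            return Counter.VALID_ROW_COUNT;
        }
        // A mismatched row whose last version is outside the lookback window may simply
        // have been (or be about to be) purged by compaction; report it separately.
        if (nowTs - rowTs > maxLookbackMs) {
            return Counter.BEYOND_MAX_LOOKBACK_COUNT;
        }
        return Counter.INVALID_ROW_COUNT;
    }

    public static void main(String[] args) {
        long now = 100_000L;
        // Old mismatch: beyond the 60s lookback window.
        System.out.println(classify(false, 10_000L, now, 60_000L));
        // Recent mismatch: a genuine invalid row.
        System.out.println(classify(false, 90_000L, now, 60_000L));
    }
}
```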
[jira] [Assigned] (PHOENIX-5735) IndexTool's inline verification should not verify rows beyond max lookback age
[ https://issues.apache.org/jira/browse/PHOENIX-5735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam reassigned PHOENIX-5735: --- Assignee: Weiming Wang > IndexTool's inline verification should not verify rows beyond max lookback age > -- > > Key: PHOENIX-5735 > URL: https://issues.apache.org/jira/browse/PHOENIX-5735 > Project: Phoenix > Issue Type: Improvement > Reporter: Swaroopa Kadam >Assignee: Weiming Wang >Priority: Major > Fix For: 5.1.0, 4.15.1 > > > IndexTool's inline verification should not verify rows beyond max lookback age > Similar to Phoenix-5734 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5735) IndexTool's inline verification should not verify rows beyond max lookback age
[ https://issues.apache.org/jira/browse/PHOENIX-5735?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5735: Fix Version/s: 4.15.1 5.1.0 > IndexTool's inline verification should not verify rows beyond max lookback age > -- > > Key: PHOENIX-5735 > URL: https://issues.apache.org/jira/browse/PHOENIX-5735 > Project: Phoenix > Issue Type: Improvement > Reporter: Swaroopa Kadam >Priority: Major > Fix For: 5.1.0, 4.15.1 > > > IndexTool's inline verification should not verify rows beyond max lookback age > Similar to Phoenix-5734 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (PHOENIX-5735) IndexTool's inline verification should not verify rows beyond max lookback age
Swaroopa Kadam created PHOENIX-5735: --- Summary: IndexTool's inline verification should not verify rows beyond max lookback age Key: PHOENIX-5735 URL: https://issues.apache.org/jira/browse/PHOENIX-5735 Project: Phoenix Issue Type: Improvement Reporter: Swaroopa Kadam IndexTool's inline verification should not verify rows beyond max lookback age Similar to Phoenix-5734 -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (PHOENIX-5734) IndexScrutinyTool should not report rows beyond maxLookBack age
Swaroopa Kadam created PHOENIX-5734: --- Summary: IndexScrutinyTool should not report rows beyond maxLookBack age Key: PHOENIX-5734 URL: https://issues.apache.org/jira/browse/PHOENIX-5734 Project: Phoenix Issue Type: Improvement Reporter: Swaroopa Kadam Assignee: Swaroopa Kadam Index Scrutiny tool should not report row mismatch if the row gets rewritten during the run and the last version around is beyond max look back age which will then get removed by compaction. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (PHOENIX-5733) new index on a table with old coproc should also use old design unless upgraded
Swaroopa Kadam created PHOENIX-5733: --- Summary: new index on a table with old coproc should also use old design unless upgraded Key: PHOENIX-5733 URL: https://issues.apache.org/jira/browse/PHOENIX-5733 Project: Phoenix Issue Type: Improvement Reporter: Swaroopa Kadam Assignee: Swaroopa Kadam Currently, if the table uses the old-design indexer coproc, creating an index with the new design enabled (not recommended) will end up loading the new coproc on the index. This is error-prone and leaves the pair in an inconsistent design. Hence, we should create the index with the old coproc if the table has not yet been upgraded, even though the new-design flag is enabled. Later, when ready, upgrade the pair using IndexUpgradeTool. -- This message was sent by Atlassian Jira (v8.3.4#803005)
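The decision rule this ticket proposes can be sketched as follows. This is a hypothetical illustration — `"Indexer"` and `"GlobalIndexChecker"` stand in for the old- and new-design coprocessors, and the method is not Phoenix's actual index-creation code:

```java
// Hypothetical sketch: a new index inherits the data table's current indexing design;
// the new-design feature flag alone must not decide which coprocessor is loaded.
public class IndexCoprocChooser {
    static String coprocForNewIndex(boolean tableOnOldDesign, boolean newDesignEnabled) {
        if (tableOnOldDesign) {
            // Table not yet upgraded: keep the data-table/index pair consistent on the
            // old design even if the flag is on; upgrade later with IndexUpgradeTool.
            return "Indexer";
        }
        return newDesignEnabled ? "GlobalIndexChecker" : "Indexer";
    }

    public static void main(String[] args) {
        System.out.println(coprocForNewIndex(true, true));  // old design wins until upgrade
        System.out.println(coprocForNewIndex(false, true)); // upgraded table: new design
    }
}
```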
[jira] [Created] (PHOENIX-5732) Implement starttime, endtime in IndexTool for rebuild and verification
Swaroopa Kadam created PHOENIX-5732: --- Summary: Implement starttime, endtime in IndexTool for rebuild and verification Key: PHOENIX-5732 URL: https://issues.apache.org/jira/browse/PHOENIX-5732 Project: Phoenix Issue Type: New Feature Reporter: Swaroopa Kadam Assignee: Swaroopa Kadam Fix For: 5.1.0, 4.15.1 IndexTool's inline verification and rebuild should be able to perform the logic for a specified time range given by starttime and endtime parameters. -- This message was sent by Atlassian Jira (v8.3.4#803005)
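The time-range gate this ticket describes can be sketched as a simple half-open interval check applied to each cell's write timestamp. The class below is illustrative only — the real IndexTool wiring (option parsing, scan setup) may differ:

```java
// Hypothetical sketch of the starttime/endtime filter proposed for IndexTool's
// rebuild and verification passes: only rows written in [startTime, endTime)
// are processed.
public class RebuildTimeRange {
    private final long startTime; // inclusive, ms since epoch
    private final long endTime;   // exclusive

    RebuildTimeRange(long startTime, long endTime) {
        if (startTime >= endTime) {
            throw new IllegalArgumentException("starttime must be before endtime");
        }
        this.startTime = startTime;
        this.endTime = endTime;
    }

    // True when the cell's write timestamp falls inside the requested window.
    boolean shouldProcess(long cellTimestamp) {
        return cellTimestamp >= startTime && cellTimestamp < endTime;
    }

    public static void main(String[] args) {
        RebuildTimeRange range = new RebuildTimeRange(1_000L, 2_000L);
        System.out.println(range.shouldProcess(1_500L)); // true
        System.out.println(range.shouldProcess(2_000L)); // false: end is exclusive
    }
}
```

A half-open interval matches HBase's own `Scan.setTimeRange(min, max)` convention (min inclusive, max exclusive), which is the natural place to push such a filter down.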
[jira] [Created] (PHOENIX-5713) Incorrectly handled view and view indexes with/without namespace in IndexScrutinyMapper#getTtl()
Swaroopa Kadam created PHOENIX-5713: --- Summary: Incorrectly handled view and view indexes with/without namespace in IndexScrutinyMapper#getTtl() Key: PHOENIX-5713 URL: https://issues.apache.org/jira/browse/PHOENIX-5713 Project: Phoenix Issue Type: Bug Reporter: Swaroopa Kadam Assignee: Swaroopa Kadam -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5713) Incorrectly handled view and view indexes with/without namespace in IndexScrutinyMapper#getTtl()
[ https://issues.apache.org/jira/browse/PHOENIX-5713?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Swaroopa Kadam updated PHOENIX-5713: Fix Version/s: 4.15.1 > Incorrectly handled view and view indexes with/without namespace in > IndexScrutinyMapper#getTtl() > > > Key: PHOENIX-5713 > URL: https://issues.apache.org/jira/browse/PHOENIX-5713 > Project: Phoenix > Issue Type: Bug > Reporter: Swaroopa Kadam > Assignee: Swaroopa Kadam >Priority: Major > Fix For: 4.15.1 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (PHOENIX-5707) Index rebuild after truncate incorrectly writes the included column value
Swaroopa Kadam created PHOENIX-5707: --- Summary: Index rebuild after truncate incorrectly writes the included column value Key: PHOENIX-5707 URL: https://issues.apache.org/jira/browse/PHOENIX-5707 Project: Phoenix Issue Type: Bug Affects Versions: 4.15.0 Reporter: Swaroopa Kadam Assignee: Swaroopa Kadam Fix For: 5.1.0, 4.15.1 {code:java}
@Test
public void testIncorrectRebuild() throws Exception {
    String schemaName = generateUniqueName();
    String dataTableName = generateUniqueName();
    String dataTableFullName = SchemaUtil.getTableName(schemaName, dataTableName);
    String indexTableName = generateUniqueName();
    String indexTableFullName = SchemaUtil.getTableName(schemaName, indexTableName);
    Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
    try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
        conn.setAutoCommit(true);
        conn.createStatement().execute("create table " + dataTableFullName
            + " (id varchar(10) not null primary key, val1 varchar(10),"
            + " val2 varchar(10), val3 varchar(10)) COLUMN_ENCODED_BYTES=0");
        conn.createStatement().execute("CREATE INDEX " + indexTableName + " on "
            + dataTableFullName + " (val1) include (val2, val3)");
        // insert a full row
        conn.createStatement().execute("upsert into " + dataTableFullName
            + " values ('a', 'ab', 'efgh', 'abcd')");
        Thread.sleep(1000);
        // insert a partial row
        conn.createStatement().execute("upsert into " + dataTableFullName
            + " (id, val3) values ('a', 'uvwx')");
        Thread.sleep(1000);
        // insert a full row
        conn.createStatement().execute("upsert into " + dataTableFullName
            + " values ('a', 'ab', 'efgh', 'yuio')");
        Thread.sleep(1000);
        // insert a partial row
        conn.createStatement().execute("upsert into " + dataTableFullName
            + " (id, val3) values ('a', 'asdf')");
        // truncate index table
        ConnectionQueryServices queryServices =
            conn.unwrap(PhoenixConnection.class).getQueryServices();
        Admin admin = queryServices.getAdmin();
        TableName tableName = TableName.valueOf(indexTableFullName);
        admin.disableTable(tableName);
        admin.truncateTable(tableName, false);
        // rebuild index
        runIndexTool(true, false, schemaName, dataTableName, indexTableName);
        // we expect 2 versions to be written after rebuild:
        // one for the last full row update and one for the latest update
        Scan scan = new Scan();
        scan.setRaw(true);
        scan.setMaxVersions(10);
        HTable indexTable = new HTable(getUtility().getConfiguration(), indexTableFullName);
        HTable dataTable = new HTable(getUtility().getConfiguration(), dataTableFullName);
        long dataFullRowTS = 0;
        ResultScanner rs = dataTable.getScanner(scan);
        for (Result r : rs) {
            for (Cell c : r.listCells()) {
                String column = new String(CellUtil.cloneQualifier(c));
                String value = new String(CellUtil.cloneValue(c));
                if (column.equalsIgnoreCase("VAL3") && value.equalsIgnoreCase("yuio")) {
                    dataFullRowTS = c.getTimestamp();
                }
            }
        }
        rs = indexTable.getScanner(scan);
        for (Result r : rs) {
            for (Cell c : r.listCells()) {
                long indexTS = c.getTimestamp();
                String column = new String(CellUtil.cloneQualifier(c));
                if (column.equalsIgnoreCase("0:VAL3") && indexTS == dataFullRowTS) {
                    String value = new String(CellUtil.cloneValue(c));
                    // if the ts is from the full row, the value should also be
                    // from the full row
                    assertEquals("yuio", value);
                }
            }
        }
    }
}
{code} -- This message was sent by Atlassian Jira (v8.3.4#803005)