[jira] [Updated] (PHOENIX-5265) [UMBRELLA] Phoenix Test should use object based Plan for result comparison instead of using hard-coded comparison
[ https://issues.apache.org/jira/browse/PHOENIX-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Viraj Jasani updated PHOENIX-5265:
--
Description:

Currently, in the Phoenix tests, the comparison of the returned result against the expected result is always hard-coded; this is especially widespread in the E2E and integration tests that compare query results, including the result of the EXPLAIN query plan. This has significantly impaired the productivity and efficiency of working on Phoenix.

The right approach is for each test case to write its generated result to a file under a new directory, then compare that file with a gold file under a "gold" directory. After we make some code changes and confirm that the changes are correct by manually verifying several specific test cases, we can safely replace the whole "gold" directory with the directory containing all the result files newly generated during the test run. In this way we can easily rebase all the tests. Alternatively, we can provide a new API with object-based ExplainPlan comparison, so that only the necessary plan attributes are tested rather than comparing the whole plan.

For example, in BaseStatsCollectorIT.java we verify the estimated row size in the returned result of the EXPLAIN query plan. The row size is decided by many factors: whether the column is encoded, whether the table is mutable, whether it uses a transaction provider, and whether that provider is TEPHRA or OMID. The code snippet "testWithMultiCF" below shows a typical test case; the comparisons of the returned result and the expected result are hard-coded in its asserts. Now imagine that we change the way stats are collected, the column encoding scheme, or the cell storage format for TEPHRA or OMID, all of which are quite likely to happen: we would then need to manually change every one of those hard-coded comparisons, and it isn't trivial to re-calculate all the expected row sizes under the different conditions.

Today you might need one or two weeks to rebase all the tests, which is a huge waste. We should use "gold" files here, so that we can rebase the tests very easily. By the way, the newly generated test result files and the gold files should be in XML or JSON, and the result of the "EXPLAIN" query should be in XML or JSON too, because these formats match the hierarchical structure of a query plan.

{code:java}
// code placeholder
@Test
public void testWithMultiCF() throws Exception {
    int nRows = 20;
    Connection conn = getConnection(0);
    PreparedStatement stmt;
    conn.createStatement().execute(
        "CREATE TABLE " + fullTableName
            + "(k VARCHAR PRIMARY KEY, a.v INTEGER, b.v INTEGER, c.v INTEGER NULL, d.v INTEGER NULL) "
            + tableDDLOptions );
    stmt = conn.prepareStatement("UPSERT INTO " + fullTableName + " VALUES(?,?, ?, ?, ?)");
    byte[] val = new byte[250];
    for (int i = 0; i < nRows; i++) {
        stmt.setString(1, Character.toString((char)('a' + i)) + Bytes.toString(val));
        stmt.setInt(2, i);
        stmt.setInt(3, i);
        stmt.setInt(4, i);
        stmt.setInt(5, i);
        stmt.executeUpdate();
    }
    conn.commit();
    stmt = conn.prepareStatement("UPSERT INTO " + fullTableName + "(k, c.v, d.v) VALUES(?,?,?)");
    for (int i = 0; i < 5; i++) {
        stmt.setString(1, Character.toString((char)('a' + 'z' + i)) + Bytes.toString(val));
        stmt.setInt(2, i);
        stmt.setInt(3, i);
        stmt.executeUpdate();
    }
    conn.commit();
    ResultSet rs;
    String actualExplainPlan;
    collectStatistics(conn, fullTableName);
    List keyRanges = getAllSplits(conn, fullTableName);
    assertEquals(26, keyRanges.size());
    rs = conn.createStatement().executeQuery("EXPLAIN SELECT * FROM " + fullTableName);
    actualExplainPlan = QueryUtil.getExplainPlan(rs);
    assertEquals(
        "CLIENT 26-CHUNK 25 ROWS " + (columnEncoded ? ( mutable ? "12530" : "14190" ) :
            (TransactionFactory.Provider.OMID.name().equals(transactionProvider)) ?
            "25320" : "12420") + " BYTES PARALLEL 1-WAY FULL SCAN OVER " + physicalTableName,
        actualExplainPlan);
    ConnectionQueryServices services = conn.unwrap(PhoenixConnection.class).getQueryServices();
    List regions = services.getAllTableRegions(Bytes.toBytes(physicalTableName));
    assertEquals(1, regions.size());
    collectStatistics(conn, fullTableName, Long.toString(1000));
    keyRanges = getAllSplits(conn, fullTableName);
    boolean oneCellPerColFamilyStorageScheme = !mutable && columnEncoded;
    boolean hasShadowCells = TransactionFactory.Provider.OMID.name().equals(transactionProvider);
    assertEquals(oneCellPerColFamilyStorageScheme ? 14 : hasShadowCells ? 24 : 13, keyRanges.size());
    rs = conn
        .createStatement()
        .executeQuery(
            "SELECT COLUMN_FAMILY,SUM(GUIDE_POSTS_ROW_COUNT),SUM(GUIDE_POSTS_WIDTH),COUNT(*) from \"SYSTEM\".STATS where PHYSICAL_NAME = '"
                + physicalTableName + "' GROUP BY COLUMN_FAMILY ORDER BY COLUMN_FAMILY");
{code}
[jira] [Updated] (PHOENIX-5265) [UMBRELLA] Phoenix Test should use object based Plan for result comparison instead of using hard-coded comparison
[ https://issues.apache.org/jira/browse/PHOENIX-5265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Viraj Jasani updated PHOENIX-5265:
--
Summary: [UMBRELLA] Phoenix Test should use object based Plan for result comparison instead of using hard-coded comparison (was: [UMBRELLA] Phoenix Test should use gold files for result comparison instead of using hard-coded comparison.)

> [UMBRELLA] Phoenix Test should use object based Plan for result comparison
> instead of using hard-coded comparison
> --
>
> Key: PHOENIX-5265
> URL: https://issues.apache.org/jira/browse/PHOENIX-5265
> Project: Phoenix
> Issue Type: Improvement
> Environment:
> {code:java}
> // code placeholder
> @Test
> public void testWithMultiCF() throws Exception {
>     int nRows = 20;
>     Connection conn = getConnection(0);
>     PreparedStatement stmt;
>     conn.createStatement().execute(
>         "CREATE TABLE " + fullTableName
>             + "(k VARCHAR PRIMARY KEY, a.v INTEGER, b.v INTEGER, c.v INTEGER NULL, d.v INTEGER NULL) "
>             + tableDDLOptions );
>     stmt = conn.prepareStatement("UPSERT INTO " + fullTableName + " VALUES(?,?, ?, ?, ?)");
>     byte[] val = new byte[250];
>     for (int i = 0; i < nRows; i++) {
>         stmt.setString(1, Character.toString((char)('a' + i)) + Bytes.toString(val));
>         stmt.setInt(2, i);
>         stmt.setInt(3, i);
>         stmt.setInt(4, i);
>         stmt.setInt(5, i);
>         stmt.executeUpdate();
>     }
>     conn.commit();
>     stmt = conn.prepareStatement("UPSERT INTO " + fullTableName + "(k, c.v, d.v) VALUES(?,?,?)");
>     for (int i = 0; i < 5; i++) {
>         stmt.setString(1, Character.toString((char)('a' + 'z' + i)) + Bytes.toString(val));
>         stmt.setInt(2, i);
>         stmt.setInt(3, i);
>         stmt.executeUpdate();
>     }
>     conn.commit();
>     ResultSet rs;
>     String actualExplainPlan;
>     collectStatistics(conn, fullTableName);
>     List keyRanges = getAllSplits(conn, fullTableName);
>     assertEquals(26, keyRanges.size());
>     rs = conn.createStatement().executeQuery("EXPLAIN SELECT * FROM " + fullTableName);
>     actualExplainPlan = QueryUtil.getExplainPlan(rs);
>     assertEquals(
>         "CLIENT 26-CHUNK 25 ROWS " + (columnEncoded ? ( mutable ? "12530" : "14190" ) :
>             (TransactionFactory.Provider.OMID.name().equals(transactionProvider)) ?
>             "25320" : "12420") + " BYTES PARALLEL 1-WAY FULL SCAN OVER " + physicalTableName,
>         actualExplainPlan);
>     ConnectionQueryServices services = conn.unwrap(PhoenixConnection.class).getQueryServices();
>     List regions = services.getAllTableRegions(Bytes.toBytes(physicalTableName));
>     assertEquals(1, regions.size());
>     collectStatistics(conn, fullTableName, Long.toString(1000));
>     keyRanges = getAllSplits(conn, fullTableName);
>     boolean oneCellPerColFamilyStorageScheme = !mutable && columnEncoded;
>     boolean hasShadowCells = TransactionFactory.Provider.OMID.name().equals(transactionProvider);
>     assertEquals(oneCellPerColFamilyStorageScheme ? 14 : hasShadowCells ? 24 : 13, keyRanges.size());
>     rs = conn
>         .createStatement()
>         .executeQuery(
>             "SELECT COLUMN_FAMILY,SUM(GUIDE_POSTS_ROW_COUNT),SUM(GUIDE_POSTS_WIDTH),COUNT(*) from \"SYSTEM\".STATS where PHYSICAL_NAME = '"
>                 + physicalTableName + "' GROUP BY COLUMN_FAMILY ORDER BY COLUMN_FAMILY");
>     assertTrue(rs.next());
>     assertEquals("A", rs.getString(1));
>     assertEquals(25, rs.getInt(2));
>     assertEquals(columnEncoded ? ( mutable ? 12530 : 14190 ) : hasShadowCells ? 25320 : 12420, rs.getInt(3));
>     assertEquals(oneCellPerColFamilyStorageScheme ? 13 : hasShadowCells ? 23 : 12, rs.getInt(4));
>     assertTrue(rs.next());
>     assertEquals("B", rs.getString(1));
>     assertEquals(oneCellPerColFamilyStorageScheme ? 25 : 20, rs.getInt(2));
>     assertEquals(columnEncoded ? ( mutable ? 5600 : 7260 ) : hasShadowCells ? 11260 : 5540, rs.getInt(3));
>     assertEquals(oneCellPerColFamilyStorageScheme ? 7 : hasShadowCells ? 10 : 5, rs.getInt(4));
>     assertTrue(rs.next());
>     assertEquals("C", rs.getString(1));
>     assertEquals(25, rs.getInt(2));
>     assertEquals(columnEncoded ? ( mutable ? 7005 : 7280 ) : hasShadowCells ? 14085 : 6930, rs.getInt(3));
>     assertEquals(hasShadowCells ? 13 : 7, rs.getInt(4));
>     assertTrue(rs.next());
>     assertEquals("D", rs.getString(1));
>     assertEquals(25, rs.getInt(2));
>     assertEquals(columnEncoded ? ( mutable ? 7005 : 7280 ) : hasShadowCells ? 14085 : 6930, rs.getInt(3));
>     assertEquals(hasShadowCells ? 13
[jira] [Resolved] (PHOENIX-6137) Update Omid to 1.0.2 and Tephra to 0.16 in 4.x
[ https://issues.apache.org/jira/browse/PHOENIX-6137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth resolved PHOENIX-6137. -- Fix Version/s: 4.16.0 Resolution: Fixed Both done together with master. > Update Omid to 1.0.2 and Tephra to 0.16 in 4.x > -- > > Key: PHOENIX-6137 > URL: https://issues.apache.org/jira/browse/PHOENIX-6137 > Project: Phoenix > Issue Type: Task > Components: core >Affects Versions: 4.16.0 >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Major > Fix For: 4.16.0 > > > On master, we are using SNAPSHOT versions of Omid and Tephra due to the > ongoing PHOENIX-6010 work. > As 4.x is not affected, it still uses the current released versions of Omid > and Tephra. > (Which means no HBase 1.5+ for Tephra, but it is a known issue) > This ticket is partly a reminder to do this, and a place to discuss it, as > well as an umbrella ticket where I plan to collect the JIRAs that also need > to be ported to 4.x for the update. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-5728) ExplainPlan with plan as attributes object, use it for BaseStatsCollectorIT
[ https://issues.apache.org/jira/browse/PHOENIX-5728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth updated PHOENIX-5728: - Fix Version/s: (was: 4.17.0) (was: 4.16.1) 4.16.0 Affects Version/s: (was: 4.15.1) 4.15.0 > ExplainPlan with plan as attributes object, use it for BaseStatsCollectorIT > --- > > Key: PHOENIX-5728 > URL: https://issues.apache.org/jira/browse/PHOENIX-5728 > Project: Phoenix > Issue Type: Sub-task >Affects Versions: 4.15.0, 5.1.0 >Reporter: Viraj Jasani >Assignee: Viraj Jasani >Priority: Major > Fix For: 5.1.0, 4.16.0 > > Attachments: PHOENIX-5728.4.x.000.patch, > PHOENIX-5728.master.000.patch, PHOENIX-5728.master.000.patch, > PHOENIX-5728.master.001.patch > > > Provide -golden files- an attributes-based object for QueryPlan result > comparison for the abstract test class BaseStatsCollectorIT, which covers > all the stats collection tests, including: > * NamespaceDisabledStatsCollectorIT > * NamespaceEnabledStatsCollectorIT > * NonTxStatsCollectorIT > * TxStatsCollectorIT > An object-based ExplainPlan is useful for precise comparison of > individual, relevant plan attributes. -- This message was sent by Atlassian Jira (v8.3.4#803005)
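The object-based comparison this sub-task proposes could look roughly like the sketch below. The class and field names are hypothetical, not the actual Phoenix ExplainPlanAttributes API; the point is only that a test asserts the plan attributes it cares about instead of string-matching the entire EXPLAIN output.

```java
// Hypothetical sketch of an attributes-based plan object. In a real test it
// would be built by parsing the EXPLAIN output; here it is constructed
// directly to show the comparison style.
public final class ExplainPlanAttributesSketch {
    final int chunkCount;        // e.g. 26 in testWithMultiCF
    final long estimatedRowCount;
    final String scanType;       // e.g. "FULL SCAN" or "RANGE SCAN"

    ExplainPlanAttributesSketch(int chunkCount, long estimatedRowCount, String scanType) {
        this.chunkCount = chunkCount;
        this.estimatedRowCount = estimatedRowCount;
        this.scanType = scanType;
    }

    public static void main(String[] args) {
        ExplainPlanAttributesSketch actual =
            new ExplainPlanAttributesSketch(26, 25, "FULL SCAN");

        // The test pins down only the attributes relevant to it. The byte
        // estimate, which varies with column encoding, mutability, and the
        // transaction provider, is simply not asserted here, so changing any
        // of those factors does not break this test.
        if (actual.chunkCount != 26) throw new AssertionError("chunk count");
        if (!"FULL SCAN".equals(actual.scanType)) throw new AssertionError("scan type");
        System.out.println("relevant attributes verified");
    }
}
```

Compared with the single concatenated assertEquals in BaseStatsCollectorIT, each attribute failure also reports exactly which property diverged.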
[jira] [Commented] (OMID-192) fix missing jcommander dependency
[ https://issues.apache.org/jira/browse/OMID-192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17244991#comment-17244991 ] Istvan Toth commented on OMID-192: -- Committed to master. Thanks for finding the bug and for the review [~RichardAntal] > fix missing jcommander dependency > - > > Key: OMID-192 > URL: https://issues.apache.org/jira/browse/OMID-192 > Project: Phoenix Omid > Issue Type: Bug >Affects Versions: 1.0.2 >Reporter: Richard Antal >Assignee: Istvan Toth >Priority: Blocker > Fix For: 1.0.3 > > > When I started the _omid.sh create-hbase-commit-table > I got the following exception: > {code:java} > Error: A JNI error has occurred, please check your installation and try again > Exception in thread "main" java.lang.NoClassDefFoundError: > com/beust/jcommander/ParameterException{code} > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (OMID-191) Fix missing executable permission because of MASSEMBLY-941
[ https://issues.apache.org/jira/browse/OMID-191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth updated OMID-191: - Affects Version/s: 1.0.2 > Fix missing executable permission because of MASSEMBLY-941 > -- > > Key: OMID-191 > URL: https://issues.apache.org/jira/browse/OMID-191 > Project: Phoenix Omid > Issue Type: Bug >Affects Versions: 1.0.2 >Reporter: Richard Antal >Assignee: Richard Antal >Priority: Blocker > Fix For: 1.0.3 > > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Resolved] (OMID-192) fix missing jcommander dependency
[ https://issues.apache.org/jira/browse/OMID-192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth resolved OMID-192. -- Resolution: Fixed > fix missing jcommander dependency > - > > Key: OMID-192 > URL: https://issues.apache.org/jira/browse/OMID-192 > Project: Phoenix Omid > Issue Type: Bug >Affects Versions: 1.0.2 >Reporter: Richard Antal >Assignee: Istvan Toth >Priority: Major > Fix For: 1.0.3 > > > When I started the _omid.sh create-hbase-commit-table > I got the following exception: > {code:java} > Error: A JNI error has occurred, please check your installation and try again > Exception in thread "main" java.lang.NoClassDefFoundError: > com/beust/jcommander/ParameterException{code} > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (OMID-192) fix missing jcommander dependency
[ https://issues.apache.org/jira/browse/OMID-192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth updated OMID-192: - Priority: Blocker (was: Major) > fix missing jcommander dependency > - > > Key: OMID-192 > URL: https://issues.apache.org/jira/browse/OMID-192 > Project: Phoenix Omid > Issue Type: Bug >Affects Versions: 1.0.2 >Reporter: Richard Antal >Assignee: Istvan Toth >Priority: Blocker > Fix For: 1.0.3 > > > When I started the _omid.sh create-hbase-commit-table > I got the following exception: > {code:java} > Error: A JNI error has occurred, please check your installation and try again > Exception in thread "main" java.lang.NoClassDefFoundError: > com/beust/jcommander/ParameterException{code} > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (OMID-192) fix missing jcommander dependency
[ https://issues.apache.org/jira/browse/OMID-192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth updated OMID-192: - Fix Version/s: 1.0.3 Affects Version/s: 1.0.2 > fix missing jcommander dependency > - > > Key: OMID-192 > URL: https://issues.apache.org/jira/browse/OMID-192 > Project: Phoenix Omid > Issue Type: Bug >Affects Versions: 1.0.2 >Reporter: Richard Antal >Assignee: Istvan Toth >Priority: Major > Fix For: 1.0.3 > > > When I started the _omid.sh create-hbase-commit-table > I got the following exception: > {code:java} > Error: A JNI error has occurred, please check your installation and try again > Exception in thread "main" java.lang.NoClassDefFoundError: > com/beust/jcommander/ParameterException{code} > -- This message was sent by Atlassian Jira (v8.3.4#803005)
[GitHub] [phoenix-omid] stoty closed pull request #83: OMID-192 fix missing jcommander dependency
stoty closed pull request #83: URL: https://github.com/apache/phoenix-omid/pull/83 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org
[jira] [Resolved] (PHOENIX-6245) Update tephra dependency version to 0.16.0
[ https://issues.apache.org/jira/browse/PHOENIX-6245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Istvan Toth resolved PHOENIX-6245. -- Fix Version/s: 4.16.0 5.1.0 Resolution: Fixed Committed to master and 4.x. Thanks for the reviews, Viraj and shahrs87. > Update tephra dependency version to 0.16.0 > -- > > Key: PHOENIX-6245 > URL: https://issues.apache.org/jira/browse/PHOENIX-6245 > Project: Phoenix > Issue Type: Improvement > Components: core >Affects Versions: 5.1.0, 4.16.0 >Reporter: Istvan Toth >Assignee: Istvan Toth >Priority: Major > Fix For: 5.1.0, 4.16.0 > > > For master this is mostly a cleanup, as the SNAPSHOT that we were using > already had basically the same code. > For 4.x this is a meaningful update, with real fixes and enabling Tephra > support for HBase 1.5 and 1.6. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-6211) Paged scan filters
[ https://issues.apache.org/jira/browse/PHOENIX-6211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kadir OZDEMIR updated PHOENIX-6211: --- Attachment: PHOENIX-6211.master.007.patch > Paged scan filters > -- > > Key: PHOENIX-6211 > URL: https://issues.apache.org/jira/browse/PHOENIX-6211 > Project: Phoenix > Issue Type: Improvement >Affects Versions: 5.0.0, 4.14.3 >Reporter: Kadir OZDEMIR >Assignee: Kadir OZDEMIR >Priority: Major > Fix For: 4.16.0 > > Attachments: PHOENIX-6211.4.x.001.patch, > PHOENIX-6211.master.001.patch, PHOENIX-6211.master.002.patch, > PHOENIX-6211.master.003.patch, PHOENIX-6211.master.004.patch, > PHOENIX-6211.master.005.patch, PHOENIX-6211.master.006.patch, > PHOENIX-6211.master.007.patch > > > Phoenix performs two main operations on the server side: aggregation and > filtering. However, currently there is no internal Phoenix paging capability, > and thus server side operations can take long enough to lead to HBase client > timeouts. PHOENIX-5998 and PHOENIX-6207 are for providing the paging > capability for ungrouped and grouped aggregate operations. This improvement > Jira is for adding the paging capability for scan filters. -- This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Updated] (PHOENIX-6211) Paged scan filters
[ https://issues.apache.org/jira/browse/PHOENIX-6211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kadir OZDEMIR updated PHOENIX-6211: --- Attachment: PHOENIX-6211.4.x.001.patch > Paged scan filters > -- > > Key: PHOENIX-6211 > URL: https://issues.apache.org/jira/browse/PHOENIX-6211 > Project: Phoenix > Issue Type: Improvement >Affects Versions: 5.0.0, 4.14.3 >Reporter: Kadir OZDEMIR >Assignee: Kadir OZDEMIR >Priority: Major > Fix For: 4.16.0 > > Attachments: PHOENIX-6211.4.x.001.patch, > PHOENIX-6211.master.001.patch, PHOENIX-6211.master.002.patch, > PHOENIX-6211.master.003.patch, PHOENIX-6211.master.004.patch, > PHOENIX-6211.master.005.patch, PHOENIX-6211.master.006.patch > > > Phoenix performs two main operations on the server side: aggregation and > filtering. However, currently there is no internal Phoenix paging capability, > and thus server side operations can take long enough to lead to HBase client > timeouts. PHOENIX-5998 and PHOENIX-6207 are for providing the paging > capability for ungrouped and grouped aggregate operations. This improvement > Jira is for adding the paging capability for scan filters. -- This message was sent by Atlassian Jira (v8.3.4#803005)