Apache-Phoenix | 4.16 | HBase 1.4 | Build #4 FAILURE
4.16 branch HBase 1.4 build #4 status FAILURE Build #4 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/4.16/4/
Apache-Phoenix | 4.16 | HBase 1.6 | Build #4 SUCCESS
4.16 branch HBase 1.6 build #4 status SUCCESS Build #4 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/4.16/4/
Apache-Phoenix | 4.16 | HBase 1.3 | Build #4 FAILURE
4.16 branch HBase 1.3 build #4 status FAILURE Build #4 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/4.16/4/
Apache-Phoenix | master | HBase 2.2 | Build #191 SUCCESS
master branch HBase 2.2 build #191 status SUCCESS Build #191 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/master/191/
Apache-Phoenix | 4.16 | HBase 1.4 | Build #3 SUCCESS
4.16 branch HBase 1.4 build #3 status SUCCESS Build #3 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/4.16/3/
Apache-Phoenix | 4.x | HBase 1.3 | Build #198 SUCCESS
4.x branch HBase 1.3 build #198 status SUCCESS Build #198 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/4.x/198/
Apache-Phoenix | 4.16 | HBase 1.6 | Build #3 SUCCESS
4.16 branch HBase 1.6 build #3 status SUCCESS Build #3 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/4.16/3/
Apache-Phoenix | 4.x | HBase 1.4 | Build #198 SUCCESS
4.x branch HBase 1.4 build #198 status SUCCESS Build #198 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/4.x/198/
Apache-Phoenix | 4.16 | HBase 1.3 | Build #3 FAILURE
4.16 branch HBase 1.3 build #3 status FAILURE Build #3 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/4.16/3/
Apache-Phoenix | 4.x | HBase 1.6 | Build #198 SUCCESS
4.x branch HBase 1.6 build #198 status SUCCESS Build #198 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/4.x/198/
Apache-Phoenix | master | HBase 2.1 | Build #191 SUCCESS
master branch HBase 2.1 build #191 status SUCCESS Build #191 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/master/191/
Apache-Phoenix | master | HBase 2.3 | Build #191 FAILURE
master branch HBase 2.3 build #191 status FAILURE Build #191 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/master/191/
Apache-Phoenix | master | HBase 2.4 | Build #191 FAILURE
master branch HBase 2.4 build #191 status FAILURE Build #191 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/master/191/
[phoenix] branch master updated (edbd6e8 -> 9c346ed)
This is an automated email from the ASF dual-hosted git repository.

skadam pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git.

    from edbd6e8  PHOENIX-6305 Throttling decision does not take offheap memstore size into account
     add 9c346ed  PHOENIX-6148: [SchemaExtractionTool] DDL parsing exception in Phoenix in view name (Addendum)

No new revisions were added by this update.

Summary of changes:
 .../java/org/apache/phoenix/util/SchemaUtil.java   | 38 --
 .../phoenix/schema/SchemaExtractionToolIT.java     | 30 ++---
 2 files changed, 61 insertions(+), 7 deletions(-)
Apache-Phoenix | master | HBase 2.2 | Build #190 SUCCESS
master branch HBase 2.2 build #190 status SUCCESS Build #190 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/master/190/
[phoenix] branch 4.16 updated: PHOENIX-6148: [SchemaExtractionTool] DDL parsing exception in Phoenix in view name (Addendum)
This is an automated email from the ASF dual-hosted git repository.

skadam pushed a commit to branch 4.16
in repository https://gitbox.apache.org/repos/asf/phoenix.git

The following commit(s) were added to refs/heads/4.16 by this push:
     new e493710  PHOENIX-6148: [SchemaExtractionTool] DDL parsing exception in Phoenix in view name (Addendum)

e493710 is described below

commit e493710aaa3be4ae3cbd03561670e1e99cc30964
Author: Swaroopa Kadam
AuthorDate: Fri Jan 8 13:10:53 2021 -0800

    PHOENIX-6148: [SchemaExtractionTool] DDL parsing exception in Phoenix in view name (Addendum)
---
 .../java/org/apache/phoenix/util/SchemaUtil.java   | 38 --
 .../phoenix/schema/SchemaExtractionToolIT.java     | 30 ++---
 2 files changed, 61 insertions(+), 7 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/util/SchemaUtil.java b/phoenix-core/src/main/java/org/apache/phoenix/util/SchemaUtil.java
index c453dd1..6abc2a2 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/SchemaUtil.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/SchemaUtil.java
@@ -1232,11 +1232,45 @@ public class SchemaUtil {
         return columnParseNode.getName();
     }

+    /**
+     * This function is needed so that SchemaExtractionTool returns a valid DDL with correct
+     * table/schema name that can be parsed
+     *
+     * @param pSchemaName
+     * @param pTableName
+     * @return quoted string if schema or table name has non-alphabetic characters in it.
+     */
     public static String getPTableFullNameWithQuotes(String pSchemaName, String pTableName) {
         String pTableFullName = getQualifiedTableName(pSchemaName, pTableName);
-        if(!(Character.isAlphabetic(pTableName.charAt(0)))) {
-            pTableFullName = pSchemaName+".\""+pTableName+"\"";
+        boolean tableNameNeedsQuotes = isQuotesNeeded(pTableName);
+        boolean schemaNameNeedsQuotes = isQuotesNeeded(pSchemaName);
+
+        if(schemaNameNeedsQuotes) {
+            pSchemaName= "\""+pSchemaName+"\"";
         }
+        if(tableNameNeedsQuotes) {
+            pTableName = "\""+pTableName+"\"";
+        }
+        if(tableNameNeedsQuotes || schemaNameNeedsQuotes) {
+            pTableFullName = pSchemaName + "." + pTableName;
+        }
+
         return pTableFullName;
     }
+
+    private static boolean isQuotesNeeded(String name) {
+        // first char numeric or non-underscore
+        if(!Character.isAlphabetic(name.charAt(0)) && name.charAt(0)!='_') {
+            return true;
+        }
+        // for all other chars
+        // ex. name like z@@ will need quotes whereas t0001 will not need quotes
+        for (int i=1; i < name.length(); i++) {
+            if(!Character.isLetterOrDigit(name.charAt(i)) && name.charAt(i)!='_') {
+                return true;
+            }
+        }
+        return false;
+    }
diff --git a/.../phoenix/schema/SchemaExtractionToolIT.java b/.../phoenix/schema/SchemaExtractionToolIT.java
@@ ... @@
+        List<String> queries = new ArrayList<String>(){};
+        queries.add(createTableStmt);
+        queries.add(createView);
+        String result = runSchemaExtractionTool(schemaName, viewName, null, queries);
+        Assert.assertEquals(createView.toUpperCase(), result.toUpperCase());
+
+    }
+
+    @Test
+    public void testCreateViewStatement_customName() throws Exception {
+        String tableName = generateUniqueName();
+        String schemaName = generateUniqueName();
+        String viewName = generateUniqueName()+"@@";
+        String properties = "TTL=2592000,IMMUTABLE_ROWS=true,DISABLE_WAL=true";
+
+        String pTableFullName = SchemaUtil.getQualifiedTableName(schemaName, tableName);
+        String createTableStmt = "CREATE TABLE "+pTableFullName + "(k BIGINT NOT NULL PRIMARY KEY, "
+                + "v1 VARCHAR, v2 VARCHAR)"
+                + properties;
+        String viewFullName = SchemaUtil.getPTableFullNameWithQuotes(schemaName, viewName);
+
+        String createView = "CREATE VIEW "+viewFullName + "(id1 BIGINT, id2 BIGINT NOT NULL, "
+                + "id3 VARCHAR NOT NULL CONSTRAINT PKVIEW PRIMARY KEY (id2, id3 DESC)) "
+                + "AS SELECT * FROM "+pTableFullName;
         List<String> queries = new ArrayList<String>(){};
         queries.add(createTableStmt);
         queries.add(createView);
-        queries.add(createView1);
         String result = runSchemaExtractionTool(schemaName, viewName, null, queries);
         Assert.assertEquals(createView.toUpperCase(), result.toUpperCase());
@@ -153,7 +173,7 @@ public class SchemaExtractionToolIT extends ParallelStatsEnabledIT {
         String pTableFullName = SchemaUtil.getQualifiedTableName(schemaName, tableName);
         String createTableStmt = "CREATE TABLE "+pTableFullName + "(k BIGINT NOT NULL PRIMARY KEY, "
                 + "v1 VARCHAR, v2 VARCHAR)";
-        String viewFullName = SchemaUtil.getQualifiedTableName(schemaName, viewName);
+        String viewFullName = SchemaUtil.getPTableFullNameWithQuotes(schemaName, viewName);
         String createViewStmt = "CREATE VIEW "+viewFullName + "(id1 BIGINT, id2 BIGINT NOT NULL, "
                 + "id3 VARCHAR NOT NULL CONSTRAINT PKVIEW PRIMARY KEY (id2, id3 DESC)) "
                 + "AS SELECT * FROM
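The quoting rule that the commit above introduces can be illustrated with a minimal standalone sketch. This is not Phoenix's actual SchemaUtil code; the class and method names (`QuoteSketch`, `needsQuotes`, `fullName`) are illustrative only, and the rule assumed is the one the diff's comments describe: an identifier needs double quotes unless it starts with a letter or underscore and contains only letters, digits, and underscores afterwards.

```java
// Minimal standalone sketch of the identifier-quoting rule described in the
// commit above. Illustrative only; not the actual Phoenix SchemaUtil code.
public class QuoteSketch {
    // A name needs quoting when its first char is neither a letter nor an
    // underscore, or when any later char is not a letter, digit, or underscore.
    // Per the diff's comment: z@@ needs quotes, t0001 does not.
    static boolean needsQuotes(String name) {
        if (!Character.isAlphabetic(name.charAt(0)) && name.charAt(0) != '_') {
            return true;
        }
        for (int i = 1; i < name.length(); i++) {
            char c = name.charAt(i);
            if (!Character.isLetterOrDigit(c) && c != '_') {
                return true;
            }
        }
        return false;
    }

    // Quote schema and table independently, then join with a dot, mirroring
    // the getPTableFullNameWithQuotes behavior sketched in the diff.
    static String fullName(String schema, String table) {
        String s = needsQuotes(schema) ? "\"" + schema + "\"" : schema;
        String t = needsQuotes(table) ? "\"" + table + "\"" : table;
        return s + "." + t;
    }

    public static void main(String[] args) {
        System.out.println(fullName("S1", "T0001"));   // S1.T0001
        System.out.println(fullName("S1", "VIEW@@"));  // S1."VIEW@@"
    }
}
```

Quoting each part independently is the point of the addendum: the earlier code only quoted the table name, so a schema name such as `1SCHEMA` still produced unparseable DDL.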
[phoenix] branch 4.x updated: PHOENIX-6148: [SchemaExtractionTool] DDL parsing exception in Phoenix in view name (Addendum)
This is an automated email from the ASF dual-hosted git repository.

skadam pushed a commit to branch 4.x
in repository https://gitbox.apache.org/repos/asf/phoenix.git

The following commit(s) were added to refs/heads/4.x by this push:
     new 2a530da  PHOENIX-6148: [SchemaExtractionTool] DDL parsing exception in Phoenix in view name (Addendum)

2a530da is described below

commit 2a530da17693b6a06b7d21c2566439ffc440822f
Author: Swaroopa Kadam
AuthorDate: Fri Jan 8 13:10:53 2021 -0800

    PHOENIX-6148: [SchemaExtractionTool] DDL parsing exception in Phoenix in view name (Addendum)
---
 .../java/org/apache/phoenix/util/SchemaUtil.java   | 38 --
 .../phoenix/schema/SchemaExtractionToolIT.java     | 30 ++---
 2 files changed, 61 insertions(+), 7 deletions(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/util/SchemaUtil.java b/phoenix-core/src/main/java/org/apache/phoenix/util/SchemaUtil.java
index c453dd1..6abc2a2 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/util/SchemaUtil.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/util/SchemaUtil.java
@@ -1232,11 +1232,45 @@ public class SchemaUtil {
         return columnParseNode.getName();
     }

+    /**
+     * This function is needed so that SchemaExtractionTool returns a valid DDL with correct
+     * table/schema name that can be parsed
+     *
+     * @param pSchemaName
+     * @param pTableName
+     * @return quoted string if schema or table name has non-alphabetic characters in it.
+     */
     public static String getPTableFullNameWithQuotes(String pSchemaName, String pTableName) {
         String pTableFullName = getQualifiedTableName(pSchemaName, pTableName);
-        if(!(Character.isAlphabetic(pTableName.charAt(0)))) {
-            pTableFullName = pSchemaName+".\""+pTableName+"\"";
+        boolean tableNameNeedsQuotes = isQuotesNeeded(pTableName);
+        boolean schemaNameNeedsQuotes = isQuotesNeeded(pSchemaName);
+
+        if(schemaNameNeedsQuotes) {
+            pSchemaName= "\""+pSchemaName+"\"";
         }
+        if(tableNameNeedsQuotes) {
+            pTableName = "\""+pTableName+"\"";
+        }
+        if(tableNameNeedsQuotes || schemaNameNeedsQuotes) {
+            pTableFullName = pSchemaName + "." + pTableName;
+        }
+
         return pTableFullName;
     }
+
+    private static boolean isQuotesNeeded(String name) {
+        // first char numeric or non-underscore
+        if(!Character.isAlphabetic(name.charAt(0)) && name.charAt(0)!='_') {
+            return true;
+        }
+        // for all other chars
+        // ex. name like z@@ will need quotes whereas t0001 will not need quotes
+        for (int i=1; i < name.length(); i++) {
+            if(!Character.isLetterOrDigit(name.charAt(i)) && name.charAt(i)!='_') {
+                return true;
+            }
+        }
+        return false;
+    }
diff --git a/.../phoenix/schema/SchemaExtractionToolIT.java b/.../phoenix/schema/SchemaExtractionToolIT.java
@@ ... @@
+        List<String> queries = new ArrayList<String>(){};
+        queries.add(createTableStmt);
+        queries.add(createView);
+        String result = runSchemaExtractionTool(schemaName, viewName, null, queries);
+        Assert.assertEquals(createView.toUpperCase(), result.toUpperCase());
+
+    }
+
+    @Test
+    public void testCreateViewStatement_customName() throws Exception {
+        String tableName = generateUniqueName();
+        String schemaName = generateUniqueName();
+        String viewName = generateUniqueName()+"@@";
+        String properties = "TTL=2592000,IMMUTABLE_ROWS=true,DISABLE_WAL=true";
+
+        String pTableFullName = SchemaUtil.getQualifiedTableName(schemaName, tableName);
+        String createTableStmt = "CREATE TABLE "+pTableFullName + "(k BIGINT NOT NULL PRIMARY KEY, "
+                + "v1 VARCHAR, v2 VARCHAR)"
+                + properties;
+        String viewFullName = SchemaUtil.getPTableFullNameWithQuotes(schemaName, viewName);
+
+        String createView = "CREATE VIEW "+viewFullName + "(id1 BIGINT, id2 BIGINT NOT NULL, "
+                + "id3 VARCHAR NOT NULL CONSTRAINT PKVIEW PRIMARY KEY (id2, id3 DESC)) "
+                + "AS SELECT * FROM "+pTableFullName;
         List<String> queries = new ArrayList<String>(){};
         queries.add(createTableStmt);
         queries.add(createView);
-        queries.add(createView1);
         String result = runSchemaExtractionTool(schemaName, viewName, null, queries);
         Assert.assertEquals(createView.toUpperCase(), result.toUpperCase());
@@ -153,7 +173,7 @@ public class SchemaExtractionToolIT extends ParallelStatsEnabledIT {
         String pTableFullName = SchemaUtil.getQualifiedTableName(schemaName, tableName);
         String createTableStmt = "CREATE TABLE "+pTableFullName + "(k BIGINT NOT NULL PRIMARY KEY, "
                 + "v1 VARCHAR, v2 VARCHAR)";
-        String viewFullName = SchemaUtil.getQualifiedTableName(schemaName, viewName);
+        String viewFullName = SchemaUtil.getPTableFullNameWithQuotes(schemaName, viewName);
         String createViewStmt = "CREATE VIEW "+viewFullName + "(id1 BIGINT, id2 BIGINT NOT NULL, "
                 + "id3 VARCHAR NOT NULL CONSTRAINT PKVIEW PRIMARY KEY (id2, id3 DESC)) "
                 + "AS SELECT * FROM
Apache-Phoenix | master | HBase 2.1 | Build #190 SUCCESS
master branch HBase 2.1 build #190 status SUCCESS Build #190 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/master/190/
Apache-Phoenix | master | HBase 2.3 | Build #190 FAILURE
master branch HBase 2.3 build #190 status FAILURE Build #190 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/master/190/
Apache-Phoenix | master | HBase 2.4 | Build #190 FAILURE
master branch HBase 2.4 build #190 status FAILURE Build #190 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/master/190/
[phoenix] branch master updated: PHOENIX-6305 Throttling decision does not take offheap memstore size into account
This is an automated email from the ASF dual-hosted git repository.

stoty pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/phoenix.git

The following commit(s) were added to refs/heads/master by this push:
     new edbd6e8  PHOENIX-6305 Throttling decision does not take offheap memstore size into account

edbd6e8 is described below

commit edbd6e8a78c098c3a6df73496272d2dd4858feec
Author: Istvan Toth
AuthorDate: Thu Jan 7 12:56:07 2021 +0100

    PHOENIX-6305 Throttling decision does not take offheap memstore size into account

    Co-authored-by: Ankit Singhal
---
 .../apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
index 9bca44e..d96dfe3 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UngroupedAggregateRegionObserver.java
@@ -260,7 +260,9 @@ public class UngroupedAggregateRegionObserver extends BaseScannerRegionObserver
         Mutation[] mutationArray = new Mutation[mutations.size()];
         // When memstore size reaches blockingMemstoreSize we are waiting 3 seconds for the
         // flush happen which decrease the memstore size and then writes allowed on the region.
-        for (int i = 0; blockingMemstoreSize > 0 && region.getMemStoreHeapSize() > blockingMemstoreSize && i < 30; i++) {
+        for (int i = 0; blockingMemstoreSize > 0
+                && region.getMemStoreHeapSize() + region.getMemStoreOffHeapSize() > blockingMemstoreSize
+                && i < 30; i++) {
             try {
                 checkForRegionClosingOrSplitting();
                 Thread.sleep(100);
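The fix above changes the throttling condition to compare the sum of on-heap and off-heap memstore bytes against the blocking threshold; with the old condition, a region holding most of its memstore off-heap was never throttled. The bounded retry loop can be sketched standalone as follows. This is illustrative only: `ThrottleSketch` and `waitsBeforeWrite` are made-up names, the sizes are plain `long` parameters rather than HBase's `Region` API, and the sketch assumes the memstore sizes do not change while waiting.

```java
// Minimal sketch of the bounded throttling loop from the commit above.
// Illustrative only; not the HBase/Phoenix API.
public class ThrottleSketch {
    // Returns how many waits the retry loop performs, capped at 30 iterations
    // (the real loop sleeps 100 ms per iteration, i.e. roughly 3 seconds total).
    // The post-fix condition sums on-heap and off-heap memstore bytes.
    static int waitsBeforeWrite(long heapBytes, long offHeapBytes, long blockingMemstoreSize) {
        int i = 0;
        while (blockingMemstoreSize > 0
                && heapBytes + offHeapBytes > blockingMemstoreSize
                && i < 30) {
            i++; // real code also calls checkForRegionClosingOrSplitting() and sleeps 100 ms
        }
        return i;
    }

    public static void main(String[] args) {
        // Pre-fix logic compared only heapBytes: 40 < 100 would skip throttling
        // even though the region really holds 40 + 90 = 130 bytes of memstore.
        System.out.println(waitsBeforeWrite(40, 90, 100)); // 30 (waits out the full cap)
        System.out.println(waitsBeforeWrite(40, 50, 100)); // 0  (under the limit)
        System.out.println(waitsBeforeWrite(1000, 0, 0));  // 0  (throttling disabled)
    }
}
```

The `blockingMemstoreSize > 0` guard preserves the existing "throttling disabled" behavior, and the iteration cap keeps a stuck flush from blocking writes indefinitely.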
Apache-Phoenix | 4.16 | HBase 1.4 | Build #2 SUCCESS
4.16 branch HBase 1.4 build #2 status SUCCESS Build #2 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/4.16/2/
Apache-Phoenix | 4.16 | HBase 1.3 | Build #2 FAILURE
4.16 branch HBase 1.3 build #2 status FAILURE Build #2 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/4.16/2/
Apache-Phoenix | 4.16 | HBase 1.6 | Build #2 FAILURE
4.16 branch HBase 1.6 build #2 status FAILURE Build #2 https://ci-hadoop.apache.org/job/Phoenix/job/Phoenix-mulitbranch/job/4.16/2/