[jira] [Commented] (CALCITE-3050) Integrate SqlDialect and SqlParser.Config
[ https://issues.apache.org/jira/browse/CALCITE-3050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16834338#comment-16834338 ]

Danny Chan commented on CALCITE-3050:
-------------------------------------

I'm a little confused about why we need to configure SqlParser.Config from SqlDialect, because they have no config items in common, although they both can be seen as describing a dialect.

> Integrate SqlDialect and SqlParser.Config
> -----------------------------------------
>
> Key: CALCITE-3050
> URL: https://issues.apache.org/jira/browse/CALCITE-3050
> Project: Calcite
> Issue Type: Bug
> Reporter: Julian Hyde
> Assignee: Danny Chan
> Priority: Major
>
> {{SqlDialect}} is used by the JDBC adapter to generate SQL in the target
> dialect of a data source. {{SqlParser.Config}} is used to set what the parser
> should allow for SQL statements sent to Calcite. But they are both a
> representation of "dialect". And they come together when we want to use a
> Babel parser to understand SQL statements that are meant for a data source.
> So it makes sense to integrate them, somehow. We could add a method
> {code}void SqlParser.ConfigBuilder.setFrom(SqlDialect dialect){code} or do it
> from the other end, {code}SqlDialect.configureParser(SqlParser.ConfigBuilder
> configBuilder){code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
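The two alternatives Julian proposes differ mainly in which class owns the copying logic. A toy sketch of the idea, using hypothetical stand-in types rather than Calcite's real `SqlDialect` and `SqlParser.ConfigBuilder` (the `BacktickDialect` class and the settings map below are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class DialectParserConfigSketch {
  /** Stand-in for SqlParser.ConfigBuilder: a mutable bag of parser settings. */
  static class ConfigBuilder {
    final Map<String, Object> settings = new HashMap<>();
    ConfigBuilder setQuoting(String q) { settings.put("quoting", q); return this; }
    ConfigBuilder setCaseSensitive(boolean b) { settings.put("caseSensitive", b); return this; }
  }

  /** Stand-in for SqlDialect. Option 2: the dialect pushes its settings into the builder. */
  interface Dialect {
    void configureParser(ConfigBuilder builder);
  }

  /** A hypothetical dialect that quotes identifiers with back-ticks. */
  static class BacktickDialect implements Dialect {
    @Override public void configureParser(ConfigBuilder builder) {
      builder.setQuoting("BACK_TICK").setCaseSensitive(false);
    }
  }

  /** Option 1, from the other end: the builder pulls settings from the dialect.
   *  Delegating shows the two options can share one implementation. */
  static ConfigBuilder setFrom(ConfigBuilder builder, Dialect dialect) {
    dialect.configureParser(builder);
    return builder;
  }

  public static void main(String[] args) {
    ConfigBuilder b = setFrom(new ConfigBuilder(), new BacktickDialect());
    System.out.println(b.settings);
  }
}
```

Whichever signature is chosen, the dialect-owned variant (option 2) keeps dialect knowledge in one place, which is why delegation from `setFrom` to `configureParser` is sketched here.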
[jira] [Assigned] (CALCITE-3050) Integrate SqlDialect and SqlParser.Config
[ https://issues.apache.org/jira/browse/CALCITE-3050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Danny Chan reassigned CALCITE-3050:
-----------------------------------

Assignee: Danny Chan
[jira] [Commented] (CALCITE-3003) AssertionError when GROUP BY nested field
[ https://issues.apache.org/jira/browse/CALCITE-3003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16834207#comment-16834207 ]

Will Yu commented on CALCITE-3003:
----------------------------------

Thanks [~Chunwei Lei] for the responsive and detailed reviews.

> AssertionError when GROUP BY nested field
> -----------------------------------------
>
> Key: CALCITE-3003
> URL: https://issues.apache.org/jira/browse/CALCITE-3003
> Project: Calcite
> Issue Type: Improvement
> Components: core
> Affects Versions: 1.19.0
> Reporter: Will Yu
> Assignee: Will Yu
> Priority: Minor
> Labels: pull-request-available
> Fix For: 1.20.0
> Time Spent: 2h 20m
> Remaining Estimate: 0h
>
> Calcite throws an AssertionError when GROUP BY references a nested field:
> {code:java}
> @Test
> public void test() {
>   final String sql = "select coord.x, avg(coord.y) from customer.contact_peek GROUP BY coord.x";
>   sql(sql).ok();
> }{code}
> The stacktrace is
> {code:java}
> java.lang.AssertionError
> at org.apache.calcite.sql.validate.SqlValidatorUtil.analyzeGroupExpr(SqlValidatorUtil.java:839)
> at org.apache.calcite.sql.validate.SqlValidatorUtil.convertGroupSet(SqlValidatorUtil.java:791)
> at org.apache.calcite.sql.validate.SqlValidatorUtil.analyzeGroupItem(SqlValidatorUtil.java:748)
> at org.apache.calcite.sql.validate.AggregatingSelectScope.resolve(AggregatingSelectScope.java:104)
> at org.apache.calcite.sql.validate.AggregatingSelectScope.lambda$new$0(AggregatingSelectScope.java:65)
> at com.google.common.base.Suppliers$MemoizingSupplier.get(Suppliers.java:131)
> at org.apache.calcite.sql.validate.AggregatingSelectScope.nullifyType(AggregatingSelectScope.java:178)
> at org.apache.calcite.sql.validate.SqlValidatorImpl.deriveTypeImpl(SqlValidatorImpl.java:1680)
> at org.apache.calcite.sql.validate.SqlValidatorImpl.deriveType(SqlValidatorImpl.java:1664)
> at org.apache.calcite.sql.validate.SqlValidatorImpl.expandSelectItem(SqlValidatorImpl.java:467)
> at org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelectList(SqlValidatorImpl.java:4112)
> at org.apache.calcite.sql.validate.SqlValidatorImpl.validateSelect(SqlValidatorImpl.java:3375)
> at org.apache.calcite.sql.validate.SelectNamespace.validateImpl(SelectNamespace.java:60)
> at org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84)
> at org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:996)
> at org.apache.calcite.sql.validate.SqlValidatorImpl.validateQuery(SqlValidatorImpl.java:956)
> at org.apache.calcite.sql.SqlSelect.validate(SqlSelect.java:216)
> at org.apache.calcite.sql.validate.SqlValidatorImpl.validateScopedExpression(SqlValidatorImpl.java:931)
> at org.apache.calcite.sql.validate.SqlValidatorImpl.validate(SqlValidatorImpl.java:638)
> at org.apache.calcite.test.SqlToRelTestBase$TesterImpl.convertSqlToRel(SqlToRelTestBase.java:608)
> at org.apache.calcite.test.SqlToRelTestBase$TesterImpl.assertConvertsTo(SqlToRelTestBase.java:723)
> at org.apache.calcite.test.SqlToRelConverterTest$Sql.convertsTo(SqlToRelConverterTest.java:3301)
> at org.apache.calcite.test.SqlToRelConverterTest$Sql.ok(SqlToRelConverterTest.java:3293)
> at org.apache.calcite.test.SqlToRelConverterTest.test(SqlToRelConverterTest.java:2680){code}
> The root cause is obvious, and the fix is simply to remove the assertion line.
> The question is: given that GROUP BY items should be validated beforehand, can
> we just delete this assertion?
[jira] [Created] (CALCITE-3050) Integrate SqlDialect and SqlParser.Config
Julian Hyde created CALCITE-3050:
---------------------------------

Summary: Integrate SqlDialect and SqlParser.Config
Key: CALCITE-3050
URL: https://issues.apache.org/jira/browse/CALCITE-3050
Project: Calcite
Issue Type: Bug
Reporter: Julian Hyde
[jira] [Resolved] (CALCITE-3049) When simplifying expressions, revisit "IS NULL" if its argument has been simplified
[ https://issues.apache.org/jira/browse/CALCITE-3049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Julian Hyde resolved CALCITE-3049.
----------------------------------

Resolution: Fixed
Fix Version/s: 1.20.0

Fixed in https://github.com/apache/calcite/commit/247c7d4f76a3d7d862ae6f4148cc8e6556efa497, a collaboration between me and [~danny0405].

> When simplifying expressions, revisit "IS NULL" if its argument has been
> simplified
> ------------------------------------------------------------------------
>
> Key: CALCITE-3049
> URL: https://issues.apache.org/jira/browse/CALCITE-3049
> Project: Calcite
> Issue Type: Bug
> Components: core
> Reporter: Julian Hyde
> Assignee: Danny Chan
> Priority: Major
> Labels: pull-request-available
> Fix For: 1.20.0
> Time Spent: 20m
> Remaining Estimate: 0h
>
> When simplifying expressions, revisit "IS NULL" if its argument has been
> simplified. For example, we currently simplify {code}(CASE WHEN FALSE THEN
> +(v0) ELSE -1 END) IS UNKNOWN{code} to {code}-1 IS UNKNOWN{code} but we
> should further simplify that to {{FALSE}}.
> I have a preliminary [dev branch|https://github.com/julianhyde/calcite/tree/3049-simplify-is-null],
> but it needs a little more debugging. I'd be grateful if someone could finish it.
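The behavior described in the issue amounts to re-running simplification until a fixed point, so that `IS UNKNOWN` gets another look once its argument has collapsed to a literal. A toy model of that loop over a hypothetical three-node mini-AST (this is an illustration of the idea only, not Calcite's RexSimplify):

```java
public class RevisitIsNullSketch {
  interface Expr {}
  /** A literal; a null value stands for SQL NULL, 0 for FALSE, 1 for TRUE. */
  record Literal(Integer value) implements Expr {}
  /** Stands for: CASE WHEN FALSE THEN thenExpr ELSE elseExpr END. */
  record CaseWhenFalse(Expr thenExpr, Expr elseExpr) implements Expr {}
  record IsUnknown(Expr arg) implements Expr {}

  /** One rewrite pass over the tree. */
  static Expr simplifyOnce(Expr e) {
    if (e instanceof CaseWhenFalse c) {
      return c.elseExpr();                       // condition is FALSE: keep the ELSE branch
    }
    if (e instanceof IsUnknown u) {
      Expr arg = simplifyOnce(u.arg());
      if (arg instanceof Literal lit) {          // literal argument: decide now
        return new Literal(lit.value() == null ? 1 : 0);
      }
      return new IsUnknown(arg);                 // argument changed: rebuild and retry later
    }
    return e;
  }

  /** Repeat until nothing changes -- the "revisit" part of the fix. */
  static Expr simplify(Expr e) {
    Expr prev;
    do { prev = e; e = simplifyOnce(e); } while (!e.equals(prev));
    return e;
  }

  public static void main(String[] args) {
    // (CASE WHEN FALSE THEN <then> ELSE -1 END) IS UNKNOWN  -->  FALSE
    Expr input = new IsUnknown(new CaseWhenFalse(new Literal(0), new Literal(-1)));
    System.out.println(simplify(input));
  }
}
```

A single non-revisiting pass would stop at `-1 IS UNKNOWN`; the fixed-point loop is what lets the outer predicate collapse to `FALSE` after its argument simplifies.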
[jira] [Commented] (CALCITE-3020) throws AssertionError:Type mismatch in VolcanoPlanner
[ https://issues.apache.org/jira/browse/CALCITE-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16833777#comment-16833777 ]

Bohdan Kazydub commented on CALCITE-3020:
-----------------------------------------

[~vlsi], Calcite has DynamicRecordType, which is used in the validation phase; after validation, the list of columns is transformed into a 'plain' RelDataType, which may or may not contain a dynamic star. For example, if SALES.NATION is a dynamic table, then for
{code}
SELECT n_nationkey FROM SALES.NATION
{code}
there will be no DYNAMIC STAR.

> throws AssertionError:Type mismatch in VolcanoPlanner
> -----------------------------------------------------
>
> Key: CALCITE-3020
> URL: https://issues.apache.org/jira/browse/CALCITE-3020
> Project: Calcite
> Issue Type: Bug
> Components: core
> Affects Versions: 1.19.0
> Reporter: godfrey he
> Assignee: Danny Chan
> Priority: Major
>
> After [CALCITE-2454|https://issues.apache.org/jira/browse/CALCITE-2454] was
> merged, an "AssertionError: Type mismatch" is thrown in VolcanoPlanner when
> running the following SQL:
> {code:sql}
> WITH t1 AS (SELECT CAST(a as BIGINT) AS a, SUM(b) AS b FROM x GROUP BY CAST(a as BIGINT)),
>      t2 AS (SELECT CAST(a as DOUBLE) AS a, SUM(b) AS b FROM x GROUP BY CAST(a as DOUBLE))
> SELECT t1.*, t2.* FROM t1, t2 WHERE t1.b = t2.b
> {code}
> Caused by: java.lang.AssertionError: Type mismatch:
> left:
>   RecordType(BIGINT a, BIGINT b) NOT NULL
> right:
>   RecordType(DOUBLE a, BIGINT b) NOT NULL
> at org.apache.calcite.util.Litmus$1.fail(Litmus.java:31)
> at org.apache.calcite.plan.RelOptUtil.equal(RelOptUtil.java:1858)
> at org.apache.calcite.plan.volcano.VolcanoPlanner.registerImpl(VolcanoPlanner.java:1705)
> at org.apache.calcite.plan.volcano.VolcanoPlanner.register(VolcanoPlanner.java:850)
> at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:872)
> at org.apache.calcite.plan.volcano.VolcanoPlanner.ensureRegistered(VolcanoPlanner.java:1958)
> at org.apache.calcite.plan.volcano.VolcanoRuleCall.transformTo(VolcanoRuleCall.java:126)
> at org.apache.calcite.plan.RelOptRuleCall.transformTo(RelOptRuleCall.java:234)
> at org.apache.calcite.rel.convert.ConverterRule.onMatch(ConverterRule.java:141)
> at org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:205)
> at org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:637)
[jira] [Commented] (CALCITE-3020) throws AssertionError:Type mismatch in VolcanoPlanner
[ https://issues.apache.org/jira/browse/CALCITE-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16833724#comment-16833724 ]

Vladimir Sitnikov commented on CALCITE-3020:
--------------------------------------------

[~godfreyhe], it would be really nice if you provided a full test case, and not just the SQL.
[jira] [Comment Edited] (CALCITE-3020) throws AssertionError:Type mismatch in VolcanoPlanner
[ https://issues.apache.org/jira/browse/CALCITE-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16833718#comment-16833718 ]

Vladimir Sitnikov edited comment on CALCITE-3020 at 5/6/19 11:29 AM:
---------------------------------------------------------------------

{quote}
scans with dynamic stars only - it can't be determined if the table is dynamic
{quote}

AFAIK the validator is somehow able to tell whether the used columns are valid. That implies one can tell whether a table scan contains a dynamic star or not.

There are org.apache.calcite.rel.type.RelDataTypeField#isDynamicStar and org.apache.calcite.rel.type.DynamicRecordType#isDynamicStarColName methods in the Calcite codebase, so it should be possible to tell whether the star is present or not.

Adding types to all scans would increase the verbosity of the plans, which is sad.
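The field-level check Vladimir alludes to can be sketched with a hypothetical stand-in for `RelDataTypeField` that keeps only the dynamic-star flag (the real Calcite method is `RelDataTypeField#isDynamicStar`; everything else below is invented for illustration):

```java
import java.util.List;

public class DynamicStarCheckSketch {
  /** Stand-in for RelDataTypeField: a column name plus a dynamic-star flag. */
  record Field(String name, boolean isDynamicStar) {}

  /** True if any field of the scanned row type is a dynamic star. */
  static boolean containsDynamicStar(List<Field> rowType) {
    return rowType.stream().anyMatch(Field::isDynamicStar);
  }

  public static void main(String[] args) {
    // A resolved scan whose columns are all concrete: no dynamic star.
    List<Field> plain = List.of(new Field("n_nationkey", false));
    // A scan that still carries the "**" catch-all column.
    List<Field> dynamic = List.of(new Field("**", true), new Field("n_nationkey", false));
    System.out.println(containsDynamicStar(plain));
    System.out.println(containsDynamicStar(dynamic));
  }
}
```

Such a predicate is what would let the digest enhancement be limited to scans that actually contain a dynamic star, rather than applied to every scan.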
[jira] [Commented] (CALCITE-3020) throws AssertionError:Type mismatch in VolcanoPlanner
[ https://issues.apache.org/jira/browse/CALCITE-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16833716#comment-16833716 ]

Vladimir Sitnikov commented on CALCITE-3020:
--------------------------------------------

{quote}
dynamic star is not reliable as it may be missing in some cases
{quote}

What do you mean by that?
[jira] [Commented] (CALCITE-3020) throws AssertionError:Type mismatch in VolcanoPlanner
[ https://issues.apache.org/jira/browse/CALCITE-3020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16833696#comment-16833696 ]

Bohdan Kazydub commented on CALCITE-3020:
-----------------------------------------

[~vlsi], currently the type is included in a Values' digest (except for INT, BOOLEAN, and for TIME, TIMESTAMP, DATE with precision == 0). Regarding your proposal of enhancing the digest only for scans with dynamic stars: it cannot be determined (from the RelOptTable) whether the table is dynamic, and the dynamic star is not reliable, as it may be missing in some cases. Therefore, I think it is better to enhance the digest for all table scans to avoid the issue.
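The fix direction Bohdan describes, including the row type in every table scan's digest, can be illustrated with a toy string digest. The helper methods below are hypothetical, not Calcite's real `RelNode` digest machinery; they only show why adding the row type keeps the planner from merging the two aggregates from the reported query:

```java
public class TypedDigestSketch {
  /** Digest without the row type: nodes that differ only in type look identical. */
  static String digestWithoutType(String relName, String table) {
    return relName + "(table=[" + table + "])";
  }

  /** Digest with the row type appended, as proposed in the comment above. */
  static String digestWithType(String relName, String table, String rowType) {
    return relName + "(table=[" + table + "], rowType=[" + rowType + "])";
  }

  public static void main(String[] args) {
    // t1 casts a to BIGINT, t2 casts a to DOUBLE; without types the digests collide,
    // so the planner registers them as the same node -- the reported type mismatch.
    String d1 = digestWithoutType("Aggregate", "x");
    String d2 = digestWithoutType("Aggregate", "x");
    System.out.println(d1.equals(d2));

    // With the row type in the digest, the two plans stay distinct.
    String t1 = digestWithType("Aggregate", "x", "RecordType(BIGINT a, BIGINT b)");
    String t2 = digestWithType("Aggregate", "x", "RecordType(DOUBLE a, BIGINT b)");
    System.out.println(t1.equals(t2));
  }
}
```

The trade-off Vladimir raises is visible here too: the typed digest is strictly longer, which is the plan-verbosity cost of applying it to all scans.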
[jira] [Comment Edited] (CALCITE-2741) Add operator table with Hive-specific built-in functions
[ https://issues.apache.org/jira/browse/CALCITE-2741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16833649#comment-16833649 ]

Lai Zhou edited comment on CALCITE-2741 at 5/6/19 9:49 AM:
-----------------------------------------------------------

[~zabetak], I also think it is not exactly an adapter. My initial goal was to build a real-time, high-performance, in-memory SQL engine on top of Calcite that supports Hive SQL dialects. I tried the JDBC interface first, but I encountered some issues:

1. Custom config issue: for every JDBC connection, we need to put the data of the current session into the schema, which means the current schema is bound to the current session. So a static SchemaFactory cannot handle this; we need to introduce DDL functions like those in the calcite-server module. The SqlDdlNodes in the calcite-server module populate the table through the FrameworkConfig API. When we execute a statement like
{code:java}
create table t1 as select * from t2 where t2.id > 100
{code}
the populate method is invoked; see [SqlDdlNodes.java#L221|https://github.com/apache/calcite/blob/0d504d20d47542e8d461982512ae0e7a94e4d6cb/server/src/main/java/org/apache/calcite/sql/ddl/SqlDdlNodes.java#L221]. We need to customize the FrameworkConfig here, including the OperatorTable, SqlConformance, and other custom configs. By the way, the FrameworkConfig should be built with all the configs from the current CalcitePrepare.Context rather than only the rootSchema; that was a bug. And the config options of CalcitePrepare.Context are just a subset of FrameworkConfig; most of the time we need to use the FrameworkConfig API directly to build a new SQL engine. When we execute a query like
{code:java}
select * from t2 where t2.id > 100
{code}
CalcitePrepareImpl handles the flow and does a similar thing, but some configs are hard-coded, such as the RexExecutor and Programs. When implementing the EnumerableRel, the RelImplementor might also need to be customized; see the example [HiveEnumerableRelImplementor.java|https://github.com/51nb/marble/blob/master/marble-table-hive/src/main/java/org/apache/calcite/adapter/hive/HiveEnumerableRelImplementor.java]. The JDBC interface currently provides no way to customize these configs, so we proposed a new Table API, inspired by Apache Flink, to simplify the use of Calcite when building a new SQL engine.

2. Cache issue: it is not easy to cache the whole SQL plan when using the JDBC interface to handle a query, due to its multi-phase processing flow, but it is very easy to do with the Table API; see [TableEnv.java#L412|https://github.com/51nb/marble/blob/master/marble-table/src/main/java/org/apache/calcite/table/TableEnv.java#L412].

Summary: the proposed Table API makes it easy to configure the SQL engine and to cache the whole SQL plan to improve query performance. It fits scenarios where the data sources are deterministic and already in memory, and no computation needs to be pushed down; -the SQL queries are deterministic, without dynamic parameters, so the whole-plan cache will be helpful (we can also use placeholders in the execution plan to cache dynamic queries).-
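The whole-plan cache from point 2 can be sketched as a memo keyed by the SQL text. The `Plan` type and the counter below are hypothetical stand-ins; the real implementation lives in the linked TableEnv, and a production cache would also need invalidation and size bounds:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

public class PlanCacheSketch {
  /** Stand-in for a compiled execution plan. */
  record Plan(String sql) {}

  private final ConcurrentMap<String, Plan> cache = new ConcurrentHashMap<>();
  /** Counts how often the expensive compile path actually runs. */
  final AtomicInteger compilations = new AtomicInteger();

  /** Compile once per distinct SQL text; later calls for the same text hit the cache. */
  Plan getPlan(String sql) {
    return cache.computeIfAbsent(sql, s -> {
      compilations.incrementAndGet();   // parse/validate/optimize would happen here
      return new Plan(s);
    });
  }

  public static void main(String[] args) {
    PlanCacheSketch env = new PlanCacheSketch();
    env.getPlan("select * from t2 where t2.id > 100");
    env.getPlan("select * from t2 where t2.id > 100");  // cached: no recompile
    System.out.println(env.compilations.get());
  }
}
```

Keying on the raw SQL text is why the approach fits deterministic queries without dynamic parameters; parameterized queries would need the placeholder trick mentioned in the summary so that equal plan shapes share one cache entry.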
[jira] [Comment Edited] (CALCITE-2741) Add operator table with Hive-specific built-in functions
[ https://issues.apache.org/jira/browse/CALCITE-2741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16833649#comment-16833649 ] Lai Zhou edited comment on CALCITE-2741 at 5/6/19 9:36 AM: --- [~zabetak],I also think it was not exactly an adapter. My initial goal was to build a real-time/high-performance in memory sql engine that supports hive sql dialects on top of Calcite. I had a try to use the JDBC interface first, but I encountered some issues: # custom config issue: For every JDBC connection, we need put the data of current session into the schema, it means that current schema is bound to current session. So the static SchemaFactory can't work out for this, we need introduce the DDL functions like what was in calcite-server module. The SqlDdlNodes in calcite-server module would populate the table through FrameworkConfig API . When we execute a sql like {code:java} create table t1 as select * from t2 where t2.id>100{code} the populate method will be invoked,see [SqlDdlNodes.java#L221|https://github.com/apache/calcite/blob/0d504d20d47542e8d461982512ae0e7a94e4d6cb/server/src/main/java/org/apache/calcite/sql/ddl/SqlDdlNodes.java#L221] . We need custom the FrameworkConfig here, include OperatorTable,SqlConformance and more other custom configs. By the way, the FrameworkConfig should be builded with all the configs from current CalcitePrepare.Context rather than only the rootSchema , it was a bug. And the config options of CalcitePrepare.Context was just a subset of FrameworkConfig, most of the time we need use the FrameworkConfig API directly to build a new sql engine. When we execute a sql like {code:java} select * from t2 where t2.id>100 {code} CalcitePrepareImpl would handle this sql flow, it did the similar thing, but some configs are hard coded , such as RexExecutor,Programs. 
When implementing the EnumerableRel, the RelImplementor may also need to be customized; see the example [HiveEnumerableRelImplementor.java|https://github.com/51nb/marble/blob/master/marble-table-hive/src/main/java/org/apache/calcite/adapter/hive/HiveEnumerableRelImplementor.java]. The JDBC interface currently provides no way to customize these configs, so we proposed a new Table API, inspired by Apache Flink, to simplify the use of Calcite when building a new SQL engine. 2. Cache issue: it is not easy to cache the whole SQL plan when handling a query through the JDBC interface, because of its multi-phase processing flow, but it is very easy to do with the Table API; see [TableEnv.java#L412|https://github.com/51nb/marble/blob/master/marble-table/src/main/java/org/apache/calcite/table/TableEnv.java#L412]. Summary: the proposed Table API makes it easy to configure the SQL engine and to cache the whole SQL plan to improve query performance. It fits scenarios that satisfy these conditions: the data sources are deterministic and already in memory, with no computation to push down; and the SQL queries are deterministic, without dynamic parameters, so the whole-plan cache is helpful (we can also use placeholders in the execution plan to cache dynamic queries).
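The whole-plan caching idea can be sketched as below. `PlanCache` is a hypothetical stand-in: the returned string represents a compiled physical plan, keyed by deterministic SQL text:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of whole-plan caching: the expensive
// parse/validate/optimize pipeline runs once per distinct SQL text,
// and repeated queries reuse the cached plan.
final class PlanCache {
    private final Map<String, String> plans = new ConcurrentHashMap<>();
    final AtomicInteger compilations = new AtomicInteger();

    String planFor(String sql) {
        return plans.computeIfAbsent(sql, s -> {
            compilations.incrementAndGet();      // a real engine would parse,
            return "PLAN[" + s + "]";            // validate and optimize here
        });
    }
}
```

Dynamic parameters would first be normalized to placeholders before being used as the cache key, as the comment's parenthetical suggests.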
[jira] [Commented] (CALCITE-2741) Add operator table with Hive-specific built-in functions
[ https://issues.apache.org/jira/browse/CALCITE-2741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16833649#comment-16833649 ] Lai Zhou commented on CALCITE-2741: --- > Add operator table with Hive-specific built-in functions > > > Key: CALCITE-2741 > URL: https://issues.apache.org/jira/browse/CALCITE-2741 > Project: Calcite > Issue Type: New Feature > Components: core > Affects Versions: 1.19.0 > Reporter: Lai Zhou > Priority: Minor > > I wrote a Hive adapter for Calcite to support Hive SQL, including UDF, UDAF, UDTF, and some SqlSpecialOperators. > What do you think of supporting a direct implementation of Hive SQL like this? > I think it will be valuable when someone wants to migrate their Hive ETL jobs to a real-time scenario. -- This message was sent by Atlassian JIRA (v7.6.3#76005)