[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r310108126 ## File path: flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/type/SqlRowType.java ## @@ -51,6 +55,10 @@ public SqlRowType(SqlParserPos pos, return fieldTypes; } + public List getComments() { Review comment: @danny0405 We need to document all of that. I think we need to mark the DDL as beta for now. There are too many limitations. I hope we can get it into a better shape in 1.10.
[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r310018499 ## File path: flink-table/flink-sql-parser/src/test/java/org/apache/flink/sql/parser/FlinkSqlParserImplTest.java ## @@ -383,19 +383,21 @@ public void testCreateTableWithComplexType() { } @Test - public void testCreateTableWithDecimalType() { + public void testCreateTableWithComplexType() { check("CREATE TABLE tbl1 (\n" + - " a decimal, \n" + - " b decimal(10, 0),\n" + - " c decimal(38, 38),\n" + + " a ARRAY, \n" + + " b MAP,\n" + + " c ROW,\n" + + " d MULTISET,\n" + " PRIMARY KEY (a, b) \n" + ") with (\n" + " x = 'y', \n" + " asd = 'data'\n" + ")\n", "CREATE TABLE `TBL1` (\n" + - " `A` DECIMAL,\n" + - " `B` DECIMAL(10, 0),\n" + - " `C` DECIMAL(38, 38),\n" + + " `A` ARRAY< BIGINT >,\n" + + " `B` MAP< INTEGER, VARCHAR >,\n" + + " `C` ROW< `CC0` INTEGER, `CC1` FLOAT, `CC2` VARCHAR >,\n" + + " `D` MULTISET< VARCHAR >,\n" + Review comment: Yes, that would also be an option. It is not really useful anyway and only exists to cover the corner cases of the standard.
[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r310001855 ## File path: flink-table/flink-sql-parser/src/test/java/org/apache/flink/sql/parser/Fixture.java ## @@ -0,0 +1,115 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.sql.parser; + +import org.apache.calcite.rel.type.RelDataType; +import org.apache.calcite.rel.type.RelDataTypeFactory; +import org.apache.calcite.sql.type.SqlTypeName; + +import java.util.List; + +/** + * Type used during tests. + */ +public class Fixture { + private final RelDataTypeFactory typeFactory; + + final RelDataType char1Type; + final RelDataType char33Type; + final RelDataType varcharType; + final RelDataType varchar33Type; + final RelDataType booleanType; + final RelDataType binaryType; + final RelDataType binary33Type; + final RelDataType varbinaryType; + final RelDataType varbinary33Type; + final RelDataType decimalType; + final RelDataType decimalP10S0Type; + final RelDataType decimalP10S3Type; + final RelDataType tinyintType; + final RelDataType smallintType; + final RelDataType intType; + final RelDataType bigintType; + final RelDataType floatType; + final RelDataType doubleType; + final RelDataType dateType; + final RelDataType timeType; + final RelDataType time3Type; + final RelDataType timestampType; + final RelDataType timestamp3Type; + final RelDataType timestampWithLocalTimeZoneType; + final RelDataType timestamp3WithLocalTimeZoneType; + final RelDataType nullType; + + Fixture(RelDataTypeFactory typeFactory) { + this.typeFactory = typeFactory; + this.char1Type = typeFactory.createSqlType(SqlTypeName.CHAR); Review comment: Why? You also don't do it for `binaryType`?
[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r31332 ## File path: flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl ## @@ -367,53 +375,299 @@ SqlDrop SqlDropView(Span s, boolean replace) : } } +SqlIdentifier FlinkCollectionsTypeName() : +{ +} +{ +LOOKAHEAD(2) + { +return new SqlIdentifier(SqlTypeName.MULTISET.name(), getPos()); +} +| + { +return new SqlIdentifier(SqlTypeName.ARRAY.name(), getPos()); +} +} + +SqlIdentifier FlinkTypeName() : +{ +final SqlTypeName sqlTypeName; +final SqlIdentifier typeName; +final Span s = Span.of(); +} +{ +( +<#-- additional types are included here --> +<#-- make custom data types in front of Calcite core data types --> +<#list parser.flinkDataTypeParserMethods as method> +<#if (method?index > 0)> +| + +LOOKAHEAD(2) +typeName = ${method} + +| +LOOKAHEAD(2) +sqlTypeName = SqlTypeName(s) { +typeName = new SqlIdentifier(sqlTypeName.name(), s.end(this)); +} +| +LOOKAHEAD(2) +typeName = FlinkCollectionsTypeName() +| +typeName = CompoundIdentifier() { +throw new ParseException("UDT in DDL is not supported yet."); Review comment: Agreed, but please also open an issue in Flink to track the support. It is very confusing for users if data types are only available in certain SQL locations. Users don't understand the difference between DDL and DQL. Especially with the introduction of computed columns it gets more confusing, because special cases like `STRING` would be available in DDL but not in the `CAST` of the computed column.
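For illustration only, a hypothetical DDL sketch of the inconsistency described above; the table, columns, and properties are made up. If `STRING` were registered only in the DDL type grammar, the plain column would parse, while the same alias inside the computed column's `CAST` would be rejected, because that expression goes through the regular expression parser:

    CREATE TABLE orders (
      note STRING,                    -- DDL-only alias would parse here
      id_text AS CAST(id AS STRING),  -- but would fail here if STRING is unknown to the expression parser
      id BIGINT
    ) with (
      x = 'y'
    )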
[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r30347 ## File path: flink-table/flink-sql-parser/src/test/java/org/apache/flink/sql/parser/FlinkSqlParserImplTest.java ## @@ -383,19 +383,21 @@ public void testCreateTableWithComplexType() { } @Test - public void testCreateTableWithDecimalType() { + public void testCreateTableWithComplexType() { check("CREATE TABLE tbl1 (\n" + - " a decimal, \n" + - " b decimal(10, 0),\n" + - " c decimal(38, 38),\n" + + " a ARRAY, \n" + + " b MAP,\n" + + " c ROW,\n" + + " d MULTISET,\n" + " PRIMARY KEY (a, b) \n" + ") with (\n" + " x = 'y', \n" + " asd = 'data'\n" + ")\n", "CREATE TABLE `TBL1` (\n" + - " `A` DECIMAL,\n" + - " `B` DECIMAL(10, 0),\n" + - " `C` DECIMAL(38, 38),\n" + + " `A` ARRAY< BIGINT >,\n" + + " `B` MAP< INTEGER, VARCHAR >,\n" + + " `C` ROW< `CC0` INTEGER, `CC1` FLOAT, `CC2` VARCHAR >,\n" + + " `D` MULTISET< VARCHAR >,\n" + Review comment: That's why I would suggest introducing `STRING` as early as possible so that people start using it. We don't need to drop the old behavior immediately, but we could maybe do it for Flink 2.0. This should have been discussed in FLIP-37. Now it is documented and assumed in a couple of places.
[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r309779118 ## File path: flink-table/flink-sql-parser/src/test/java/org/apache/flink/sql/parser/FlinkSqlParserImplTest.java ## @@ -383,19 +383,21 @@ public void testCreateTableWithComplexType() { } @Test - public void testCreateTableWithDecimalType() { + public void testCreateTableWithComplexType() { check("CREATE TABLE tbl1 (\n" + - " a decimal, \n" + - " b decimal(10, 0),\n" + - " c decimal(38, 38),\n" + + " a ARRAY, \n" + + " b MAP,\n" + + " c ROW,\n" + + " d MULTISET,\n" + " PRIMARY KEY (a, b) \n" + ") with (\n" + " x = 'y', \n" + " asd = 'data'\n" + ")\n", "CREATE TABLE `TBL1` (\n" + - " `A` DECIMAL,\n" + - " `B` DECIMAL(10, 0),\n" + - " `C` DECIMAL(38, 38),\n" + + " `A` ARRAY< BIGINT >,\n" + + " `B` MAP< INTEGER, VARCHAR >,\n" + + " `C` ROW< `CC0` INTEGER, `CC1` FLOAT, `CC2` VARCHAR >,\n" + + " `D` MULTISET< VARCHAR >,\n" + Review comment: SQL Server also has such behavior: https://docs.microsoft.com/de-de/sql/t-sql/data-types/char-and-varchar-transact-sql?view=sql-server-2017 "When n isn't specified in a data definition or variable declaration statement, the default length is 1."
[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r309778088 ## File path: flink-table/flink-sql-parser/src/test/java/org/apache/flink/sql/parser/FlinkSqlParserImplTest.java ## @@ -383,19 +383,21 @@ public void testCreateTableWithComplexType() { } @Test - public void testCreateTableWithDecimalType() { + public void testCreateTableWithComplexType() { check("CREATE TABLE tbl1 (\n" + - " a decimal, \n" + - " b decimal(10, 0),\n" + - " c decimal(38, 38),\n" + + " a ARRAY, \n" + + " b MAP,\n" + + " c ROW,\n" + + " d MULTISET,\n" + " PRIMARY KEY (a, b) \n" + ") with (\n" + " x = 'y', \n" + " asd = 'data'\n" + ")\n", "CREATE TABLE `TBL1` (\n" + - " `A` DECIMAL,\n" + - " `B` DECIMAL(10, 0),\n" + - " `C` DECIMAL(38, 38),\n" + + " `A` ARRAY< BIGINT >,\n" + + " `B` MAP< INTEGER, VARCHAR >,\n" + + " `C` ROW< `CC0` INTEGER, `CC1` FLOAT, `CC2` VARCHAR >,\n" + + " `D` MULTISET< VARCHAR >,\n" + Review comment: According to the SQL standard section 6.1 clause (5): "If <length> is omitted, then a <length> of 1 (one) is implicit." So we should encourage users to use `STRING` instead to prevent unwanted side effects.
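A small DDL sketch of the side effect being warned about, using a hypothetical table; per the standard wording quoted above, omitting the length makes the column single-character:

    CREATE TABLE t (
      a VARCHAR,       -- implicit length: equivalent to VARCHAR(1)
      b VARCHAR(255),  -- explicit length
      c STRING         -- suggested alternative, no length surprise
    ) with (
      x = 'y'
    )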
[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r309753253 ## File path: flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl ## @@ -367,53 +375,299 @@ SqlDrop SqlDropView(Span s, boolean replace) : } } +SqlIdentifier FlinkCollectionsTypeName() : +{ +} +{ +LOOKAHEAD(2) + { +return new SqlIdentifier(SqlTypeName.MULTISET.name(), getPos()); +} +| + { +return new SqlIdentifier(SqlTypeName.ARRAY.name(), getPos()); +} +} + +SqlIdentifier FlinkTypeName() : +{ +final SqlTypeName sqlTypeName; +final SqlIdentifier typeName; +final Span s = Span.of(); +} +{ +( +<#-- additional types are included here --> +<#-- make custom data types in front of Calcite core data types --> +<#list parser.flinkDataTypeParserMethods as method> +<#if (method?index > 0)> +| + +LOOKAHEAD(2) +typeName = ${method} + +| +LOOKAHEAD(2) +sqlTypeName = SqlTypeName(s) { +typeName = new SqlIdentifier(sqlTypeName.name(), s.end(this)); +} +| +LOOKAHEAD(2) +typeName = FlinkCollectionsTypeName() +| +typeName = CompoundIdentifier() { +throw new ParseException("UDT in DDL is not supported yet."); Review comment: But this means that the parser is still not compatible with FLIP-37. So it is still not possible to execute `CAST(12 AS STRING)`?
[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r309179341 ## File path: flink-table/flink-sql-parser/src/test/java/org/apache/flink/sql/parser/Fixture.java ## @@ -0,0 +1,115 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.sql.parser; + +import org.apache.calcite.rel.type.RelDataType; +import org.apache.calcite.rel.type.RelDataTypeFactory; +import org.apache.calcite.sql.type.SqlTypeName; + +import java.util.List; + +/** + * Type used during tests. + */ +public class Fixture { + private final RelDataTypeFactory typeFactory; + + final RelDataType char1Type; + final RelDataType char33Type; + final RelDataType varcharType; + final RelDataType varchar33Type; + final RelDataType booleanType; + final RelDataType binaryType; + final RelDataType binary33Type; + final RelDataType varbinaryType; + final RelDataType varbinary33Type; + final RelDataType decimalType; + final RelDataType decimalP10S0Type; + final RelDataType decimalP10S3Type; + final RelDataType tinyintType; + final RelDataType smallintType; + final RelDataType intType; + final RelDataType bigintType; + final RelDataType floatType; + final RelDataType doubleType; + final RelDataType dateType; + final RelDataType timeType; + final RelDataType time3Type; + final RelDataType timestampType; + final RelDataType timestamp3Type; + final RelDataType timestampWithLocalTimeZoneType; + final RelDataType timestamp3WithLocalTimeZoneType; + final RelDataType nullType; + + Fixture(RelDataTypeFactory typeFactory) { + this.typeFactory = typeFactory; + this.char1Type = typeFactory.createSqlType(SqlTypeName.CHAR); Review comment: Remove `1`?
[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r309182163 ## File path: flink-table/flink-sql-parser/src/test/java/org/apache/flink/sql/parser/Fixture.java ## @@ -0,0 +1,115 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.sql.parser; + +import org.apache.calcite.rel.type.RelDataType; +import org.apache.calcite.rel.type.RelDataTypeFactory; +import org.apache.calcite.sql.type.SqlTypeName; + +import java.util.List; + +/** + * Type used during tests. + */ +public class Fixture { Review comment: The name of this class is not very useful. How about `RelDataTypeTestData`? This is an automated message from the Apache Git Service.
[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r309180440 ## File path: flink-table/flink-sql-parser/src/test/java/org/apache/flink/sql/parser/FlinkDDLDataTypeTest.java ## @@ -0,0 +1,441 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.sql.parser; + +import org.apache.flink.sql.parser.ddl.SqlCreateTable; +import org.apache.flink.sql.parser.ddl.SqlTableColumn; +import org.apache.flink.sql.parser.impl.FlinkSqlParserImpl; +import org.apache.flink.sql.parser.validate.FlinkSqlConformance; + +import org.apache.calcite.avatica.util.Casing; +import org.apache.calcite.avatica.util.Quoting; +import org.apache.calcite.rel.type.RelDataType; +import org.apache.calcite.rel.type.RelDataTypeFactory; +import org.apache.calcite.rel.type.RelDataTypeSystem; +import org.apache.calcite.sql.SqlDataTypeSpec; +import org.apache.calcite.sql.SqlDialect; +import org.apache.calcite.sql.SqlNode; +import org.apache.calcite.sql.SqlNodeList; +import org.apache.calcite.sql.SqlWriter; +import org.apache.calcite.sql.dialect.CalciteSqlDialect; +import org.apache.calcite.sql.parser.SqlParseException; +import org.apache.calcite.sql.parser.SqlParser; +import org.apache.calcite.sql.parser.SqlParserTest; +import org.apache.calcite.sql.parser.SqlParserUtil; +import org.apache.calcite.sql.pretty.SqlPrettyWriter; +import org.apache.calcite.sql.type.SqlTypeFactoryImpl; +import org.apache.calcite.sql.type.SqlTypeName; +import org.apache.calcite.test.SqlValidatorTestCase; +import org.apache.calcite.util.Util; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.Parameterized; + +import javax.annotation.Nullable; + +import java.util.Arrays; +import java.util.Collections; +import java.util.List; + +import static org.junit.Assert.assertEquals; + +/** + * Tests for all the sup[ported flink DDL data types. + */ +@RunWith(Parameterized.class) +public class FlinkDDLDataTypeTest { Review comment: Instead of prefixing it with Flink, you could just call this test `SqlDataTypeParserTest`?
[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r309191464 ## File path: flink-table/flink-table-planner/src/test/scala/org/apache/flink/table/runtime/stream/TimeAttributesITCase.scala ## @@ -512,7 +512,7 @@ class TimeAttributesITCase extends AbstractTestBase { .assignTimestampsAndWatermarks(new TimestampWithEqualWatermark()) val table = stream.toTable(tEnv, 'rowtime.rowtime, 'int, 'double, 'float, 'bigdec, 'string) tEnv.registerTable("T1", table) -val querySql = "select rowtime as ts, string as msg from T1" +val querySql = "select rowtime as ts, `string` as msg from T1" Review comment: Update the list of keywords in `dev/table/sql.html#reserved-keywords`.
[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r309190370 ## File path: flink-table/flink-sql-parser/src/test/java/org/apache/flink/sql/parser/FlinkSqlParserImplTest.java ## @@ -362,19 +362,19 @@ public void testCreateTableWithWatermarkStrategyFromSource() { } @Test - public void testCreateTableWithComplexType() { + public void testCreateTableWithDecimalType() { Review comment: Shall we drop this test? Just testing decimal seems not useful given that we already have `testCreateTableWithComplexType`.
[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r309173413 ## File path: flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/FlinkSqlDataTypeSpec.java ## @@ -0,0 +1,325 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.sql.parser; + +import org.apache.flink.sql.parser.type.ExtendedSqlType; +import org.apache.flink.sql.parser.type.SqlArrayType; +import org.apache.flink.sql.parser.type.SqlBytesType; +import org.apache.flink.sql.parser.type.SqlMapType; +import org.apache.flink.sql.parser.type.SqlMultiSetType; +import org.apache.flink.sql.parser.type.SqlRowType; +import org.apache.flink.sql.parser.type.SqlStringType; +import org.apache.flink.sql.parser.type.SqlZonedTimeType; +import org.apache.flink.sql.parser.type.SqlZonedTimestampType; + +import org.apache.calcite.rel.type.RelDataType; +import org.apache.calcite.rel.type.RelDataTypeFactory; +import org.apache.calcite.sql.SqlCollation; +import org.apache.calcite.sql.SqlDataTypeSpec; +import org.apache.calcite.sql.SqlIdentifier; +import org.apache.calcite.sql.SqlNode; +import org.apache.calcite.sql.SqlUtil; +import org.apache.calcite.sql.SqlWriter; +import org.apache.calcite.sql.parser.SqlParserPos; +import org.apache.calcite.sql.type.SqlTypeName; +import org.apache.calcite.sql.type.SqlTypeUtil; +import org.apache.calcite.util.Util; + +import java.nio.charset.Charset; +import java.util.Objects; +import java.util.TimeZone; +import java.util.stream.Collectors; + +/** + * Represents a SQL data type specification in a parse tree. + * + * A SqlDataTypeSpec is immutable; once created, you cannot + * change any of the fields. + * + * This class is an extension to {@link SqlDataTypeSpec}, we support + * complex type expressions like: + * + * ROW( + * NUMBER(5, 2) NOT NULL AS foo, + * ROW(BOOLEAN AS b, MyUDT NOT NULL AS i) AS rec) + * + * Until https://issues.apache.org/jira/browse/CALCITE-3213";>CALCITE-3213 + * is resolved, we can remove this class. + */ +public class FlinkSqlDataTypeSpec extends SqlDataTypeSpec { + // Flag saying if the element type is nullable if this type is a collection type. + // For collection type, we mean ARRAY and MULTISET type now. Review comment: and what about maps?
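For illustration, a hedged sketch (hypothetical table and columns, assuming the NOT NULL element syntax this PR's grammar parses) of what the element-nullability flag covers and why maps are the open question: ARRAY and MULTISET have a single element type, whereas MAP has both a key and a value type:

    CREATE TABLE t (
      a ARRAY< INT NOT NULL >,         -- one element nullability flag is enough
      m MULTISET< VARCHAR > NOT NULL,  -- collection itself not nullable, elements nullable
      p MAP< VARCHAR, INT NOT NULL >   -- key and value nullability would need to be tracked separately
    ) with (
      x = 'y'
    )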
[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r309183894 ## File path: flink-table/flink-sql-parser/src/test/java/org/apache/flink/sql/parser/FlinkDDLDataTypeTest.java ## @@ -0,0 +1,441 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.sql.parser; + +import org.apache.flink.sql.parser.ddl.SqlCreateTable; +import org.apache.flink.sql.parser.ddl.SqlTableColumn; +import org.apache.flink.sql.parser.impl.FlinkSqlParserImpl; +import org.apache.flink.sql.parser.validate.FlinkSqlConformance; + +import org.apache.calcite.avatica.util.Casing; +import org.apache.calcite.avatica.util.Quoting; +import org.apache.calcite.rel.type.RelDataType; +import org.apache.calcite.rel.type.RelDataTypeFactory; +import org.apache.calcite.rel.type.RelDataTypeSystem; +import org.apache.calcite.sql.SqlDataTypeSpec; +import org.apache.calcite.sql.SqlDialect; +import org.apache.calcite.sql.SqlNode; +import org.apache.calcite.sql.SqlNodeList; +import org.apache.calcite.sql.SqlWriter; +import org.apache.calcite.sql.dialect.CalciteSqlDialect; +import org.apache.calcite.sql.parser.SqlParseException; +import org.apache.calcite.sql.parser.SqlParser; +import org.apache.calcite.sql.parser.SqlParserTest; +import org.apache.calcite.sql.parser.SqlParserUtil; +import org.apache.calcite.sql.pretty.SqlPrettyWriter; +import org.apache.calcite.sql.type.SqlTypeFactoryImpl; +import org.apache.calcite.sql.type.SqlTypeName; +import org.apache.calcite.test.SqlValidatorTestCase; +import org.apache.calcite.util.Util; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.Parameterized; + +import javax.annotation.Nullable; + +import java.util.Arrays; +import java.util.Collections; +import java.util.List; + +import static org.junit.Assert.assertEquals; + +/** + * Tests for all the sup[ported flink DDL data types. 
+ */ +@RunWith(Parameterized.class) +public class FlinkDDLDataTypeTest { + private FlinkSqlConformance conformance = FlinkSqlConformance.DEFAULT; + private static final RelDataTypeFactory TYPE_FACTORY = + new SqlTypeFactoryImpl(RelDataTypeSystem.DEFAULT); + private static final Fixture FIXTURE = new Fixture(TYPE_FACTORY); + private static final String DDL_FORMAT = "create table t1 (\n" + + " f0 %s\n" + + ") with (\n" + + " k1 = 'v1'\n" + + ")"; + + @Parameterized.Parameters(name = "{index}: {0}") + public static List testData() { + return Arrays.asList( + createTestItem("CHAR", nullable(FIXTURE.char1Type), "CHAR"), + createTestItem("CHAR NOT NULL", FIXTURE.char1Type, "CHAR NOT NULL"), + createTestItem("CHAR NOT \t\nNULL", FIXTURE.char1Type, "CHAR NOT NULL"), + createTestItem("char not null", FIXTURE.char1Type, "CHAR NOT NULL"), + createTestItem("CHAR NULL", nullable(FIXTURE.char1Type), "CHAR"), + createTestItem("CHAR(33)", nullable(FIXTURE.char33Type), "CHAR(33)"), + createTestItem("VARCHAR", nullable(FIXTURE.varcharType), "VARCHAR"), + createTestItem("VARCHAR(33)", nullable(FIXTURE.varchar33Type), "VARCHAR(33)"), + createTestItem("STRING", + nullable(FIXTURE.createSqlType(SqlTypeName.VARCHAR, Integer.MAX_VALUE)), "STRING"), + createTestItem("BOOLEAN", nullable(FIXTURE.booleanType), "BOOLEAN"), + createTestItem("BINARY", nullable(FIXTURE.binaryType), "BINARY"), + createTestItem("BINARY(33)", nullable(FIXTURE.binary33Type), "BINARY(33)"), + createTestItem("VARBINARY", nullable(FIXTURE.varbinaryType), "VARBINARY"), + createTestItem("VARBINARY(33)", nullable(FIXTURE.varbinary33Type), + "VARBINARY(33)"), + createTestItem("BYTES", + nulla
[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r309180938 ## File path: flink-table/flink-sql-parser/src/test/java/org/apache/flink/sql/parser/FlinkDDLDataTypeTest.java ## @@ -0,0 +1,441 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.sql.parser; + +import org.apache.flink.sql.parser.ddl.SqlCreateTable; +import org.apache.flink.sql.parser.ddl.SqlTableColumn; +import org.apache.flink.sql.parser.impl.FlinkSqlParserImpl; +import org.apache.flink.sql.parser.validate.FlinkSqlConformance; + +import org.apache.calcite.avatica.util.Casing; +import org.apache.calcite.avatica.util.Quoting; +import org.apache.calcite.rel.type.RelDataType; +import org.apache.calcite.rel.type.RelDataTypeFactory; +import org.apache.calcite.rel.type.RelDataTypeSystem; +import org.apache.calcite.sql.SqlDataTypeSpec; +import org.apache.calcite.sql.SqlDialect; +import org.apache.calcite.sql.SqlNode; +import org.apache.calcite.sql.SqlNodeList; +import org.apache.calcite.sql.SqlWriter; +import org.apache.calcite.sql.dialect.CalciteSqlDialect; +import org.apache.calcite.sql.parser.SqlParseException; +import org.apache.calcite.sql.parser.SqlParser; +import org.apache.calcite.sql.parser.SqlParserTest; +import org.apache.calcite.sql.parser.SqlParserUtil; +import org.apache.calcite.sql.pretty.SqlPrettyWriter; +import org.apache.calcite.sql.type.SqlTypeFactoryImpl; +import org.apache.calcite.sql.type.SqlTypeName; +import org.apache.calcite.test.SqlValidatorTestCase; +import org.apache.calcite.util.Util; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.Parameterized; + +import javax.annotation.Nullable; + +import java.util.Arrays; +import java.util.Collections; +import java.util.List; + +import static org.junit.Assert.assertEquals; + +/** + * Tests for all the sup[ported flink DDL data types. + */ +@RunWith(Parameterized.class) +public class FlinkDDLDataTypeTest { + private FlinkSqlConformance conformance = FlinkSqlConformance.DEFAULT; + private static final RelDataTypeFactory TYPE_FACTORY = + new SqlTypeFactoryImpl(RelDataTypeSystem.DEFAULT); + private static final Fixture FIXTURE = new Fixture(TYPE_FACTORY); + private static final String DDL_FORMAT = "create table t1 (\n" + + " f0 %s\n" + + ") with (\n" + + " k1 = 'v1'\n" + + ")"; + + @Parameterized.Parameters(name = "{index}: {0}") + public static List testData() { + return Arrays.asList( + createTestItem("CHAR", nullable(FIXTURE.char1Type), "CHAR"), + createTestItem("CHAR NOT NULL", FIXTURE.char1Type, "CHAR NOT NULL"), Review comment: This test does not distinguish between nullable and non-nullable type. 
So it just swallows the information?
[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r309175903 ## File path: flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/type/SqlRowType.java ## @@ -51,6 +55,10 @@ public SqlRowType(SqlParserPos pos, return fieldTypes; } + public List getComments() { Review comment: This method is not used. That means we are dropping comments for now, right?
[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r309184547 ## File path: flink-table/flink-sql-parser/src/test/java/org/apache/flink/sql/parser/FlinkDDLDataTypeTest.java ## @@ -0,0 +1,441 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.sql.parser; + +import org.apache.flink.sql.parser.ddl.SqlCreateTable; +import org.apache.flink.sql.parser.ddl.SqlTableColumn; +import org.apache.flink.sql.parser.impl.FlinkSqlParserImpl; +import org.apache.flink.sql.parser.validate.FlinkSqlConformance; + +import org.apache.calcite.avatica.util.Casing; +import org.apache.calcite.avatica.util.Quoting; +import org.apache.calcite.rel.type.RelDataType; +import org.apache.calcite.rel.type.RelDataTypeFactory; +import org.apache.calcite.rel.type.RelDataTypeSystem; +import org.apache.calcite.sql.SqlDataTypeSpec; +import org.apache.calcite.sql.SqlDialect; +import org.apache.calcite.sql.SqlNode; +import org.apache.calcite.sql.SqlNodeList; +import org.apache.calcite.sql.SqlWriter; +import org.apache.calcite.sql.dialect.CalciteSqlDialect; +import org.apache.calcite.sql.parser.SqlParseException; +import org.apache.calcite.sql.parser.SqlParser; +import org.apache.calcite.sql.parser.SqlParserTest; +import org.apache.calcite.sql.parser.SqlParserUtil; +import org.apache.calcite.sql.pretty.SqlPrettyWriter; +import org.apache.calcite.sql.type.SqlTypeFactoryImpl; +import org.apache.calcite.sql.type.SqlTypeName; +import org.apache.calcite.test.SqlValidatorTestCase; +import org.apache.calcite.util.Util; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.Parameterized; + +import javax.annotation.Nullable; + +import java.util.Arrays; +import java.util.Collections; +import java.util.List; + +import static org.junit.Assert.assertEquals; + +/** + * Tests for all the sup[ported flink DDL data types. 
+ */ +@RunWith(Parameterized.class) +public class FlinkDDLDataTypeTest { + private FlinkSqlConformance conformance = FlinkSqlConformance.DEFAULT; + private static final RelDataTypeFactory TYPE_FACTORY = + new SqlTypeFactoryImpl(RelDataTypeSystem.DEFAULT); + private static final Fixture FIXTURE = new Fixture(TYPE_FACTORY); + private static final String DDL_FORMAT = "create table t1 (\n" + + " f0 %s\n" + + ") with (\n" + + " k1 = 'v1'\n" + + ")"; + + @Parameterized.Parameters(name = "{index}: {0}") + public static List testData() { + return Arrays.asList( + createTestItem("CHAR", nullable(FIXTURE.char1Type), "CHAR"), + createTestItem("CHAR NOT NULL", FIXTURE.char1Type, "CHAR NOT NULL"), + createTestItem("CHAR NOT \t\nNULL", FIXTURE.char1Type, "CHAR NOT NULL"), + createTestItem("char not null", FIXTURE.char1Type, "CHAR NOT NULL"), + createTestItem("CHAR NULL", nullable(FIXTURE.char1Type), "CHAR"), + createTestItem("CHAR(33)", nullable(FIXTURE.char33Type), "CHAR(33)"), + createTestItem("VARCHAR", nullable(FIXTURE.varcharType), "VARCHAR"), + createTestItem("VARCHAR(33)", nullable(FIXTURE.varchar33Type), "VARCHAR(33)"), + createTestItem("STRING", + nullable(FIXTURE.createSqlType(SqlTypeName.VARCHAR, Integer.MAX_VALUE)), "STRING"), + createTestItem("BOOLEAN", nullable(FIXTURE.booleanType), "BOOLEAN"), + createTestItem("BINARY", nullable(FIXTURE.binaryType), "BINARY"), + createTestItem("BINARY(33)", nullable(FIXTURE.binary33Type), "BINARY(33)"), + createTestItem("VARBINARY", nullable(FIXTURE.varbinaryType), "VARBINARY"), + createTestItem("VARBINARY(33)", nullable(FIXTURE.varbinary33Type), + "VARBINARY(33)"), + createTestItem("BYTES", + nulla
[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r309191034 ## File path: flink-table/flink-sql-parser/src/test/java/org/apache/flink/sql/parser/FlinkSqlParserImplTest.java ## @@ -383,19 +383,21 @@ public void testCreateTableWithComplexType() { } @Test - public void testCreateTableWithDecimalType() { + public void testCreateTableWithComplexType() { check("CREATE TABLE tbl1 (\n" + - " a decimal, \n" + - " b decimal(10, 0),\n" + - " c decimal(38, 38),\n" + + " a ARRAY, \n" + + " b MAP,\n" + + " c ROW,\n" + + " d MULTISET,\n" + " PRIMARY KEY (a, b) \n" + ") with (\n" + " x = 'y', \n" + " asd = 'data'\n" + ")\n", "CREATE TABLE `TBL1` (\n" + - " `A` DECIMAL,\n" + - " `B` DECIMAL(10, 0),\n" + - " `C` DECIMAL(38, 38),\n" + + " `A` ARRAY< BIGINT >,\n" + + " `B` MAP< INTEGER, VARCHAR >,\n" + + " `C` ROW< `CC0` INTEGER, `CC1` FLOAT, `CC2` VARCHAR >,\n" + + " `D` MULTISET< VARCHAR >,\n" + Review comment: `STRING` instead of `VARCHAR`? We should encourage this type everywhere (e.g. also update the DDL doc in the table environments), because `VARCHAR` is equal to `VARCHAR(1)` in the SQL standard.
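A hedged sketch of how the test DDL above could look with the suggested type; the element types inside the angle brackets are illustrative (the originals were stripped in this archive), and per the parameterized data-type test in this PR, STRING is parsed as the maximum-length VARCHAR:

    CREATE TABLE tbl1 (
      a ARRAY< BIGINT >,
      b MAP< INT, STRING >,
      c ROW< cc0 INT, cc1 FLOAT, cc2 STRING >,
      d MULTISET< STRING >
    ) with (
      x = 'y'
    )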
[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r309173062 ## File path: flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/FlinkSqlDataTypeSpec.java ## @@ -0,0 +1,325 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.sql.parser; + +import org.apache.flink.sql.parser.type.ExtendedSqlType; +import org.apache.flink.sql.parser.type.SqlArrayType; +import org.apache.flink.sql.parser.type.SqlBytesType; +import org.apache.flink.sql.parser.type.SqlMapType; +import org.apache.flink.sql.parser.type.SqlMultiSetType; +import org.apache.flink.sql.parser.type.SqlRowType; +import org.apache.flink.sql.parser.type.SqlStringType; +import org.apache.flink.sql.parser.type.SqlZonedTimeType; +import org.apache.flink.sql.parser.type.SqlZonedTimestampType; + +import org.apache.calcite.rel.type.RelDataType; +import org.apache.calcite.rel.type.RelDataTypeFactory; +import org.apache.calcite.sql.SqlCollation; +import org.apache.calcite.sql.SqlDataTypeSpec; +import org.apache.calcite.sql.SqlIdentifier; +import org.apache.calcite.sql.SqlNode; +import org.apache.calcite.sql.SqlUtil; +import org.apache.calcite.sql.SqlWriter; +import org.apache.calcite.sql.parser.SqlParserPos; +import org.apache.calcite.sql.type.SqlTypeName; +import org.apache.calcite.sql.type.SqlTypeUtil; +import org.apache.calcite.util.Util; + +import java.nio.charset.Charset; +import java.util.Objects; +import java.util.TimeZone; +import java.util.stream.Collectors; + +/** + * Represents a SQL data type specification in a parse tree. + * + * A SqlDataTypeSpec is immutable; once created, you cannot + * change any of the fields. + * + * This class is an extension to {@link SqlDataTypeSpec}, we support + * complex type expressions like: + * + * ROW( + * NUMBER(5, 2) NOT NULL AS foo, Review comment: why `AS`?
[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r309172263 ## File path: flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl ## @@ -367,53 +375,299 @@ SqlDrop SqlDropView(Span s, boolean replace) : } } +SqlIdentifier FlinkCollectionsTypeName() : +{ +} +{ +LOOKAHEAD(2) + { +return new SqlIdentifier(SqlTypeName.MULTISET.name(), getPos()); +} +| + { +return new SqlIdentifier(SqlTypeName.ARRAY.name(), getPos()); +} +} + +SqlIdentifier FlinkTypeName() : +{ +final SqlTypeName sqlTypeName; +final SqlIdentifier typeName; +final Span s = Span.of(); +} +{ +( +<#-- additional types are included here --> +<#-- make custom data types in front of Calcite core data types --> +<#list parser.flinkDataTypeParserMethods as method> +<#if (method?index > 0)> +| + +LOOKAHEAD(2) +typeName = ${method} + +| +LOOKAHEAD(2) +sqlTypeName = SqlTypeName(s) { +typeName = new SqlIdentifier(sqlTypeName.name(), s.end(this)); +} +| +LOOKAHEAD(2) +typeName = FlinkCollectionsTypeName() +| +typeName = CompoundIdentifier() { +throw new ParseException("UDT in DDL is not supported yet."); +} +) +{ +return typeName; +} +} + +/** +* Parse a Flink data type with nullable options, NULL -> nullable, NOT NULL -> not nullable. +* Default to be nullable. +*/ +SqlDataTypeSpec FlinkDataType() : +{ +final SqlIdentifier typeName; +SqlIdentifier collectionTypeName = null; +int scale = -1; +int precision = -1; +String charSetName = null; +final Span s; +boolean nullable = true; +boolean elementNullable = true; +} +{ +typeName = FlinkTypeName() { +s = span(); +} +[ + +precision = UnsignedIntLiteral() +[ + +scale = UnsignedIntLiteral() +] + +] +[ + +charSetName = Identifier() +] +elementNullable = NullableOpt() +[ +collectionTypeName = FlinkCollectionsTypeName() +nullable = NullableOpt() +] +{ +if (null != collectionTypeName) { +return new FlinkSqlDataTypeSpec( +collectionTypeName, +typeName, +precision, +scale, +charSetName, +nullable, +elementNullable, +s.end(collectionTypeName)); +} +nullable = elementNullable; +return new FlinkSqlDataTypeSpec(typeName, +precision, +scale, +charSetName, +null, +nullable, +elementNullable, +s.end(this)); +} +} + +SqlIdentifier SqlStringType() : +{ +} +{ + { return new SqlStringType(getPos()); } +} + +SqlIdentifier SqlBytesType() : +{ +} +{ + { return new SqlBytesType(getPos()); } +} + +boolean WithLocalTimeZone() : +{ +} +{ + { return false; } +| + +( +{ return true; } +| + { +throw new ParseException("'WITH TIME ZONE' is not supported yet, options: " + +"'WITHOUT TIME ZONE', 'WITH LOCAL TIME ZONE'."); +} +) +| +{ return false; } +} + +SqlIdentifier SqlZonedTimeType() : +{ +int precision = -1; +boolean withLocalTimeZone = false; +} +{ + +( + precision = UnsignedIntLiteral() +| +{ precision = -1; } +) +withLocalTimeZone = WithLocalTimeZone() +{ return new SqlZonedTimeType(getPos(), precision, withLocalTimeZone); } +} + +SqlIdentifier SqlZonedTimestampType() : +{ +int precision = -1; +boolean withLocalTimeZone = false; +} +{ + +( + precision = UnsignedIntLiteral() +| +{ precision = -1; } +) +withLocalTimeZone = WithLocalTimeZone() +{ return new SqlZonedTimestampType(getPos(), precision, withLocalTimeZone); } +} + SqlIdentifier SqlArrayType() : { SqlParserPos pos; SqlDataTypeSpec elementType; +boolean nullable = true; } { { pos = getPos(); } - elementType = DataType() + +elementType = FlinkDataType() { return new SqlArrayType(pos, elementType); } } +SqlIdentifier 
SqlMultiSetType() : +{ +SqlParserPos pos; +SqlDataTypeSpec elementType; +boolean nullable = true; +} +{ + { pos = getPos(); } + +elementType = FlinkDataType() + +{ +return new SqlMultiSetType(pos, elementType); +} +} + SqlIdentifier SqlMapType() : { SqlDataTypeSpec keyType; SqlDataTypeSpec valType; +boolean nullable = true; } { - keyType = DataType() - valType = DataType() + +key
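For illustration, a few column type specifications that the FlinkDataType production quoted above is written to accept; the table and column names are hypothetical, and this is a sketch of the grammar rather than an exhaustive list:

    CREATE TABLE t (
      a DECIMAL(10, 2) NOT NULL,            -- precision, scale, and nullability
      b TIMESTAMP(3) WITH LOCAL TIME ZONE,  -- precision plus the local time zone suffix
      c INT NOT NULL ARRAY,                 -- element nullability followed by a collection suffix
      d ROW< f0 INT, f1 DOUBLE >            -- nested complex type via SqlRowType
    ) with (
      x = 'y'
    )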
[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r309176530 ## File path: flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/type/SqlZonedTimeType.java ## @@ -0,0 +1,73 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.sql.parser.type; + +import org.apache.calcite.sql.SqlIdentifier; +import org.apache.calcite.sql.SqlWriter; +import org.apache.calcite.sql.parser.SqlParserPos; +import org.apache.calcite.sql.type.SqlTypeName; + +/** + * Parse type "TIME WITHOUT TIME ZONE", "TIME(3) WITHOUT TIME ZONE", "TIME WITH LOCAL TIME ZONE", + * or "TIME(3) WITH LOCAL TIME ZONE". + */ +public class SqlZonedTimeType extends SqlIdentifier implements ExtendedSqlType { Review comment: Why is this class called `SqlZonedTimeType` if it also handles timezone-less types and local time zones? How about splitting it into `SqlZonedTimeType`, `SqlLocalZonedTimeType` and `SqlTimeType`, similar to the naming of Flink's logical types? Or just `SqlTimeType` if splitting would be too complicated.
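A short sketch of the four variants listed in the Javadoc quoted above, as they would appear in a column definition; the table and column names are hypothetical, and per the parser rules in this PR a plain 'WITH TIME ZONE' is rejected:

    CREATE TABLE t (
      t0 TIME,                         -- defaults to WITHOUT TIME ZONE
      t1 TIME(3) WITHOUT TIME ZONE,
      t2 TIME WITH LOCAL TIME ZONE,
      t3 TIME(3) WITH LOCAL TIME ZONE
    ) with (
      x = 'y'
    )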
[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r309169277 ## File path: flink-table/flink-sql-parser/src/main/codegen/data/Parser.tdd ## @@ -31,10 +31,16 @@ "org.apache.flink.sql.parser.dml.RichSqlInsert", "org.apache.flink.sql.parser.dml.RichSqlInsertKeyword", "org.apache.flink.sql.parser.type.SqlArrayType", +"org.apache.flink.sql.parser.type.SqlBytesType", "org.apache.flink.sql.parser.type.SqlMapType", +"org.apache.flink.sql.parser.type.SqlMultiSetType", Review comment: nit: should be `SqlMultisetType` for consistency
[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r309181743 ## File path: flink-table/flink-sql-parser/src/test/java/org/apache/flink/sql/parser/FlinkDDLDataTypeTest.java ## @@ -0,0 +1,441 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.sql.parser; + +import org.apache.flink.sql.parser.ddl.SqlCreateTable; +import org.apache.flink.sql.parser.ddl.SqlTableColumn; +import org.apache.flink.sql.parser.impl.FlinkSqlParserImpl; +import org.apache.flink.sql.parser.validate.FlinkSqlConformance; + +import org.apache.calcite.avatica.util.Casing; +import org.apache.calcite.avatica.util.Quoting; +import org.apache.calcite.rel.type.RelDataType; +import org.apache.calcite.rel.type.RelDataTypeFactory; +import org.apache.calcite.rel.type.RelDataTypeSystem; +import org.apache.calcite.sql.SqlDataTypeSpec; +import org.apache.calcite.sql.SqlDialect; +import org.apache.calcite.sql.SqlNode; +import org.apache.calcite.sql.SqlNodeList; +import org.apache.calcite.sql.SqlWriter; +import org.apache.calcite.sql.dialect.CalciteSqlDialect; +import org.apache.calcite.sql.parser.SqlParseException; +import org.apache.calcite.sql.parser.SqlParser; +import org.apache.calcite.sql.parser.SqlParserTest; +import org.apache.calcite.sql.parser.SqlParserUtil; +import org.apache.calcite.sql.pretty.SqlPrettyWriter; +import org.apache.calcite.sql.type.SqlTypeFactoryImpl; +import org.apache.calcite.sql.type.SqlTypeName; +import org.apache.calcite.test.SqlValidatorTestCase; +import org.apache.calcite.util.Util; +import org.junit.Test; +import org.junit.runner.RunWith; +import org.junit.runners.Parameterized; + +import javax.annotation.Nullable; + +import java.util.Arrays; +import java.util.Collections; +import java.util.List; + +import static org.junit.Assert.assertEquals; + +/** + * Tests for all the sup[ported flink DDL data types. 
+ */ +@RunWith(Parameterized.class) +public class FlinkDDLDataTypeTest { + private FlinkSqlConformance conformance = FlinkSqlConformance.DEFAULT; + private static final RelDataTypeFactory TYPE_FACTORY = + new SqlTypeFactoryImpl(RelDataTypeSystem.DEFAULT); + private static final Fixture FIXTURE = new Fixture(TYPE_FACTORY); + private static final String DDL_FORMAT = "create table t1 (\n" + + " f0 %s\n" + + ") with (\n" + + " k1 = 'v1'\n" + + ")"; + + @Parameterized.Parameters(name = "{index}: {0}") + public static List testData() { + return Arrays.asList( + createTestItem("CHAR", nullable(FIXTURE.char1Type), "CHAR"), + createTestItem("CHAR NOT NULL", FIXTURE.char1Type, "CHAR NOT NULL"), + createTestItem("CHAR NOT \t\nNULL", FIXTURE.char1Type, "CHAR NOT NULL"), + createTestItem("char not null", FIXTURE.char1Type, "CHAR NOT NULL"), + createTestItem("CHAR NULL", nullable(FIXTURE.char1Type), "CHAR"), + createTestItem("CHAR(33)", nullable(FIXTURE.char33Type), "CHAR(33)"), + createTestItem("VARCHAR", nullable(FIXTURE.varcharType), "VARCHAR"), + createTestItem("VARCHAR(33)", nullable(FIXTURE.varchar33Type), "VARCHAR(33)"), + createTestItem("STRING", + nullable(FIXTURE.createSqlType(SqlTypeName.VARCHAR, Integer.MAX_VALUE)), "STRING"), Review comment: Add this to `FIXTURE` as well? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
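As a side note to the suggestion above, here is a minimal sketch of how such a fixture constant could be pre-built. The field name stringType and the wrapper class are purely illustrative; the only grounded part is that STRING is resolved as VARCHAR with maximum length, as in the test item itself.

```java
import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeFactory;
import org.apache.calcite.rel.type.RelDataTypeSystem;
import org.apache.calcite.sql.type.SqlTypeFactoryImpl;
import org.apache.calcite.sql.type.SqlTypeName;

/** Sketch: pre-building the STRING type once so test items can share it. */
class StringTypeFixtureSketch {

	private final RelDataTypeFactory typeFactory =
		new SqlTypeFactoryImpl(RelDataTypeSystem.DEFAULT);

	// STRING is treated as VARCHAR with the maximum length, so the fixture
	// can expose it as a plain field next to the other pre-built types.
	final RelDataType stringType =
		typeFactory.createSqlType(SqlTypeName.VARCHAR, Integer.MAX_VALUE);
}
```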
[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r309174185 ## File path: flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/FlinkSqlDataTypeSpec.java ## @@ -0,0 +1,325 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.sql.parser; + +import org.apache.flink.sql.parser.type.ExtendedSqlType; +import org.apache.flink.sql.parser.type.SqlArrayType; +import org.apache.flink.sql.parser.type.SqlBytesType; +import org.apache.flink.sql.parser.type.SqlMapType; +import org.apache.flink.sql.parser.type.SqlMultiSetType; +import org.apache.flink.sql.parser.type.SqlRowType; +import org.apache.flink.sql.parser.type.SqlStringType; +import org.apache.flink.sql.parser.type.SqlZonedTimeType; +import org.apache.flink.sql.parser.type.SqlZonedTimestampType; + +import org.apache.calcite.rel.type.RelDataType; +import org.apache.calcite.rel.type.RelDataTypeFactory; +import org.apache.calcite.sql.SqlCollation; +import org.apache.calcite.sql.SqlDataTypeSpec; +import org.apache.calcite.sql.SqlIdentifier; +import org.apache.calcite.sql.SqlNode; +import org.apache.calcite.sql.SqlUtil; +import org.apache.calcite.sql.SqlWriter; +import org.apache.calcite.sql.parser.SqlParserPos; +import org.apache.calcite.sql.type.SqlTypeName; +import org.apache.calcite.sql.type.SqlTypeUtil; +import org.apache.calcite.util.Util; + +import java.nio.charset.Charset; +import java.util.Objects; +import java.util.TimeZone; +import java.util.stream.Collectors; + +/** + * Represents a SQL data type specification in a parse tree. + * + * A SqlDataTypeSpec is immutable; once created, you cannot + * change any of the fields. + * + * This class is an extension to {@link SqlDataTypeSpec}; it supports + * complex type expressions like: + * + * ROW( + * NUMBER(5, 2) NOT NULL AS foo, + * ROW(BOOLEAN AS b, MyUDT NOT NULL AS i) AS rec) + * + * Once CALCITE-3213 (https://issues.apache.org/jira/browse/CALCITE-3213) + * is resolved, we can remove this class. + */ +public class FlinkSqlDataTypeSpec extends SqlDataTypeSpec { + // Flag saying if the element type is nullable if this type is a collection type. + // For collection type, we mean ARRAY and MULTISET type now.
+ private Boolean elementNullable; + + public FlinkSqlDataTypeSpec( + SqlIdentifier collectionsTypeName, + SqlIdentifier typeName, + int precision, + int scale, + String charSetName, + Boolean nullable, + Boolean elementNullable, + SqlParserPos pos) { + super(collectionsTypeName, typeName, precision, scale, + charSetName, null, nullable, pos); + this.elementNullable = elementNullable; + } + + public FlinkSqlDataTypeSpec( + SqlIdentifier collectionsTypeName, + SqlIdentifier typeName, + int precision, + int scale, + String charSetName, + TimeZone timeZone, + Boolean nullable, + Boolean elementNullable, + SqlParserPos pos) { + super(collectionsTypeName, typeName, precision, scale, + charSetName, timeZone, nullable, pos); + this.elementNullable = elementNullable; + } + + public FlinkSqlDataTypeSpec( + SqlIdentifier typeName, + int precision, + int scale, + String charSetName, + TimeZone timeZone, + Boolean nullable, + Boolean elementNullable, + SqlParserPos pos) { + super(null, typeName, precision, scale, + charSetName, timeZone, nullable, pos); + this.elementNullable = elementNullable; + } + + @Override + public SqlNode clone(SqlParserPos pos) { + return (getCollectionsTypeName() != null) + ? new FlinkSqlDataTypeSpec(getCollectionsTypeName(), getTypeName(), ge
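To make the role of the flag above concrete, a hedged usage sketch follows. It relies only on the first constructor shown in the quoted diff; the argument values (nullable array of NOT NULL INT elements) and the wrapper class are illustrative assumptions, not code from the PR.

```java
import org.apache.flink.sql.parser.FlinkSqlDataTypeSpec;

import org.apache.calcite.sql.SqlDataTypeSpec;
import org.apache.calcite.sql.SqlIdentifier;
import org.apache.calcite.sql.parser.SqlParserPos;
import org.apache.calcite.sql.type.SqlTypeName;

/** Sketch: modelling ARRAY of INT NOT NULL with the elementNullable flag. */
class ElementNullableSketch {

	SqlDataTypeSpec arrayOfNonNullInt() {
		// The ARRAY value itself stays nullable (nullable = true), while the
		// element type carries NOT NULL (elementNullable = false). Precision,
		// scale and charset are unused here, hence -1 / -1 / null.
		return new FlinkSqlDataTypeSpec(
			new SqlIdentifier(SqlTypeName.ARRAY.name(), SqlParserPos.ZERO),
			new SqlIdentifier(SqlTypeName.INTEGER.name(), SqlParserPos.ZERO),
			-1,
			-1,
			null,
			true,
			false,
			SqlParserPos.ZERO);
	}
}
```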
[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r309170046 ## File path: flink-table/flink-sql-parser/src/main/codegen/data/Parser.tdd ## @@ -384,13 +392,24 @@ literalParserMethods: [ ] - # List of methods for parsing custom data types. + # List of methods for parsing flink custom data types. # Return type of method implementation should be "SqlIdentifier". # Example: SqlParseTimeStampZ(). - dataTypeParserMethods: [ + flinkDataTypeParserMethods: [ Review comment: nit: I would remove the `flink` prefix here and in the rest of the PR. Everything is Flink in this code base. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r309173603 ## File path: flink-table/flink-sql-parser/src/main/java/org/apache/flink/sql/parser/FlinkSqlDataTypeSpec.java ## @@ -0,0 +1,325 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.flink.sql.parser; + +import org.apache.flink.sql.parser.type.ExtendedSqlType; +import org.apache.flink.sql.parser.type.SqlArrayType; +import org.apache.flink.sql.parser.type.SqlBytesType; +import org.apache.flink.sql.parser.type.SqlMapType; +import org.apache.flink.sql.parser.type.SqlMultiSetType; +import org.apache.flink.sql.parser.type.SqlRowType; +import org.apache.flink.sql.parser.type.SqlStringType; +import org.apache.flink.sql.parser.type.SqlZonedTimeType; +import org.apache.flink.sql.parser.type.SqlZonedTimestampType; + +import org.apache.calcite.rel.type.RelDataType; +import org.apache.calcite.rel.type.RelDataTypeFactory; +import org.apache.calcite.sql.SqlCollation; +import org.apache.calcite.sql.SqlDataTypeSpec; +import org.apache.calcite.sql.SqlIdentifier; +import org.apache.calcite.sql.SqlNode; +import org.apache.calcite.sql.SqlUtil; +import org.apache.calcite.sql.SqlWriter; +import org.apache.calcite.sql.parser.SqlParserPos; +import org.apache.calcite.sql.type.SqlTypeName; +import org.apache.calcite.sql.type.SqlTypeUtil; +import org.apache.calcite.util.Util; + +import java.nio.charset.Charset; +import java.util.Objects; +import java.util.TimeZone; +import java.util.stream.Collectors; + +/** + * Represents a SQL data type specification in a parse tree. + * + * A SqlDataTypeSpec is immutable; once created, you cannot + * change any of the fields. + * + * This class is an extension to {@link SqlDataTypeSpec}; it supports + * complex type expressions like: + * + * ROW( + * NUMBER(5, 2) NOT NULL AS foo, + * ROW(BOOLEAN AS b, MyUDT NOT NULL AS i) AS rec) + * + * Once CALCITE-3213 (https://issues.apache.org/jira/browse/CALCITE-3213) + * is resolved, we can remove this class. + */ +public class FlinkSqlDataTypeSpec extends SqlDataTypeSpec { + // Flag saying if the element type is nullable if this type is a collection type. + // For collection type, we mean ARRAY and MULTISET type now.
+ private Boolean elementNullable; + + public FlinkSqlDataTypeSpec( + SqlIdentifier collectionsTypeName, + SqlIdentifier typeName, + int precision, + int scale, + String charSetName, + Boolean nullable, + Boolean elementNullable, + SqlParserPos pos) { + super(collectionsTypeName, typeName, precision, scale, + charSetName, null, nullable, pos); + this.elementNullable = elementNullable; + } + + public FlinkSqlDataTypeSpec( + SqlIdentifier collectionsTypeName, + SqlIdentifier typeName, + int precision, + int scale, + String charSetName, + TimeZone timeZone, + Boolean nullable, + Boolean elementNullable, + SqlParserPos pos) { + super(collectionsTypeName, typeName, precision, scale, + charSetName, timeZone, nullable, pos); + this.elementNullable = elementNullable; + } + + public FlinkSqlDataTypeSpec( + SqlIdentifier typeName, + int precision, + int scale, + String charSetName, + TimeZone timeZone, + Boolean nullable, + Boolean elementNullable, + SqlParserPos pos) { + super(null, typeName, precision, scale, + charSetName, timeZone, nullable, pos); + this.elementNullable = elementNullable; + } + + @Override + public SqlNode clone(SqlParserPos pos) { + return (getCollectionsTypeName() != null) + ? new FlinkSqlDataTypeSpec(getCollectionsTypeName(), getTypeName(), ge
[GitHub] [flink] twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37
twalthr commented on a change in pull request #9243: [FLINK-13335][sql-parser] Align the SQL CREATE TABLE DDL with FLIP-37 URL: https://github.com/apache/flink/pull/9243#discussion_r309171226 ## File path: flink-table/flink-sql-parser/src/main/codegen/includes/parserImpls.ftl ## @@ -367,53 +375,299 @@ SqlDrop SqlDropView(Span s, boolean replace) : } } +SqlIdentifier FlinkCollectionsTypeName() : +{ +} +{ +LOOKAHEAD(2) +<MULTISET> { +return new SqlIdentifier(SqlTypeName.MULTISET.name(), getPos()); +} +| +<ARRAY> { +return new SqlIdentifier(SqlTypeName.ARRAY.name(), getPos()); +} +} + +SqlIdentifier FlinkTypeName() : +{ +final SqlTypeName sqlTypeName; +final SqlIdentifier typeName; +final Span s = Span.of(); +} +{ +( +<#-- additional types are included here --> +<#-- make custom data types in front of Calcite core data types --> +<#list parser.flinkDataTypeParserMethods as method> +<#if (method?index > 0)> +| +</#if> +LOOKAHEAD(2) +typeName = ${method} +</#list> +| +LOOKAHEAD(2) +sqlTypeName = SqlTypeName(s) { +typeName = new SqlIdentifier(sqlTypeName.name(), s.end(this)); +} +| +LOOKAHEAD(2) +typeName = FlinkCollectionsTypeName() +| +typeName = CompoundIdentifier() { +throw new ParseException("UDT in DDL is not supported yet."); Review comment: I think this method part is also used by non-DDL queries. Rename message to "User-defined types are not supported by the parser yet."? This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
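For context, a hedged sketch of a regression test for the suggested wording, written as a method that could be added to FlinkSqlParserImplTest. The method name, the caret-marked error position, and the assumption that the test class inherits checkFails from Calcite's SqlParserTest are all illustrative, not taken from the PR.

```java
@Test
public void testUserDefinedTypeIsRejected() {
	// Hypothetical test: a user-defined type in a column definition should be
	// rejected with the clearer, non-DDL-specific message suggested above.
	// The carets mark where the parse error is expected to be reported.
	checkFails(
		"CREATE TABLE tbl1 (\n" +
		"  a ^MyUDT^\n" +
		") with (\n" +
		"  x = 'y'\n" +
		")",
		"User-defined types are not supported by the parser yet\\.");
}
```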