[
https://issues.apache.org/jira/browse/CALCITE-4463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
yanjing.wang updated CALCITE-4463:
----------------------------------
Description:
In the SqlOrderBy$Operator class, the unparse method hard-codes the offset and fetch
syntax, which closes the door on transforming my SQL into the "LIMIT x OFFSET y" style.
!image-2021-01-12-10-06-29-813.png!
My unparsing code:
{code:java}
String sql = "select concat(a.id,'-',b.id) , a.name from xxx.bb limit 5";
SqlDialect SPARK = new SparkSqlDialect(SqlDialect.EMPTY_CONTEXT
    .withDatabaseProduct(SqlDialect.DatabaseProduct.SPARK)
    .withIdentifierQuoteString("`")
    .withNullCollation(NullCollation.LOW)
    .withLiteralQuoteString("'")
    .withLiteralEscapedQuoteString("''")
    .withUnquotedCasing(Casing.UNCHANGED)
    .withQuotedCasing(Casing.UNCHANGED));
SqlParser.Config config = SqlParser.config()
    .withParserFactory(SqlBabelParserImpl.FACTORY)
    .withConformance(SqlConformanceEnum.LENIENT);
SqlParser sqlParser = SqlParser.create(sql, config);
try {
  SqlNode sqlNode = sqlParser.parseQuery();
  SqlString sqlString = sqlNode.toSqlString(SPARK);
  System.out.println(sqlString);
} catch (SqlParseException e) {
  e.printStackTrace();
}{code}
Result:
{code:java}
SELECT `CONCAT`(`A`.`ID`, '-', `B`.`ID`), `A`.`NAME` FROM `XXX`.`BB` FETCH NEXT 5 ROWS ONLY
{code}
The "LIMIT 5" clause shouldn't be transformed to "FETCH NEXT 5 ROWS ONLY".
I dug into the "parser.jj" file and found the following lines in the "SqlSelect()"
production.
{code:java}
{
  return new SqlSelect(s.end(this), keywordList,
      new SqlNodeList(selectList, Span.of(selectList).pos()),
      fromClause, where, groupBy, having, windowDecls,
      null, null, null, new SqlNodeList(hints, getPos()));
}
{code}
The "SqlSelect" node always receives null orderBy, offset, and fetch, so a "LIMIT"
clause is always handled by the "SqlOrderBy" node, whose unparsing of offset and
fetch cannot be overridden per SQL dialect.
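The change being asked for is that SqlOrderBy's unparsing delegate offset/fetch rendering to the dialect rather than hard-coding one syntax. A minimal, self-contained sketch of that delegation pattern follows; the Dialect interface, its unparseOffsetFetch signature, and the unparse helper are hypothetical stand-ins for illustration, not Calcite's actual API.

```java
// Sketch of dialect-delegated offset/fetch unparsing.
// All names here are illustrative, not Calcite's real classes.
public class OffsetFetchSketch {

  /** A dialect decides how to render OFFSET/FETCH for its target database. */
  interface Dialect {
    String unparseOffsetFetch(Integer offset, Integer fetch);
  }

  /** ANSI-style default: OFFSET n ROWS / FETCH NEXT m ROWS ONLY. */
  static final Dialect ANSI = (offset, fetch) -> {
    StringBuilder sb = new StringBuilder();
    if (offset != null) {
      sb.append(" OFFSET ").append(offset).append(" ROWS");
    }
    if (fetch != null) {
      sb.append(" FETCH NEXT ").append(fetch).append(" ROWS ONLY");
    }
    return sb.toString();
  };

  /** Spark-style: render the row limit as LIMIT m instead of FETCH NEXT. */
  static final Dialect SPARK =
      (offset, fetch) -> fetch != null ? " LIMIT " + fetch : "";

  /** If the ORDER BY/LIMIT unparser delegated here, each dialect could
   *  emit its own syntax instead of a hard-coded FETCH NEXT clause. */
  static String unparse(String select, Dialect dialect,
      Integer offset, Integer fetch) {
    return select + dialect.unparseOffsetFetch(offset, fetch);
  }

  public static void main(String[] args) {
    System.out.println(unparse("SELECT * FROM t", ANSI, null, 5));
    System.out.println(unparse("SELECT * FROM t", SPARK, null, 5));
  }
}
```

With this kind of delegation in place, the same parse tree unparses to "FETCH NEXT 5 ROWS ONLY" under the ANSI default but to "LIMIT 5" under the Spark dialect.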
was:
In the SqlOrderBy$Operator class, the unparse method hard-codes the offset and fetch
syntax, which closes the door on transforming my SQL into the "LIMIT x OFFSET y" style.
Why doesn't it invoke dialect.unparseOffsetFetch the way SqlSelectOperator does?
!image-2021-01-12-10-06-29-813.png!
My unparsing code:
{code:java}
String sql = "select concat(a.id,'-',b.id) , a.name from xxx.bb limit 5";
SqlDialect SPARK = new SparkSqlDialect(SqlDialect.EMPTY_CONTEXT
    .withDatabaseProduct(SqlDialect.DatabaseProduct.SPARK)
    .withIdentifierQuoteString("`")
    .withNullCollation(NullCollation.LOW)
    .withLiteralQuoteString("'")
    .withLiteralEscapedQuoteString("''")
    .withUnquotedCasing(Casing.UNCHANGED)
    .withQuotedCasing(Casing.UNCHANGED));
SqlParser.Config config = SqlParser.config()
    .withParserFactory(SqlBabelParserImpl.FACTORY)
    .withConformance(SqlConformanceEnum.LENIENT);
SqlParser sqlParser = SqlParser.create(sql, config);
try {
  SqlNode sqlNode = sqlParser.parseQuery();
  SqlString sqlString = sqlNode.toSqlString(SPARK);
  System.out.println(sqlString);
} catch (SqlParseException e) {
  e.printStackTrace();
}{code}
Result:
{code:java}
SELECT `CONCAT`(`A`.`ID`, '-', `B`.`ID`), `A`.`NAME` FROM `XXX`.`BB` FETCH NEXT 5 ROWS ONLY
{code}
What should I do if I want to transform SQL from some other dialect to Spark,
given that Spark doesn't support "FETCH NEXT 5 ROWS ONLY"?
Summary: dialect.unparseOffsetFetch method doesn't apply to
"SqlOrderBy" sql node (was: why doesn't dialect.unparseOffsetFetch method
apply to SqlOrderBy tree node)
> dialect.unparseOffsetFetch method doesn't apply to "SqlOrderBy" sql node
> ------------------------------------------------------------------------
>
> Key: CALCITE-4463
> URL: https://issues.apache.org/jira/browse/CALCITE-4463
> Project: Calcite
> Issue Type: Bug
> Components: core
> Affects Versions: 1.26.0
> Environment: jvm: open-jdk8
>
> calcite: 1.26.0
> Reporter: yanjing.wang
> Priority: Major
> Attachments: image-2021-01-12-10-06-29-813.png
>
>
> In the SqlOrderBy$Operator class, the unparse method hard-codes the offset and
> fetch syntax, which closes the door on transforming my SQL into the "LIMIT x OFFSET y" style.
>
> !image-2021-01-12-10-06-29-813.png!
>
> My unparsing code:
> {code:java}
> String sql = "select concat(a.id,'-',b.id) , a.name from xxx.bb limit 5";
> SqlDialect SPARK = new SparkSqlDialect(SqlDialect.EMPTY_CONTEXT
>     .withDatabaseProduct(SqlDialect.DatabaseProduct.SPARK)
>     .withIdentifierQuoteString("`")
>     .withNullCollation(NullCollation.LOW)
>     .withLiteralQuoteString("'")
>     .withLiteralEscapedQuoteString("''")
>     .withUnquotedCasing(Casing.UNCHANGED)
>     .withQuotedCasing(Casing.UNCHANGED));
> SqlParser.Config config = SqlParser.config()
>     .withParserFactory(SqlBabelParserImpl.FACTORY)
>     .withConformance(SqlConformanceEnum.LENIENT);
> SqlParser sqlParser = SqlParser.create(sql, config);
> try {
>   SqlNode sqlNode = sqlParser.parseQuery();
>   SqlString sqlString = sqlNode.toSqlString(SPARK);
>   System.out.println(sqlString);
> } catch (SqlParseException e) {
>   e.printStackTrace();
> }{code}
> Result:
> {code:java}
> SELECT `CONCAT`(`A`.`ID`, '-', `B`.`ID`), `A`.`NAME` FROM `XXX`.`BB` FETCH NEXT 5 ROWS ONLY
> {code}
> The "LIMIT 5" clause shouldn't be transformed to "FETCH NEXT 5 ROWS ONLY".
>
> I dug into the "parser.jj" file and found the following lines in the
> "SqlSelect()" production.
> {code:java}
> {
>   return new SqlSelect(s.end(this), keywordList,
>       new SqlNodeList(selectList, Span.of(selectList).pos()),
>       fromClause, where, groupBy, having, windowDecls,
>       null, null, null, new SqlNodeList(hints, getPos()));
> }
> {code}
>
> The "SqlSelect" node always receives null orderBy, offset, and fetch, so a
> "LIMIT" clause is always handled by the "SqlOrderBy" node, whose unparsing of
> offset and fetch cannot be overridden per SQL dialect.
>
--
This message was sent by Atlassian Jira
(v8.3.4#803005)