[ https://issues.apache.org/jira/browse/CALCITE-4463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17263096#comment-17263096 ]

Julian Hyde commented on CALCITE-4463:
--------------------------------------

The SQL {{FETCH}} and {{OFFSET}} clauses are parsed into a {{SqlOrderBy}} node, 
but often their values are pushed into a {{SqlSelect}}. Sometimes it's not 
possible (e.g. when {{SqlOrderBy}} is on top of a {{UNION}}) but I'm not sure 
why this is not happening in your case.

As you observed, {{SqlPrettyWriter.fetchOffset}} is called when unparsing 
{{SqlSelect}} but not when unparsing {{SqlOrderBy}}. That seems to be a bug. 
You should be able to write a simple test case for that, say a RelToSql test 
that has a UNION and a LIMIT and generates SQL for MySQL.
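Such a test might look like the sketch below, modeled on the fixture style of Calcite's existing {{RelToSqlConverterTest}} (the {{sql(...)}}, {{withMysql()}} and {{ok(...)}} fixture methods, the foodmart table names, and the expected output are assumptions for illustration, not verified against a particular Calcite version):

{code:java}
@Test void testUnionWithLimit() {
  // A LIMIT on top of a UNION parses to a SqlOrderBy wrapping a SqlUnion,
  // so unparsing must go through SqlOrderBy rather than SqlSelect; the
  // MySQL dialect should still emit LIMIT, not FETCH NEXT ... ROWS ONLY.
  final String query = "select \"product_id\" from \"product\"\n"
      + "union all\n"
      + "select \"product_id\" from \"sales_fact_1997\"\n"
      + "limit 5";
  final String expected = "SELECT `product_id` FROM `foodmart`.`product`\n"
      + "UNION ALL\n"
      + "SELECT `product_id` FROM `foodmart`.`sales_fact_1997`\n"
      + "LIMIT 5";
  sql(query).withMysql().ok(expected);
}
{code}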

In this project, we don't use JIRA for questions. Please change the summary and 
description of this case so that it reads like a bug.

> why doesn't dialect.unparseOffsetFetch method apply to SqlOrderBy tree node
> ---------------------------------------------------------------------------
>
>                 Key: CALCITE-4463
>                 URL: https://issues.apache.org/jira/browse/CALCITE-4463
>             Project: Calcite
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 1.26.0
>         Environment: jvm: open-jdk8
>  
> calcite: 1.26.0
>            Reporter: yanjing.wang
>            Priority: Major
>         Attachments: image-2021-01-12-10-06-29-813.png
>
>
> In the SqlOrderBy$Operator class, the unparse method hard-codes the OFFSET 
> and FETCH syntax, which closes the door on transforming my SQL into 
> "LIMIT x OFFSET y" style.
> Why doesn't it invoke dialect.unparseOffsetFetch like SqlSelectOperator does?
>  
> !image-2021-01-12-10-06-29-813.png!
>  
> My code to unparse the SQL:
> {code:java}
> String sql = "select concat(a.id,'-',b.id), a.name from xxx.bb limit 5";
> SqlDialect SPARK = new SparkSqlDialect(SqlDialect.EMPTY_CONTEXT
>     .withDatabaseProduct(SqlDialect.DatabaseProduct.SPARK)
>     .withIdentifierQuoteString("`")
>     .withNullCollation(NullCollation.LOW)
>     .withLiteralQuoteString("'")
>     .withLiteralEscapedQuoteString("''")
>     .withUnquotedCasing(Casing.UNCHANGED)
>     .withQuotedCasing(Casing.UNCHANGED));
> SqlParser.Config config = SqlParser.config()
>     .withParserFactory(SqlBabelParserImpl.FACTORY)
>     .withConformance(SqlConformanceEnum.LENIENT);
> SqlParser sqlParser = SqlParser.create(sql, config);
> try {
>   SqlNode sqlNode = sqlParser.parseQuery();
>   SqlString sqlString = sqlNode.toSqlString(SPARK);
>   System.out.println(sqlString);
> } catch (SqlParseException e) {
>   e.printStackTrace();
> }
> {code}
> Result:
> {code:java}
> SELECT `CONCAT`(`A`.`ID`, '-', `B`.`ID`), `A`.`NAME` FROM `XXX`.`BB` FETCH NEXT 5 ROWS ONLY
> {code}
> What should I do if I want to transform SQL from some other dialect to 
> Spark, given that Spark doesn't support "FETCH NEXT 5000 ROWS ONLY"?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
