Thanks all for the discussion.
Since we have reached consensus on abandoning the fallback behavior for the
Hive dialect, if there are no other concerns or objections, I'll close this
discussion tomorrow (6/9).


Best regards,
Yuxia

----- Original Message -----
From: "Martijn Visser" <martijnvis...@apache.org>
To: "dev" <dev@flink.apache.org>
Sent: Monday, June 5, 2023 8:46:14 PM
Subject: Re: [DISCUSS] Hive dialect shouldn't fall back to Flink's default dialect

+1 for anything that helps us with externalizing the Hive connector :D

On Thu, Jun 1, 2023 at 3:34 PM Lincoln Lee <lincoln.8...@gmail.com> wrote:

> +1, thanks yuxia for driving the Hive decoupling work!
> Since the 1.16 release, the compatibility of Hive queries has reached a
> relatively high level, so it is time to abandon the internal fallback,
> which will make the behavior of the Hive dialect clearer.
>
> Best,
> Lincoln Lee
>
>
> Jark Wu <imj...@gmail.com> wrote on Thursday, June 1, 2023 at 21:23:
>
> > +1, I think this can make the grammar more clear.
> > Please remember to add a release note once the issue is finished.
> >
> > Best,
> > Jark
> >
> > On Thu, 1 Jun 2023 at 11:28, yuxia <luoyu...@alumni.sjtu.edu.cn> wrote:
> >
> > > Hi, Jingsong. It's hard to provide an option, given that we also want
> > > to decouple the Hive dialect from the Flink planner.
> > > If we kept the fallback behavior, HiveParser would still depend on the
> > > `ParserImpl` provided by flink-table-planner.
> > > But to minimize the impact on users and be more user-friendly, when
> > > HiveParser fails to parse a statement, the error message will remind
> > > users that they can run `SET table.sql-dialect = default;` to switch to
> > > Flink's default dialect.
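> > >
> > > As a minimal sketch of that flow (the exact wording of the error hint is
> > > still open, and the catalog name and properties below are only
> > > placeholders), a user whose Flink-specific statement fails under the
> > > Hive dialect could then run:
> > >
> > >   SET table.sql-dialect = default;
> > >   -- re-run the Flink-specific statement, e.g. a placeholder catalog DDL:
> > >   CREATE CATALOG my_catalog WITH ('type' = 'generic_in_memory');
> > >   SET table.sql-dialect = hive;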
> > >
> > > Best regards,
> > > Yuxia
> > >
> > > ----- Original Message -----
> > > From: "Jingsong Li" <jingsongl...@gmail.com>
> > > To: "Rui Li" <lirui.fu...@gmail.com>
> > > Cc: "dev" <dev@flink.apache.org>, "yuxia" <luoyu...@alumni.sjtu.edu.cn>,
> > > "User" <u...@flink.apache.org>
> > > Sent: Tuesday, May 30, 2023 3:21:56 PM
> > > Subject: Re: [DISCUSS] Hive dialect shouldn't fall back to Flink's default
> > > dialect
> > >
> > > +1, the fallback looks weird now; it is outdated.
> > >
> > > But it would be good to provide an option; I don't know whether there are
> > > users who depend on this fallback.
> > >
> > > Best,
> > > Jingsong
> > >
> > > On Tue, May 30, 2023 at 1:47 PM Rui Li <lirui.fu...@gmail.com> wrote:
> > > >
> > > > +1, the fallback was just intended as a temporary workaround to run
> > > > catalog/module-related statements with the Hive dialect.
> > > >
> > > > On Mon, May 29, 2023 at 3:59 PM Benchao Li <libenc...@apache.org> wrote:
> > > >>
> > > >> Big +1 on this, thanks yuxia for driving this!
> > > >>
> > > >> yuxia <luoyu...@alumni.sjtu.edu.cn> wrote on Monday, May 29, 2023 at 14:55:
> > > >>
> > > >> > Hi, community.
> > > >> >
> > > >> > I want to start a discussion about making the Hive dialect not fall
> > > >> > back to Flink's default dialect.
> > > >> >
> > > >> > Currently, when HiveParser fails to parse a SQL statement in the
> > > >> > Hive dialect, it falls back to Flink's default parser[1] to handle
> > > >> > Flink-specific statements like "CREATE CATALOG xx WITH (xx);".
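> > > >> >
> > > >> > As a concrete sketch of today's behavior (the catalog name and
> > > >> > properties are just placeholders), the following statement is not
> > > >> > Hive syntax, yet it currently succeeds under the Hive dialect only
> > > >> > because HiveParser silently hands it to the default parser:
> > > >> >
> > > >> >   SET table.sql-dialect = hive;
> > > >> >   -- parsed by the fallback (default) parser, not by HiveParser
> > > >> >   CREATE CATALOG my_catalog WITH ('type' = 'generic_in_memory');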
> > > >> >
> > > >> > As I have been working on the Hive dialect and have recently talked
> > > >> > with community users who use it, I'm proposing to throw an exception
> > > >> > directly instead of falling back to Flink's default dialect when a
> > > >> > statement fails to parse in the Hive dialect.
> > > >> >
> > > >> > Here are some reasons:
> > > >> >
> > > >> > First of all, the fallback hides errors in the Hive dialect. For
> > > >> > example, during the release validation phase[2] we found that the
> > > >> > Hive dialect could no longer be used with the Flink SQL client. We
> > > >> > eventually traced it to a modification in the SQL client, but our
> > > >> > test case couldn't catch it earlier: although HiveParser failed to
> > > >> > parse the statement, it fell back to the default parser and the test
> > > >> > still passed.
> > > >> >
> > > >> > Second, conceptually the Hive dialect should have nothing to do with
> > > >> > Flink's default dialect; they are two totally different dialects. If
> > > >> > we do need a dialect mixing the Hive dialect and the default dialect,
> > > >> > maybe we should propose a new hybrid dialect and announce the hybrid
> > > >> > behavior to users.
> > > >> > Also, the fallback behavior has confused some users; community users
> > > >> > have asked me about it. Throwing an exception directly when a
> > > >> > statement fails to parse in the Hive dialect will be more intuitive.
> > > >> >
> > > >> > Last but not least, it's important to decouple Hive from the Flink
> > > >> > planner[3] before we can externalize the Hive connector[4]. If we
> > > >> > still fall back to Flink's default dialect, we will still depend on
> > > >> > `ParserImpl` in the Flink planner, which blocks us from removing the
> > > >> > provided dependency of the Hive dialect as well as externalizing the
> > > >> > Hive connector.
> > > >> >
> > > >> > Although we have never announced the fallback behavior, some users
> > > >> > may implicitly depend on it in their SQL jobs. So I hereby open the
> > > >> > discussion about abandoning the fallback behavior to keep the Hive
> > > >> > dialect clear and isolated.
> > > >> > Please note it won't break Hive syntax, but Flink-specific syntax may
> > > >> > fail afterwards. For the failed SQL, you can use
> > > >> > `SET table.sql-dialect=default;` to switch to the Flink dialect.
> > > >> > If there are Flink-specific statements that we find should be
> > > >> > included in the Hive dialect for ease of use, I think we can still
> > > >> > add them as specific cases to the Hive dialect.
> > > >> >
> > > >> > Looking forward to your feedback. I'd love to hear the community's
> > > >> > thoughts before taking the next steps.
> > > >> >
> > > >> > [1]: https://github.com/apache/flink/blob/678370b18e1b6c4a23e5ce08f8efd05675a0cc17/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/planner/delegation/hive/HiveParser.java#L348
> > > >> > [2]: https://issues.apache.org/jira/browse/FLINK-26681
> > > >> > [3]: https://issues.apache.org/jira/browse/FLINK-31413
> > > >> > [4]: https://issues.apache.org/jira/browse/FLINK-30064
> > > >> >
> > > >> >
> > > >> >
> > > >> > Best regards,
> > > >> > Yuxia
> > > >> >
> > > >>
> > > >>
> > > >> --
> > > >>
> > > >> Best,
> > > >> Benchao Li
> > > >
> > > >
> > > >
> > > > --
> > > > Best regards!
> > > > Rui Li
> > >
> >
>
