+ 1 for a new FLIP

It would be great if we could take this opportunity to go through all
current Flink SQL syntax and define a feasible classification in the FLIP.
In my opinion, the 4 categories, i.e. DDL, DML, DQL, and DAL, used in the
Spark doc [1] make sense.
We will then have a better big picture to know the gap between Flink and
other big data engines like Spark and what should be done.
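
For instance, the grouping could look roughly like this (representative
statements only, following the classification in the Spark doc [1]; the
table name t is just illustrative):

-- DDL: define or change metadata
CREATE TABLE t (id INT);
-- DML: modify data
INSERT INTO t VALUES (1);
-- DQL: retrieve data
SELECT id FROM t;
-- DAL (auxiliary): inspect or manage the environment
SHOW TABLES;
DESCRIBE t;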

Best regards,
Jing

[1]
https://spark.apache.org/docs/latest/sql-ref-syntax.html#auxiliary-statements

On Tue, Feb 21, 2023 at 1:15 PM Ran Tao <chucheng...@gmail.com> wrote:

> Thanks. I will create a FLIP to illustrate some details.
>
>
> Best Regards,
> Ran Tao
> https://github.com/chucheng92
>
>
> On Tue, Feb 21, 2023 at 20:03, Jark Wu <imj...@gmail.com> wrote:
>
> > Thank you,
> >
> > I think this is worth a FLIP design doc to discuss the detailed syntax.
> > Could you prepare a FLIP for that?
> >
> > Best,
> > Jark
> >
> > On Tue, 21 Feb 2023 at 19:37, Ran Tao <chucheng...@gmail.com> wrote:
> >
> > > Hi Jark, thanks. I have added a Google doc:
> > >
> > > https://docs.google.com/document/d/1hAiOfPx14VTBTOlpyxG7FA2mB1k5M31VnKYad2XpJ1I/edit?usp=sharing
> > >
> > > On Tue, Feb 21, 2023 at 19:27, Jark Wu <imj...@gmail.com> wrote:
> > >
> > > > Hi Ran,
> > > >
> > > > I think it’s always nice to support new syntax if it’s useful for
> > > > users.
> > > > On my side, your syntax table is broken. Could you share it in a
> > > > Google doc or create a JIRA issue?
> > > >
> > > > Best,
> > > > Jark
> > > >
> > > >
> > > >
> > > > > On Feb 21, 2023, at 17:51, Ran Tao <chucheng...@gmail.com> wrote:
> > > > >
> > > > > Hi guys. When I recently used Flink SQL to manage internal metadata
> > > > > (catalogs/databases/tables/functions), I found that many current
> > > > > Flink SQL statements do not support filtering or other advanced
> > > > > syntax; however, these abilities are very useful to end users.
> > > > >
> > > > > These are some statements I have collected so far, which are
> > > > > supported by other big data engines such as Spark, Hive, or Presto.
> > > > > I wonder if we can support these abilities?
> > > > >
> > > > > In addition, the subject of this email is named 'Auxiliary
> > > > > Statements' mainly because aligning these statements will not have
> > > > > much impact on the core SQL runtime.
> > > > >
> > > > > Statement          | Support or Not | With Advanced Syntax Or Not (in/from or like)
> > > > > show create table  | Yes            | Yes
> > > > > show tables        | Yes            | Yes
> > > > > show columns       | Yes            | Yes
> > > > > show catalogs      | Yes            | without filter
> > > > > show databases     | Yes            | without filter
> > > > > show functions     | Yes            | without filter
> > > > > show views         | Yes            | without filter
> > > > > show modules       | Yes            | without filter
> > > > > show jars          | Yes            | without filter
> > > > > show jobs          | Yes            | without filter
> > > > >
> > > > > We can see that many current Flink SQL statements only support
> > > > > showing the full data, without a 'FROM/IN' or 'LIKE' filter clause.
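> > > > >
> > > > > For comparison, a sketch of the filtered forms as described in the
> > > > > Spark/Hive docs (identifiers like my_db and my_table are just
> > > > > illustrative):
> > > > >
> > > > > SHOW TABLES IN my_db LIKE 'user*';
> > > > > SHOW FUNCTIONS LIKE 'date*';
> > > > > SHOW COLUMNS FROM my_table IN my_db;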
> > > > >
> > > > > Statement          | Support or Not
> > > > > describe database  | No
> > > > > describe table     | Yes
> > > > > describe function  | No
> > > > > describe query     | No
> > > > >
> > > > > Currently, Flink only supports describing tables.
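> > > > >
> > > > > As a sketch, richer DESCRIBE support could borrow the forms from the
> > > > > Spark docs (the EXTENDED keyword and exact shapes are taken from
> > > > > Spark, not an agreed Flink design; identifiers are illustrative):
> > > > >
> > > > > DESCRIBE DATABASE EXTENDED my_db;
> > > > > DESCRIBE FUNCTION EXTENDED upper;
> > > > > DESCRIBE QUERY SELECT id FROM my_table;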
> > > > >
> > > > > Also, please let me know if there is a mistake. Looking forward to
> > your
> > > > > reply.
> > > > >
> > > > >
> > > > > Best Regards,
> > > > > Ran Tao
> > > > > https://github.com/chucheng92
> > > >
> > > >
> > >
> >
>
