Hi Weike and Tison,

This is already covered by FLIP-84 [1]: we propose a new method
"executeStatement(String statement)"
which can execute arbitrary statements, including SET and CREATE. This is
in progress [2].
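
For example, once that lands, a job could look roughly like the sketch
below (just an illustration: the method name follows the current FLIP-84
proposal and may still change, and the table names/options are the
placeholders from your example):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

TableEnvironment tEnv = TableEnvironment.create(
    EnvironmentSettings.newInstance()
        .useBlinkPlanner()
        .inStreamingMode()
        .build());

// every statement goes through the same entry point,
// no sqlUpdate/sqlQuery distinction
tEnv.executeStatement(
    "CREATE TABLE abc ( a VARCHAR(10), b BIGINT ) WITH ( 'xxx' = 'yyy' )");
tEnv.executeStatement("SET table.exec.mini-batch.enabled = 'true'");
tEnv.executeStatement("INSERT INTO sink SELECT * FROM abc");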

Best,
Jark

[1]:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=134745878
[2]: https://issues.apache.org/jira/browse/FLINK-16366

On Mon, 9 Mar 2020 at 13:22, tison <wander4...@gmail.com> wrote:

> Hi Weike,
>
> Thanks for kicking off this discussion! I cannot agree more with the
> proposal for a universal sql() method. Having to distinguish between
> sqlUpdate/sqlQuery, and even insertInto and so on, confuses and annoys
> our users a lot.
>
> IIRC there is an ongoing FLIP [1] dealing with this problem. You can
> check it out to see if it fits your requirements.
>
> Besides, regarding enabling SET as a SQL statement, I agree that it
> helps give users a consistent experience of describing their Flink job
> with *just* SQL. Looking forward to the maintainers' thoughts on the
> possibility and the plan.
>
> Best,
> tison.
>
> [1]
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=134745878
>
>
> On Mon, 9 Mar 2020 at 12:46, DONG, Weike <kyled...@connect.hku.hk> wrote:
>
>> Hi dev,
>>
>> Recently we have tested the brand-new SQL Client and the Flink SQL
>> module, and we are amazed at how simple this style of programming makes
>> streaming data analysis. However, as far as I know, the SET command is
>> only available in the SQL Client, not in the SQL API.
>>
>> We understand that developers can set TableConfig via the
>> tEnv.getConfig().getConfiguration() API; however, we hope there could be
>> an API like sqlSet(), or something similar, that allows table
>> configuration to be set within SQL statements themselves. This would
>> pave the way for a unified interface for writing a Flink SQL job,
>> without the need to write any Java or Scala code in a production
>> environment.
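>>
>> For clarity, the workaround we use today looks roughly like the sketch
>> below (just an illustration; it assumes the blink planner and that the
>> source and sink tables are registered elsewhere):
>>
>> import org.apache.flink.table.api.EnvironmentSettings;
>> import org.apache.flink.table.api.TableEnvironment;
>>
>> TableEnvironment tEnv = TableEnvironment.create(
>>     EnvironmentSettings.newInstance()
>>         .useBlinkPlanner()
>>         .inStreamingMode()
>>         .build());
>>
>> // configuration has to be set in Java/Scala code, outside the SQL text
>> tEnv.getConfig().getConfiguration()
>>     .setString("table.exec.mini-batch.enabled", "true");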
>>
>> Moreover, it would be even better if there were an API that
>> automatically detects the type of a SQL statement and chooses the right
>> logic to execute, instead of requiring users to choose between sqlUpdate
>> and sqlQuery manually, e.g.
>>
>> sql("CREATE TABLE abc ( a VARCHAR(10), b BIGINT ) WITH ( 'xxx' = 'yyy'
>> )");
>> sql("SET table.exec.mini-batch.enabled = 'true'");
>> sql("INSERT INTO sink SELECT * FROM abc");
>>
>> Then users could simply write their SQL code in .sql files, and Flink
>> could read them statement by statement, call the sql() method to parse
>> the code, and eventually submit the program to the ExecutionEnvironment
>> and run it on the cluster. This is different from the current SQL
>> Client, whose interactive style of programming is not well suited for
>> production usage.
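>>
>> As an illustration, a driver program could be as small as the rough
>> sketch below (sql() is the hypothetical unified method, tEnv is the
>> TableEnvironment from the snippet above, and we naively assume
>> statements are separated by semicolons):
>>
>> import java.nio.file.Files;
>> import java.nio.file.Paths;
>>
>> String script = new String(Files.readAllBytes(Paths.get("job.sql")));
>> // naive split on ';'; a real implementation would need proper parsing
>> for (String statement : script.split(";")) {
>>     if (!statement.trim().isEmpty()) {
>>         tEnv.sql(statement.trim());   // hypothetical unified method
>>     }
>> }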
>>
>> We would like to know whether these proposals contradict the current
>> plans of the community, or whether there are any other issues that
>> should be addressed before implementing such features.
>>
>> Thanks,
>> Weike
>>
>
