Re: [ANNOUNCE] New Apache Flink Committer - Lijie Wang

2022-08-17 Thread Terry Wang
Congratulations, Lijie!

On Thu, Aug 18, 2022 at 11:31 AM Leonard Xu  wrote:

> Congratulations, Lijie!
>
> Best,
> Leonard
>
> > 2022年8月18日 上午11:26,Zhipeng Zhang  写道:
> >
> > Congratulations, Lijie!
> >
> > Xintong Song  于2022年8月18日周四 11:23写道:
> >>
> >> Congratulations Lijie, and welcome~!
> >>
> >> Best,
> >>
> >> Xintong
> >>
> >>
> >>
> >> On Thu, Aug 18, 2022 at 11:12 AM Xingbo Huang 
> wrote:
> >>
> >>> Congrats, Lijie
> >>>
> >>> Best,
> >>> Xingbo
> >>>
> >>> Lincoln Lee  于2022年8月18日周四 11:01写道:
> >>>
> >>>> Congratulations, Lijie!
> >>>>
> >>>> Best,
> >>>> Lincoln Lee
> >>>>
> >>>>
> >>>> Benchao Li  于2022年8月18日周四 10:51写道:
> >>>>
> >>>>> Congratulations Lijie!
> >>>>>
> >>>>> yanfei lei  于2022年8月18日周四 10:44写道:
> >>>>>
> >>>>>> Congratulations, Lijie!
> >>>>>>
> >>>>>> Best,
> >>>>>> Yanfei
> >>>>>>
> >>>>>> JunRui Lee  于2022年8月18日周四 10:35写道:
> >>>>>>
> >>>>>>> Congratulations, Lijie!
> >>>>>>>
> >>>>>>> Best,
> >>>>>>> JunRui
> >>>>>>>
> >>>>>>> Timo Walther  于2022年8月17日周三 19:30写道:
> >>>>>>>
> >>>>>>>> Congratulations and welcome to the committer team :-)
> >>>>>>>>
> >>>>>>>> Regards,
> >>>>>>>> Timo
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> On 17.08.22 12:50, Yuxin Tan wrote:
> >>>>>>>>> Congratulations, Lijie!
> >>>>>>>>>
> >>>>>>>>> Best,
> >>>>>>>>> Yuxin
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> Guowei Ma  于2022年8月17日周三 18:42写道:
> >>>>>>>>>
> >>>>>>>>>> Congratulations, Lijie. Welcome on board~!
> >>>>>>>>>> Best,
> >>>>>>>>>> Guowei
> >>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> On Wed, Aug 17, 2022 at 6:25 PM Zhu Zhu 
> >>>> wrote:
> >>>>>>>>>>
> >>>>>>>>>>> Hi everyone,
> >>>>>>>>>>>
> >>>>>>>>>>> On behalf of the PMC, I'm very happy to announce Lijie Wang
> >>> as
> >>>>>>>>>>> a new Flink committer.
> >>>>>>>>>>>
> >>>>>>>>>>> Lijie has been contributing to Flink project for more than 2
> >>>>> years.
> >>>>>>>>>>> He mainly works on the runtime/coordination part, doing
> >>> feature
> >>>>>>>>>>> development, problem debugging and code reviews. He has also
> >>>>>>>>>>> driven the work of FLIP-187(Adaptive Batch Scheduler) and
> >>>>>>>>>>> FLIP-224(Blocklist for Speculative Execution), which are
> >>>>> important
> >>>>>>>>>>> to run batch jobs.
> >>>>>>>>>>>
> >>>>>>>>>>> Please join me in congratulating Lijie for becoming a Flink
> >>>>>>> committer!
> >>>>>>>>>>>
> >>>>>>>>>>> Cheers,
> >>>>>>>>>>> Zhu
> >>>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>
> >>>>>>
> >>>>>
> >>>>>
> >>>>> --
> >>>>>
> >>>>> Best,
> >>>>> Benchao Li
> >>>>>
> >>>>
> >>>
> >
> >
> >
> > --
> > best,
> > Zhipeng
>
>

-- 
Best Regards,
Terry Wang


Re: Re: [ANNOUNCE] New Apache Flink Committers: Qingsheng Ren, Shengkai Fang

2022-06-20 Thread Terry Wang
Congratulations, Qingsheng and Shengkai!


Re: Re: [ANNOUNCE] New Flink PMC member: Yang Wang

2022-05-10 Thread Terry Wang
Congrats Yang!

On Mon, May 9, 2022 at 11:19 AM LuNing Wang  wrote:

> Congrats Yang!
>
> Best,
> LuNing Wang
>
> Dian Fu  于2022年5月7日周六 17:21写道:
>
> > Congrats Yang!
> >
> > Regards,
> > Dian
> >
> > On Sat, May 7, 2022 at 12:51 PM Jacky Lau  wrote:
> >
> > > Congrats Yang and well Deserved!
> > >
> > > Best,
> > > Jacky Lau
> > >
> > > Yun Gao  于2022年5月7日周六 10:44写道:
> > >
> > > > Congratulations Yang!
> > > >
> > > > Best,
> > > > Yun Gao
> > > >
> > > >
> > > >
> > > >  --Original Mail --
> > > > Sender:David Morávek 
> > > > Send Date:Sat May 7 01:05:41 2022
> > > > Recipients:Dev 
> > > > Subject:Re: [ANNOUNCE] New Flink PMC member: Yang Wang
> > > > Nice! Congrats Yang, well deserved! ;)
> > > >
> > > > On Fri 6. 5. 2022 at 17:53, Peter Huang 
> > > > wrote:
> > > >
> > > > > Congrats, Yang!
> > > > >
> > > > >
> > > > >
> > > > > Best Regards
> > > > > Peter Huang
> > > > >
> > > > > On Fri, May 6, 2022 at 8:46 AM Yu Li  wrote:
> > > > >
> > > > > > Congrats and welcome, Yang!
> > > > > >
> > > > > > Best Regards,
> > > > > > Yu
> > > > > >
> > > > > >
> > > > > > On Fri, 6 May 2022 at 14:48, Paul Lam 
> > wrote:
> > > > > >
> > > > > > > Congrats, Yang! Well Deserved!
> > > > > > >
> > > > > > > Best,
> > > > > > > Paul Lam
> > > > > > >
> > > > > > > > 2022年5月6日 14:38,Yun Tang  写道:
> > > > > > > >
> > > > > > > > Congratulations, Yang!
> > > > > > > >
> > > > > > > > Best
> > > > > > > > Yun Tang
> > > > > > > > 
> > > > > > > > From: Jing Ge 
> > > > > > > > Sent: Friday, May 6, 2022 14:24
> > > > > > > > To: dev 
> > > > > > > > Subject: Re: [ANNOUNCE] New Flink PMC member: Yang Wang
> > > > > > > >
> > > > > > > > Congrats Yang and well Deserved!
> > > > > > > >
> > > > > > > > Best regards,
> > > > > > > > Jing
> > > > > > > >
> > > > > > > > On Fri, May 6, 2022 at 7:38 AM Lincoln Lee <
> > > lincoln.8...@gmail.com
> > > > >
> > > > > > > wrote:
> > > > > > > >
> > > > > > > >> Congratulations Yang!
> > > > > > > >>
> > > > > > > >> Best,
> > > > > > > >> Lincoln Lee
> > > > > > > >>
> > > > > > > >>
> > > > > > > >> Őrhidi Mátyás  于2022年5月6日周五
> 12:46写道:
> > > > > > > >>
> > > > > > > >>> Congrats Yang! Well deserved!
> > > > > > > >>> Best,
> > > > > > > >>> Matyas
> > > > > > > >>>
> > > > > > > >>> On Fri, May 6, 2022 at 5:30 AM huweihua <
> > > huweihua@gmail.com>
> > > > > > > wrote:
> > > > > > > >>>
> > > > > > > >>>> Congratulations Yang!
> > > > > > > >>>>
> > > > > > > >>>> Best,
> > > > > > > >>>> Weihua
> > > > > > > >>>>
> > > > > > > >>>>
> > > > > > > >>>
> > > > > > > >>
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>


-- 
Best Regards,
Terry Wang


Re: [ANNOUNCE] New PMC member: Yuan Mei

2022-03-17 Thread Terry Wang
> >>>>>>>>>>>> wrote:
> > >>>>>>>>>>>>
> > >>>>>>>>>>>>> Congratulations :)
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>> pon., 14 mar 2022 o 09:59 Yun Tang  > >>>
> > >>>>>>>> napisał(a):
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>>> Congratulations, Yuan!
> > >>>>>>>>>>>>>>
> > >>>>>>>>>>>>>> Best,
> > >>>>>>>>>>>>>> Yun Tang
> > >>>>>>>>>>>>>> 
> > >>>>>>>>>>>>>> From: Zakelly Lan 
> > >>>>>>>>>>>>>> Sent: Monday, March 14, 2022 16:55
> > >>>>>>>>>>>>>> To: dev@flink.apache.org 
> > >>>>>>>>>>>>>> Subject: Re: [ANNOUNCE] New PMC member: Yuan Mei
> > >>>>>>>>>>>>>>
> > >>>>>>>>>>>>>> Congratulations, Yuan!
> > >>>>>>>>>>>>>>
> > >>>>>>>>>>>>>> Best,
> > >>>>>>>>>>>>>> Zakelly
> > >>>>>>>>>>>>>>
> > >>>>>>>>>>>>>> On Mon, Mar 14, 2022 at 4:49 PM Johannes Moser <
> > >>>>>>>>> j...@ververica.com>
> > >>>>>>>>>>>>> wrote:
> > >>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>> Congrats Yuan.
> > >>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>> On 14.03.2022, at 09:45, Arvid Heise <
> > >>>> ar...@apache.org
> > >>>>>>>
> > >>>>>>>>> wrote:
> > >>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>> Congratulations and well deserved!
> > >>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>> On Mon, Mar 14, 2022 at 9:30 AM Matthias Pohl <
> > >>>>>>>>>> map...@apache.org
> > >>>>>>>>>>>>
> > >>>>>>>>>>>>>> wrote:
> > >>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>> Congratulations, Yuan.
> > >>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>> On Mon, Mar 14, 2022 at 9:25 AM Shuo Cheng <
> > >>>>>>>>>> njucs...@gmail.com>
> > >>>>>>>>>>>>>> wrote:
> > >>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>> Congratulations, Yuan!
> > >>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>> On Mon, Mar 14, 2022 at 4:22 PM Anton
> > >>>> Kalashnikov <
> > >>>>>>>>>>>>>> kaa@yandex.com>
> > >>>>>>>>>>>>>>>>>> wrote:
> > >>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>> Congratulations, Yuan!
> > >>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>> --
> > >>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>> Best regards,
> > >>>>>>>>>>>>>>>>>>> Anton Kalashnikov
> > >>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>> 14.03.2022 09:13, Leonard Xu пишет:
> > >>>>>>>>>>>>>>>>>>>> Congratulations Yuan!
> > >>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>> Best,
> > >>>>>>>>>>>>>>>>>>>> Leonard
> > >>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>> 2022年3月14日 下午4:09,Yangze Guo <
> > >>>> karma...@gmail.com>
> > >>>>>>> 写道:
> > >>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>> Congratulations!
> > >>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>> Best,
> > >>>>>>>>>>>>>>>>>>>>> Yangze Guo
> > >>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>> On Mon, Mar 14, 2022 at 4:08 PM Martijn
> > >>>> Visser <
> > >>>>>>>>>>>>>>>>>>> martijnvis...@apache.org> wrote:
> > >>>>>>>>>>>>>>>>>>>>>> Congratulations Yuan!
> > >>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>> On Mon, 14 Mar 2022 at 09:02, Yu Li <
> > >>>>>>>> car...@gmail.com>
> > >>>>>>>>>>>> wrote:
> > >>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>> Hi all!
> > >>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>> I'm very happy to announce that Yuan Mei
> > >>> has
> > >>>>>>> joined
> > >>>>>>>>> the
> > >>>>>>>>>>>> Flink
> > >>>>>>>>>>>>>> PMC!
> > >>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>> Yuan is helping the community a lot with
> > >>>>>> creating
> > >>>>>>>> and
> > >>>>>>>>>>>>> validating
> > >>>>>>>>>>>>>>>>>>> releases,
> > >>>>>>>>>>>>>>>>>>>>>>> contributing to FLIP discussions and
> > >> good
> > >>>> code
> > >>>>>>>>>>> contributions
> > >>>>>>>>>>>>> to
> > >>>>>>>>>>>>>>>>> the
> > >>>>>>>>>>>>>>>>>>>>>>> state backend and related components.
> > >>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>> Congratulations and welcome, Yuan!
> > >>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>>>>> Best Regards,
> > >>>>>>>>>>>>>>>>>>>>>>> Yu (On behalf of the Apache Flink PMC)
> > >>>>>>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>> --
> > >>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>> Best regards,
> > >>>>>>>>>>>>>>>>>>> Anton Kalashnikov
> > >>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>
> > >>>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>>
> > >>>>>>>>>> --
> > >>>>>>>>>>
> > >>>>>>>>>> Konstantin Knauf
> > >>>>>>>>>>
> > >>>>>>>>>> https://twitter.com/snntrable
> > >>>>>>>>>>
> > >>>>>>>>>> https://github.com/knaufk
> > >>>>>>>>>>
> > >>>>>>>>>
> > >>>>>>>>
> > >>>>>>>
> > >>>>>>>
> > >>>>>>
> > >>>>>> --
> > >>>>>>
> > >>>>>> Best,
> > >>>>>> Benchao Li
> > >>>>>>
> > >>>>
> > >>>
> > >>
> >
> >
>


-- 
Best Regards,
Terry Wang


Re: [VOTE][FLIP-195] Improve the name and structure of vertex and operator name for job

2021-11-23 Thread Terry Wang
+1(non-binding)

Very helpful improvement!

Best,
Terry Wang



> 2021年11月23日 下午3:59,wenlong.lwl  写道:
> 
> Hi everyone,
> 
> Based on the discussion[1], we seem to have consensus, so I would like to
> start a vote on FLIP-195 [2].
> Thanks for all of your feedback.
> 
> The vote will last for at least 72 hours (Nov 26th 16:00 GMT) unless
> there is an objection or insufficient votes.
> 
> [1] https://lists.apache.org/thread/kvdxr8db0l5s6wk7hwlt0go5fms99b8t
> [2]
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-195%3A+Improve+the+name+and+structure+of+vertex+and+operator+name+for+job
> 
> Best,
> Wenlong Lyu



Re: [ANNOUNCE] New Apache Flink Committer - Leonard Xu

2021-11-15 Thread Terry Wang
Congratulations, Leonard Xu!
Well deserved!

Best,
Terry Wang



> 2021年11月16日 下午1:59,OpenInx  写道:
> 
> Congrats,  Leonard!
> 
> On Tue, Nov 16, 2021 at 12:10 AM Leonard Xu  wrote:
> 
>> Thank you all, it’s an honor to work in our community with everyone.
>> 
>> I will continue to contribute to the Flink community and Flink
>> ecology(e.g.Flink CDC Connectors).
>> 
>> Best,
>> Leonard
>> 
>> 
>> 
>>> 在 2021年11月15日,22:27,Dawid Wysakowicz  写道:
>>> 
>>> Congrats!
>>> 
>>> On 12/11/2021 05:12, Jark Wu wrote:
>>>> Hi everyone,
>>>> 
>>>> On behalf of the PMC, I'm very happy to announce Leonard Xu as a new
>> Flink
>>>> committer.
>>>> 
>>>> Leonard has been a very active contributor for more than two year,
>> authored
>>>> 150+ PRs and reviewed many PRs which is quite outstanding.
>>>> Leonard mainly works on Flink SQL parts and drives several important
>> FLIPs,
>>>> e.g. FLIP-132 (temporal table join) and FLIP-162 (correct time
>> behaviors).
>>>> He is also the maintainer of flink-cdc-connectors[1] project which
>> helps a
>>>> lot for users building a real-time data warehouse and data lake.
>>>> 
>>>> Please join me in congratulating Leonard for becoming a Flink committer!
>>>> 
>>>> Cheers,
>>>> Jark Wu
>>>> 
>>>> [1]: https://github.com/ververica/flink-cdc-connectors
>>>> 
>>> 
>> 
>> 



Re: [ANNOUNCE] New Apache Flink Committer - Jing Zhang

2021-11-15 Thread Terry Wang
Congratulations, Jing!
Well deserved!

Best,
Terry Wang



> 2021年11月16日 上午11:27,Zhilong Hong  写道:
> 
> Congratulations, Jing!
> 
> Best regards,
> Zhilong Hong
> 
> On Mon, Nov 15, 2021 at 9:41 PM Martijn Visser 
> wrote:
> 
>> Congratulations Jing!
>> 
>> On Mon, 15 Nov 2021 at 14:39, Timo Walther  wrote:
>> 
>>> Hi everyone,
>>> 
>>> On behalf of the PMC, I'm very happy to announce Jing Zhang as a new
>>> Flink committer.
>>> 
>>> Jing has been very active in the Flink community esp. in the Table/SQL
>>> area for quite some time: 81 PRs [1] in total and is also active on
>>> answering questions on the user mailing list. She is currently
>>> contributing a lot around the new windowing table-valued functions [2].
>>> 
>>> Please join me in congratulating Jing Zhang for becoming a Flink
>> committer!
>>> 
>>> Thanks,
>>> Timo
>>> 
>>> [1] https://github.com/apache/flink/pulls/beyond1920
>>> [2] https://issues.apache.org/jira/browse/FLINK-23997
>>> 
>> 



[jira] [Created] (FLINK-23289) BinarySection should null-check in constructor method

2021-07-06 Thread Terry Wang (Jira)
Terry Wang created FLINK-23289:
--

 Summary: BinarySection should null-check in constructor method
 Key: FLINK-23289
 URL: https://issues.apache.org/jira/browse/FLINK-23289
 Project: Flink
  Issue Type: Improvement
Reporter: Terry Wang



{code:java}
Caused by: java.lang.NullPointerException
    at 
org.apache.flink.table.data.binary.BinarySegmentUtils.inFirstSegment(BinarySegmentUtils.java:411)
    at 
org.apache.flink.table.data.binary.BinarySegmentUtils.copyToBytes(BinarySegmentUtils.java:132)
    at 
org.apache.flink.table.data.binary.BinarySegmentUtils.copyToBytes(BinarySegmentUtils.java:118)
    at 
org.apache.flink.table.data.binary.BinaryStringData.copy(BinaryStringData.java:360)
    at 
org.apache.flink.table.runtime.typeutils.StringDataSerializer.copy(StringDataSerializer.java:59)
    at 
org.apache.flink.table.runtime.typeutils.StringDataSerializer.copy(StringDataSerializer.java:37)
    at 
org.apache.flink.table.runtime.typeutils.ArrayDataSerializer.copyGenericArray(ArrayDataSerializer.java:128)
    at 
org.apache.flink.table.runtime.typeutils.ArrayDataSerializer.copy(ArrayDataSerializer.java:86)
    at 
org.apache.flink.table.runtime.typeutils.ArrayDataSerializer.copy(ArrayDataSerializer.java:47)
    at 
org.apache.flink.table.runtime.typeutils.RowDataSerializer.copyRowData(RowDataSerializer.java:170)
    at 
org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:131)
    at 
org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:48)
    at 
org.apache.flink.table.runtime.operators.join.lookup.AsyncLookupJoinWithCalcRunner$CalcCollectionCollector.collect(AsyncLookupJoinWithCalcRunner.java:152)
    at 
org.apache.flink.table.runtime.operators.join.lookup.AsyncLookupJoinWithCalcRunner$CalcCollectionCollector.collect(AsyncLookupJoinWithCalcRunner.java:142)
{code}
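
A minimal sketch of the proposed guard (assuming a constructor of the shape BinarySection(MemorySegment[] segments, int offset, int sizeInBytes); the details are illustrative, not taken from the actual class):

{code:java}
public BinarySection(MemorySegment[] segments, int offset, int sizeInBytes) {
    // Fail fast with a clear message instead of the deferred NPE in
    // BinarySegmentUtils seen above, which only surfaces when the
    // section is first read.
    if (segments == null || segments.length == 0) {
        throw new IllegalArgumentException("segments must not be null or empty");
    }
    this.segments = segments;
    this.offset = offset;
    this.sizeInBytes = sizeInBytes;
}
{code}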




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21972) Check whether TemporalTableSourceSpec can be serialized or not

2021-03-25 Thread Terry Wang (Jira)
Terry Wang created FLINK-21972:
--

 Summary: Check whether TemporalTableSourceSpec  can be serialized 
or not 
 Key: FLINK-21972
 URL: https://issues.apache.org/jira/browse/FLINK-21972
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Terry Wang
 Fix For: 1.13.0


Check whether TemporalTableSourceSpec  can be serialized or not 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21868) Support StreamExecLookupJoin json serialization/deserialization

2021-03-19 Thread Terry Wang (Jira)
Terry Wang created FLINK-21868:
--

 Summary: Support StreamExecLookupJoin json 
serialization/deserialization
 Key: FLINK-21868
 URL: https://issues.apache.org/jira/browse/FLINK-21868
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Terry Wang
 Fix For: 1.13.0


Support StreamExecLookupJoin json serialization/deserialization



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21864) Support StreamExecTemporalJoin json serialization/deserialization

2021-03-18 Thread Terry Wang (Jira)
Terry Wang created FLINK-21864:
--

 Summary: Support StreamExecTemporalJoin json 
serialization/deserialization
 Key: FLINK-21864
 URL: https://issues.apache.org/jira/browse/FLINK-21864
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Terry Wang
 Fix For: 1.13.0


Support StreamExecTemporalJoin json serialization/deserialization



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21837) Support StreamExecIntervalJoin/StreamExecLookupJoin/StreamExecTemporalJoin json ser/des

2021-03-16 Thread Terry Wang (Jira)
Terry Wang created FLINK-21837:
--

 Summary: Support 
StreamExecIntervalJoin/StreamExecLookupJoin/StreamExecTemporalJoin json ser/des
 Key: FLINK-21837
 URL: https://issues.apache.org/jira/browse/FLINK-21837
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Terry Wang
 Fix For: 1.13.0


Support StreamExecIntervalJoin/StreamExecLookupJoin/StreamExecTemporalJoin json 
ser/des



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21811) Support StreamExecJoin json serialization/deserialization

2021-03-16 Thread Terry Wang (Jira)
Terry Wang created FLINK-21811:
--

 Summary: Support StreamExecJoin json serialization/deserialization
 Key: FLINK-21811
 URL: https://issues.apache.org/jira/browse/FLINK-21811
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Terry Wang
 Fix For: 1.13.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21802) LogicalTypeJsonDeserializer/Serializer custom RowType/MapType/ArrayType/MultisetType

2021-03-15 Thread Terry Wang (Jira)
Terry Wang created FLINK-21802:
--

 Summary: LogicalTypeJsonDeserializer/Serializer custom 
RowType/MapType/ArrayType/MultisetType
 Key: FLINK-21802
 URL: https://issues.apache.org/jira/browse/FLINK-21802
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Terry Wang
 Fix For: 1.13.0


We should customize the RowType/MapType/ArrayType/MultisetType serialization/deserialization methods to preserve special LogicalType attributes such as TimestampType's kind field.
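
For context, a small illustration (a sketch, not from the original report) of why the kind field matters: both values below print as TIMESTAMP(3), but only one is a rowtime attribute, so a serializer that records only the type name and precision cannot round-trip them.

{code:java}
import org.apache.flink.table.types.logical.TimestampKind;
import org.apache.flink.table.types.logical.TimestampType;

TimestampType plain   = new TimestampType(true, TimestampKind.REGULAR, 3);
TimestampType rowtime = new TimestampType(true, TimestampKind.ROWTIME, 3);
{code}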



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-21744) Support StreamExecDeduplicate json serialization/deserialization

2021-03-12 Thread Terry Wang (Jira)
Terry Wang created FLINK-21744:
--

 Summary: Support StreamExecDeduplicate json 
serialization/deserialization
 Key: FLINK-21744
 URL: https://issues.apache.org/jira/browse/FLINK-21744
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Terry Wang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17553) Group by constant and window causes error: Unsupported call: TUMBLE_END(TIMESTAMP(3) NOT NULL)

2020-05-07 Thread Terry Wang (Jira)
Terry Wang created FLINK-17553:
--

 Summary: Group by constant and window causes error:  Unsupported 
call: TUMBLE_END(TIMESTAMP(3) NOT NULL)
 Key: FLINK-17553
 URL: https://issues.apache.org/jira/browse/FLINK-17553
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Reporter: Terry Wang
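
A hypothetical minimal reproduction (sketch only; the original report did not include one), assuming a source table src with a rowtime attribute ts registered on tEnv:

{code:java}
// Grouping by a constant together with a TUMBLE window and then
// selecting TUMBLE_END is expected to fail with:
//   Unsupported call: TUMBLE_END(TIMESTAMP(3) NOT NULL)
Table result = tEnv.sqlQuery(
    "SELECT 'k' AS grp, TUMBLE_END(ts, INTERVAL '1' MINUTE) AS w_end, COUNT(*) AS cnt " +
    "FROM src " +
    "GROUP BY 'k', TUMBLE(ts, INTERVAL '1' MINUTE)");
{code}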






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17313) Validation error when insert decimal/timestamp/varchar with precision into sink using TypeInformation of row

2020-04-21 Thread Terry Wang (Jira)
Terry Wang created FLINK-17313:
--

 Summary: Validation error when insert decimal/timestamp/varchar 
with precision into sink using TypeInformation of row
 Key: FLINK-17313
 URL: https://issues.apache.org/jira/browse/FLINK-17313
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Reporter: Terry Wang


Test code like the following (in the Blink planner):
{code:java}
tEnv.sqlUpdate("create table randomSource (" +
"   a varchar(10)," 
+
"   b 
decimal(20,2)" +
"   ) with (" +
"   'type' = 
'random'," +
"   'count' = '10'" 
+
"   )");
tEnv.sqlUpdate("create table printSink (" +
"   a varchar(10)," 
+
"   b 
decimal(22,2)," +
"   c 
timestamp(3)," +
"   ) with (" +
"   'type' = 'print'" +
"   )");
tEnv.sqlUpdate("insert into printSink select *, 
current_timestamp from randomSource");
tEnv.execute("");
{code}

The print TableSink implements UpsertStreamTableSink, and its getRecordType is as follows:

{code:java}
public TypeInformation<Row> getRecordType() {
    return getTableSchema().toRowType();
}
{code}

(The likely cause: TableSchema.toRowType() is based on legacy TypeInformation, which erases precision, e.g. VARCHAR(10) becomes STRING, so the physical/logical type compatibility check fails.)


The varchar type exception is:

org.apache.flink.table.api.ValidationException: Type VARCHAR(10) of table 
field 'a' does not match with the physical type STRING of the 'a' field of the 
TableSink consumed type.

at 
org.apache.flink.table.utils.TypeMappingUtils.lambda$checkPhysicalLogicalTypeCompatible$4(TypeMappingUtils.java:165)
at 
org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:278)
at 
org.apache.flink.table.utils.TypeMappingUtils$1.defaultMethod(TypeMappingUtils.java:255)
at 
org.apache.flink.table.types.logical.utils.LogicalTypeDefaultVisitor.visit(LogicalTypeDefaultVisitor.java:67)
at 
org.apache.flink.table.types.logical.VarCharType.accept(VarCharType.java:157)
at 
org.apache.flink.table.utils.TypeMappingUtils.checkIfCompatible(TypeMappingUtils.java:255)
at 
org.apache.flink.table.utils.TypeMappingUtils.checkPhysicalLogicalTypeCompatible(TypeMappingUtils.java:161)
at 
org.apache.flink.table.planner.sinks.TableSinkUtils$$anonfun$validateLogicalPhysicalTypesCompatible$1.apply$mcVI$sp(TableSinkUtils.scala:315)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:160)
at 
org.apache.flink.table.planner.sinks.TableSinkUtils$.validateLogicalPhysicalTypesCompatible(TableSinkUtils.scala:308)
at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:195)
at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$2.apply(PlannerBase.scala:191)
at scala.Option.map(Option.scala:146)
at 
org.apache.flink.table.planner.delegation.PlannerBase.translateToRel(PlannerBase.scala:191)
at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
at 
org.apache.flink.table.planner.delegation.PlannerBase$$anonfun$1.apply(PlannerBase.scala:150)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at 
scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at 
org.apache.flink.table.planner.delegation.PlannerBase.translate(PlannerBase.scala:150)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.translate(TableEnvironmentImpl.java:863)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.trans

[jira] [Created] (FLINK-17263) Remove RepeatFamilyOperandTypeChecker in blink planner and replace it with calcite's CompositeOperandTypeChecker

2020-04-20 Thread Terry Wang (Jira)
Terry Wang created FLINK-17263:
--

 Summary: Remove RepeatFamilyOperandTypeChecker in blink planner 
and replace it  with calcite's CompositeOperandTypeChecker
 Key: FLINK-17263
 URL: https://issues.apache.org/jira/browse/FLINK-17263
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Planner
Affects Versions: 1.11.0
Reporter: Terry Wang


Remove RepeatFamilyOperandTypeChecker in the blink planner and replace it with 
Calcite's CompositeOperandTypeChecker.
What CompositeOperandTypeChecker can do seems to be a superset of 
RepeatFamilyOperandTypeChecker, so to keep the code easy to read it's better to 
do such a refactor.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [ANNOUNCE] New Apache Flink PMC Member - Hequn Chen

2020-04-19 Thread Terry Wang
Congratulations Hequn !!!
Best,
Terry Wang



> 2020年4月20日 10:20,Jingsong Li  写道:
> 
> Congratulations Hequn!
> 
> Best,
> Jingsong Lee
> 
> On Mon, Apr 20, 2020 at 9:52 AM jincheng sun 
> wrote:
> 
>> Congratulations and welcome on board Hequn!
>> 
>> Best,
>> Jincheng
>> 
>> 
>> 
>> Zhijiang  于2020年4月19日周日 下午10:47写道:
>> 
>>> Congratulations, Hequn!
>>> 
>>> Best,
>>> Zhijiang
>>> 
>>> 
>>> --
>>> From:Yun Gao 
>>> Send Time:2020 Apr. 19 (Sun.) 21:53
>>> To:dev 
>>> Subject:Re: [ANNOUNCE] New Apache Flink PMC Member - Hequn Chen
>>> 
>>>   Congratulations Hequn!
>>> 
>>>   Best,
>>>Yun
>>> 
>>> 
>>> --
>>> From:Hequn Cheng 
>>> Send Time:2020 Apr. 18 (Sat.) 12:48
>>> To:dev 
>>> Subject:Re: [ANNOUNCE] New Apache Flink PMC Member - Hequn Chen
>>> 
>>> Many thanks for your support. Thank you!
>>> 
>>> Best,
>>> Hequn
>>> 
>>> On Sat, Apr 18, 2020 at 1:27 AM Jacky Bai 
>> wrote:
>>> 
>>>> Congratulations!Hequn Chen.I hope to make so many contributions to
>> Flink
>>>> like you.
>>>> 
>>>> Best
>>>> Bai Xu
>>>> 
>>>> Congxian Qiu  于2020年4月17日周五 下午10:47写道:
>>>> 
>>>>> Congratulations, Hequn!
>>>>> 
>>>>> Best,
>>>>> Congxian
>>>>> 
>>>>> 
>>>>> Yu Li  于2020年4月17日周五 下午9:36写道:
>>>>> 
>>>>>> Congratulations, Hequn!
>>>>>> 
>>>>>> Best Regards,
>>>>>> Yu
>>>>>> 
>>>>>> 
>>>>>> On Fri, 17 Apr 2020 at 21:22, Kurt Young  wrote:
>>>>>> 
>>>>>>> Congratulations Hequn!
>>>>>>> 
>>>>>>> Best,
>>>>>>> Kurt
>>>>>>> 
>>>>>>> 
>>>>>>> On Fri, Apr 17, 2020 at 8:57 PM Till Rohrmann <
>>> trohrm...@apache.org>
>>>>>>> wrote:
>>>>>>> 
>>>>>>>> Congratulations Hequn!
>>>>>>>> 
>>>>>>>> Cheers,
>>>>>>>> Till
>>>>>>>> 
>>>>>>>> On Fri, Apr 17, 2020 at 2:49 PM Shuo Cheng >> 
>>>>> wrote:
>>>>>>>> 
>>>>>>>>> Congratulations, Hequn
>>>>>>>>> 
>>>>>>>>> Best,
>>>>>>>>> Shuo
>>>>>>>>> 
>>>>>>>>> On 4/17/20, hufeih...@mails.ucas.ac.cn <
>>>> hufeih...@mails.ucas.ac.cn
>>>>>> 
>>>>>>>> wrote:
>>>>>>>>>> Congratulations , Hequn
>>>>>>>>>> 
>>>>>>>>>> Best wish
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> hufeih...@mails.ucas.ac.cn
>>>>>>>>>> Congratulations, Hequn!
>>>>>>>>>> 
>>>>>>>>>> Paul Lam  于2020年4月17日周五 下午3:02写道:
>>>>>>>>>> 
>>>>>>>>>>> Congrats Hequn! Thanks a lot for your contribution to the
>>>>>> community!
>>>>>>>>>>> 
>>>>>>>>>>> Best,
>>>>>>>>>>> Paul Lam
>>>>>>>>>>> 
>>>>>>>>>>> Dian Fu  于2020年4月17日周五 下午2:58写道:
>>>>>>>>>>> 
>>>>>>>>>>>> Congratulations, Hequn!
>>>>>>>>>>>> 
>>>>>>>>>>>>> 在 2020年4月17日,下午2:36,Becket Qin 
>>> 写道:
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Hi all,
>>>>>>>>>>>>> 
>>>>>>>>>>>>> I am glad to announce that Hequn Chen has joined the
>>> Flink
>>>>>> PMC.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Hequn has contributed to Flink for years. He has
>> worked
>>> on
>>>>>>> several
>>>>>>>>>>>>> components including Table / SQL,PyFlink and Flink ML
>>>>>> Pipeline.
>>>>>>>>>>> Besides,
>>>>>>>>>>>>> Hequn is also very active in the community since the
>>>>>> beginning.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Congratulations, Hequn! Looking forward to your future
>>>>>>>>> contributions.
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>> 
>>>>>>>>>>>>> Jiangjie (Becket) Qin
>>>>>>>>>>>>> (On behalf of the Apache Flink PMC)
>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> --
>>>>>>>>>> Best Regards
>>>>>>>>>> 
>>>>>>>>>> Jeff Zhang
>>>>>>>>>> 
>>>>>>>>> 
>>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>> 
>>>> 
>>> 
>>> 
>>> 
>> 
> 
> 
> -- 
> Best, Jingsong Lee



[jira] [Created] (FLINK-17152) FunctionDefinitionUtil generates wrong resultType and acc type of AggregateFunctionDefinition

2020-04-15 Thread Terry Wang (Jira)
Terry Wang created FLINK-17152:
--

 Summary: FunctionDefinitionUtil generates wrong resultType and acc 
type of AggregateFunctionDefinition
 Key: FLINK-17152
 URL: https://issues.apache.org/jira/browse/FLINK-17152
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Runtime
Affects Versions: 1.10.0
Reporter: Terry Wang


FunctionDefinitionUtil generates a wrong resultType and acc type for 
AggregateFunctionDefinition. This bug will lead to unexpected errors such as:
Field types of query result and registered TableSink do not match.
Query schema: [v: RAW(IAccumulator, ?)]
Sink schema: [v: STRING]





--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] FLIP-84: Improve & Refactor API of TableEnvironment & Table

2020-04-03 Thread Terry Wang
+1 (non-binding)
Looks great to me. Thanks for driving this.

Best,
Terry Wang



> 2020年4月3日 21:07,godfrey he  写道:
> 
> Hi everyone,
> 
> I'd like to start the vote of FLIP-84[1] again, which is discussed and
> reached consensus in the discussion thread[2].
> 
> The vote will be open for at least 72 hours. Unless there is an objection,
> I will try to close it by Apr 6, 2020 13:10 UTC if we have received
> sufficient votes.
> 
> 
> [1]
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=134745878
> 
> [2]
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-84-Feedback-Summary-td39261.html
> 
> 
> Bests,
> Godfrey
> 
> godfrey he  于2020年3月31日周二 下午8:42写道:
> 
>> Hi, Timo
>> 
>> So sorry about that, I'm in a little hurry. Let's wait for 24h.
>> 
>> Best,
>> Godfrey
>> 
>> Timo Walther  于2020年3月31日周二 下午5:26写道:
>> 
>>> -1
>>> 
>>> The current discussion has not completed. The last comments were sent
>>> less than 24h ago.
>>> 
>>> Let's wait a bit longer to collect feedback from all stakeholders.
>>> 
>>> Thanks,
>>> Timo
>>> 
>>> On 31.03.20 08:31, godfrey he wrote:
>>>> Hi everyone,
>>>> 
>>>> I'd like to start the vote of FLIP-84[1] again, because we have some
>>>> feedbacks. The feedbacks are all about new introduced methods, here is
>>> the
>>>> discussion thread [2].
>>>> 
>>>> The vote will be open for at least 72 hours. Unless there is an
>>> objection,
>>>> I will try to close it by Apr 3, 2020 06:30 UTC if we have received
>>>> sufficient votes.
>>>> 
>>>> 
>>>> [1]
>>>> 
>>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=134745878
>>>> 
>>>> [2]
>>>> 
>>> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-84-Feedback-Summary-td39261.html
>>>> 
>>>> 
>>>> Bests,
>>>> Godfrey
>>>> 
>>> 
>>> 



[jira] [Created] (FLINK-16924) TableEnvironment#sqlUpdate throw NPE when called in async thread

2020-04-01 Thread Terry Wang (Jira)
Terry Wang created FLINK-16924:
--

 Summary: TableEnvironment#sqlUpdate throw NPE when called in async 
thread
 Key: FLINK-16924
 URL: https://issues.apache.org/jira/browse/FLINK-16924
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Affects Versions: 1.11.0
Reporter: Terry Wang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16506) Sql

2020-03-09 Thread Terry Wang (Jira)
Terry Wang created FLINK-16506:
--

 Summary: Sql
 Key: FLINK-16506
 URL: https://issues.apache.org/jira/browse/FLINK-16506
 Project: Flink
  Issue Type: Bug
Reporter: Terry Wang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-16414) create udaf/udtf function using sql causing ValidationException: SQL validation failed. null

2020-03-03 Thread Terry Wang (Jira)
Terry Wang created FLINK-16414:
--

 Summary: create udaf/udtf function using sql causing 
ValidationException: SQL validation failed. null
 Key: FLINK-16414
 URL: https://issues.apache.org/jira/browse/FLINK-16414
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Affects Versions: 1.10.0
Reporter: Terry Wang


When using TableEnvironment.sqlUpdate() to create a UDAF or UDTF, creation 
succeeds even if the function doesn't override the getResultType() method. But 
when this function is then used in an insert SQL statement, an exception like 
the following is thrown:

Exception in thread "main" org.apache.flink.table.api.ValidationException: SQL 
validation failed. null
at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:130)
at 
org.apache.flink.table.planner.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:105)
at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:127)
at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlInsert(SqlToOperationConverter.java:342)
at 
org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:142)
at 
org.apache.flink.table.planner.delegation.ParserImpl.parse(ParserImpl.java:66)
at 
org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlUpdate(TableEnvironmentImpl.java:484)

The reason is that in FunctionDefinitionUtil#createFunctionDefinition we 
shouldn't directly call t.getResultType() or a.getAccumulatorType()/a.getResultType(), 
but should use UserDefinedFunctionHelper#getReturnTypeOfTableFunction, 
UserDefinedFunctionHelper#getAccumulatorTypeOfAggregateFunction and 
UserDefinedFunctionHelper#getReturnTypeOfAggregateFunction instead. The current 
(problematic) code:
```

if (udf instanceof ScalarFunction) {
    return new ScalarFunctionDefinition(
        name,
        (ScalarFunction) udf
    );
} else if (udf instanceof TableFunction) {
    TableFunction t = (TableFunction) udf;
    return new TableFunctionDefinition(
        name,
        t,
        t.getResultType()        // problem: null when not overridden
    );
} else if (udf instanceof AggregateFunction) {
    AggregateFunction a = (AggregateFunction) udf;

    return new AggregateFunctionDefinition(
        name,
        a,
        a.getAccumulatorType(),  // problem: null when not overridden
        a.getResultType()        // problem: null when not overridden
    );
} else if (udf instanceof TableAggregateFunction) {
    TableAggregateFunction a = (TableAggregateFunction) udf;

    return new TableAggregateFunctionDefinition(
        name,
        a,
        a.getAccumulatorType(),  // problem: null when not overridden
        a.getResultType()        // problem: null when not overridden
    );
}
```
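
A sketch of the corrected table/aggregate branches (assuming the UserDefinedFunctionHelper methods named above):

```
} else if (udf instanceof TableFunction) {
    TableFunction t = (TableFunction) udf;
    return new TableFunctionDefinition(
        name,
        t,
        // Falls back to reflective type extraction instead of null
        // when getResultType() is not overridden.
        UserDefinedFunctionHelper.getReturnTypeOfTableFunction(t)
    );
} else if (udf instanceof AggregateFunction) {
    AggregateFunction a = (AggregateFunction) udf;
    return new AggregateFunctionDefinition(
        name,
        a,
        UserDefinedFunctionHelper.getAccumulatorTypeOfAggregateFunction(a),
        UserDefinedFunctionHelper.getReturnTypeOfAggregateFunction(a)
    );
}
```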






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [ANNOUNCE] Jingsong Lee becomes a Flink committer

2020-03-01 Thread Terry Wang
Congratulations, well deserved!

Best,
Terry Wang



> 2020年2月27日 10:50,Yuan Mei  写道:
> 
> Congrats!
> 
> Best
> Yuan
> 
> On Thu, Feb 27, 2020 at 8:48 AM Guowei Ma  wrote:
> 
>> Congratulations !!
>> Best,
>> Guowei
>> 
>> 
>> Yun Tang  于2020年2月27日周四 上午2:11写道:
>> 
>>> Congratulations and well deserved!
>>> 
>>> 
>>> Best
>>> Yun Tang
>>> 
>>> From: Canbin Zheng 
>>> Sent: Monday, February 24, 2020 16:07
>>> To: dev 
>>> Subject: Re: [ANNOUNCE] Jingsong Lee becomes a Flink committer
>>> 
>>> Congratulations !!
>>> 
>>> Dawid Wysakowicz  于2020年2月24日周一 下午3:55写道:
>>> 
>>>> Congratulations Jingsong!
>>>> 
>>>> Best,
>>>> 
>>>> Dawid
>>>> 
>>>> On 24/02/2020 08:12, zhenya Sun wrote:
>>>>> Congratulations!!!
>>>>> | |
>>>>> zhenya Sun
>>>>> |
>>>>> |
>>>>> toke...@126.com
>>>>> |
>>>>> 签名由网易邮箱大师定制
>>>>> 
>>>>> 
>>>>> On 02/24/2020 14:35,Yu Li wrote:
>>>>> Congratulations Jingsong! Well deserved.
>>>>> 
>>>>> Best Regards,
>>>>> Yu
>>>>> 
>>>>> 
>>>>> On Mon, 24 Feb 2020 at 14:10, Congxian Qiu 
>>>> wrote:
>>>>> 
>>>>> Congratulations Jingsong!
>>>>> 
>>>>> Best,
>>>>> Congxian
>>>>> 
>>>>> 
>>>>> jincheng sun  于2020年2月24日周一 下午1:38写道:
>>>>> 
>>>>> Congratulations Jingsong!
>>>>> 
>>>>> Best,
>>>>> Jincheng
>>>>> 
>>>>> 
>>>>> Zhu Zhu  于2020年2月24日周一 上午11:55写道:
>>>>> 
>>>>> Congratulations Jingsong!
>>>>> 
>>>>> Thanks,
>>>>> Zhu Zhu
>>>>> 
>>>>> Fabian Hueske  于2020年2月22日周六 上午1:30写道:
>>>>> 
>>>>> Congrats Jingsong!
>>>>> 
>>>>> Cheers, Fabian
>>>>> 
>>>>> Am Fr., 21. Feb. 2020 um 17:49 Uhr schrieb Rong Rong <
>>>>> walter...@gmail.com>:
>>>>> 
>>>>> Congratulations Jingsong!!
>>>>> 
>>>>> Cheers,
>>>>> Rong
>>>>> 
>>>>> On Fri, Feb 21, 2020 at 8:45 AM Bowen Li 
>> wrote:
>>>>> 
>>>>> Congrats, Jingsong!
>>>>> 
>>>>> On Fri, Feb 21, 2020 at 7:28 AM Till Rohrmann >>>> 
>>>>> wrote:
>>>>> 
>>>>> Congratulations Jingsong!
>>>>> 
>>>>> Cheers,
>>>>> Till
>>>>> 
>>>>> On Fri, Feb 21, 2020 at 4:03 PM Yun Gao 
>>>>> wrote:
>>>>> 
>>>>> Congratulations Jingsong!
>>>>> 
>>>>> Best,
>>>>> Yun
>>>>> 
>>>>> --
>>>>> From:Jingsong Li 
>>>>> Send Time:2020 Feb. 21 (Fri.) 21:42
>>>>> To:Hequn Cheng 
>>>>> Cc:Yang Wang ; Zhijiang <
>>>>> wangzhijiang...@aliyun.com>; Zhenghua Gao ;
>>>>> godfrey
>>>>> he ; dev ; user <
>>>>> u...@flink.apache.org>
>>>>> Subject:Re: [ANNOUNCE] Jingsong Lee becomes a Flink committer
>>>>> 
>>>>> Thanks everyone~
>>>>> 
>>>>> It's my pleasure to be part of the community. I hope I can make a
>>>>> better
>>>>> contribution in future.
>>>>> 
>>>>> Best,
>>>>> Jingsong Lee
>>>>> 
>>>>> On Fri, Feb 21, 2020 at 2:48 PM Hequn Cheng 
>>>>> wrote:
>>>>> Congratulations Jingsong! Well deserved.
>>>>> 
>>>>> Best,
>>>>> Hequn
>>>>> 
>>>>> On Fri, Feb 21, 2020 at 2:42 PM Yang Wang 
>>>>> wrote:
>>>>> Congratulations!Jingsong. Well deserved.
>>>>> 
>>>>> 
>>>>> Best,
>>>>> Yang
>>>>> 
>>>>> Zhijiang  于2020年2月21日周五 下午1:18写道:
>>>>> Congrats Jingsong! Welcome on board!
>>>>> 
>>>>> Best,
>>>>> Zhijiang
>>>>> 
>>>>> --
>>>>> From:Zhenghua Gao 
>>>>> Send Time:2020 Feb. 21 (Fri.) 12:49
>>>>> To:godfrey he 
>>>>> Cc:dev ; user 
>>>>> Subject:Re: [ANNOUNCE] Jingsong Lee becomes a Flink committer
>>>>> 
>>>>> Congrats Jingsong!
>>>>> 
>>>>> 
>>>>> *Best Regards,*
>>>>> *Zhenghua Gao*
>>>>> 
>>>>> 
>>>>> On Fri, Feb 21, 2020 at 11:59 AM godfrey he 
>>>>> wrote:
>>>>> Congrats Jingsong! Well deserved.
>>>>> 
>>>>> Best,
>>>>> godfrey
>>>>> 
>>>>> Jeff Zhang  于2020年2月21日周五 上午11:49写道:
>>>>> Congratulations!Jingsong. You deserve it
>>>>> 
>>>>> wenlong.lwl  于2020年2月21日周五 上午11:43写道:
>>>>> Congrats Jingsong!
>>>>> 
>>>>> On Fri, 21 Feb 2020 at 11:41, Dian Fu 
>>>>> wrote:
>>>>> 
>>>>> Congrats Jingsong!
>>>>> 
>>>>> 在 2020年2月21日,上午11:39,Jark Wu  写道:
>>>>> 
>>>>> Congratulations Jingsong! Well deserved.
>>>>> 
>>>>> Best,
>>>>> Jark
>>>>> 
>>>>> On Fri, 21 Feb 2020 at 11:32, zoudan  wrote:
>>>>> 
>>>>> Congratulations! Jingsong
>>>>> 
>>>>> 
>>>>> Best,
>>>>> Dan Zou
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> --
>>>>> Best Regards
>>>>> 
>>>>> Jeff Zhang
>>>>> 
>>>>> 
>>>>> 
>>>>> --
>>>>> Best, Jingsong Lee
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>> 
>>>> 
>>> 
>> 



Re: [VOTE] FLIP-93: JDBC catalog and Postgres catalog

2020-03-01 Thread Terry Wang
+1 (non-binding). 
With this feature, we can more easily interact with traditional databases in Flink.

Best,
Terry Wang



> 2020年3月1日 18:33,zoudan  写道:
> 
> +1 (non-binding)
> 
> Best,
> Dan Zou
> 
> 
>> 在 2020年2月28日,02:38,Bowen Li  写道:
>> 
>> Hi all,
>> 
>> I'd like to kick off the vote for FLIP-93 [1] to add JDBC catalog and
>> Postgres catalog.
>> 
>> The vote will last for at least 72 hours, following the consensus voting
>> protocol.
>> 
>> [1]
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-93%3A+JDBC+catalog+and+Postgres+catalog
>> 
>> Discussion thread:
>> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-92-JDBC-catalog-and-Postgres-catalog-td36505.html
> 



Re: [VOTE] FLIP-84: Improve & Refactor API of TableEnvironment

2020-02-27 Thread Terry Wang
I looked through the whole design and it's a big improvement to the usability of 
TableEnvironment's API.

+1 (non-binding)

Best,
Terry Wang



> 2020年2月27日 14:59,godfrey he  写道:
> 
> Hi everyone,
> 
> I'd like to start the vote of FLIP-84[1], which proposes to deprecate some
> old APIs and introduce some new APIs in TableEnvironment. This FLIP is
> discussed and reached consensus in the discussion thread[2].
> 
> The vote will be open for at least 72 hours. Unless there is an objection,
> I will try to close it by Mar 1, 2020 07:00 UTC if we have received
> sufficient votes.
> 
> 
> [1]
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-84%3A+Improve+%26+Refactor+API+of+TableEnvironment
> 
> [2]
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-84-Improve-amp-Refactor-API-of-Table-Module-td34537.html
> 
> 
> Bests,
> Godfrey



Re: [ANNOUNCE] Dian Fu becomes a Flink committer

2020-01-16 Thread Terry Wang
Congratulations! 

Best,
Terry Wang



> 2020年1月17日 14:09,Biao Liu  写道:
> 
> Congrats!
> 
> Thanks,
> Biao /'bɪ.aʊ/
> 
> 
> 
> On Fri, 17 Jan 2020 at 13:43, Rui Li  wrote:
> Congratulations Dian, well deserved!
> 
> On Thu, Jan 16, 2020 at 5:58 PM jincheng sun  wrote:
> Hi everyone,
> 
> I'm very happy to announce that Dian accepted the offer of the Flink PMC to 
> become a committer of the Flink project.
> 
> Dian Fu has been contributing to Flink for many years. Dian Fu played an 
> essential role in PyFlink/CEP/SQL/Table API modules. Dian Fu has contributed 
> several major features, reported and fixed many bugs, spent a lot of time 
> reviewing pull requests and also frequently helping out on the user mailing 
> lists and check/vote the release.
>  
> Please join in me congratulating Dian for becoming a Flink committer !
> 
> Best, 
> Jincheng(on behalf of the Flink PMC)
> 
> 
> -- 
> Best regards!
> Rui Li



[jira] [Created] (FLINK-15552) SQL Client can not correctly create kafka table using --library to indicate a kafka connector directory

2020-01-10 Thread Terry Wang (Jira)
Terry Wang created FLINK-15552:
--

 Summary: SQL Client can not correctly create kafka table using 
--library to indicate a kafka connector directory
 Key: FLINK-15552
 URL: https://issues.apache.org/jira/browse/FLINK-15552
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client, Table SQL / Runtime
Reporter: Terry Wang


How to reproduce:
First, I start a SQL client using `-l` to point to a Kafka connector directory.

`
 bin/sql-client.sh embedded -l /xx/connectors/kafka/

`

Then I create a Kafka table like the following:
`
Flink SQL> CREATE TABLE MyUserTable (
>   content String
> ) WITH (
>   'connector.type' = 'kafka',
>   'connector.version' = 'universal',
>   'connector.topic' = 'test',
>   'connector.properties.zookeeper.connect' = 'localhost:2181',
>   'connector.properties.bootstrap.servers' = 'localhost:9092',
>   'connector.properties.group.id' = 'testGroup',
>   'connector.startup-mode' = 'earliest-offset',
>   'format.type' = 'csv'
>  );
[INFO] Table has been created.
`

Then I select from the just-created table and an exception is thrown:

`
Flink SQL> select * from MyUserTable;
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a 
suitable table factory for 
'org.apache.flink.table.factories.TableSourceFactory' in
the classpath.

Reason: Required context properties mismatch.

The matching candidates:
org.apache.flink.table.sources.CsvBatchTableSourceFactory
Mismatched properties:
'connector.type' expects 'filesystem', but is 'kafka'

The following properties are requested:
connector.properties.bootstrap.servers=localhost:9092
connector.properties.group.id=testGroup
connector.properties.zookeeper.connect=localhost:2181
connector.startup-mode=earliest-offset
connector.topic=test
connector.type=kafka
connector.version=universal
format.type=csv
schema.0.data-type=VARCHAR(2147483647)
schema.0.name=content

The following factories have been considered:
org.apache.flink.table.sources.CsvBatchTableSourceFactory
org.apache.flink.table.sources.CsvAppendTableSourceFactory
`
Potential reason:
We currently use `TableFactoryUtil#findAndCreateTableSource` to convert a 
CatalogTable to a TableSource, but when calling `TableFactoryService.find` we 
don't pass the current classloader to the method, so the default (bootstrap) 
classloader is used, which cannot find our factory.
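
A sketch of a possible fix (assuming the `TableFactoryService.find` overload that accepts a ClassLoader; the call site is abbreviated and illustrative):

{code:java}
// Sketch only: pass the user classloader explicitly instead of relying
// on the default lookup.
ClassLoader userClassLoader = Thread.currentThread().getContextClassLoader();
TableSourceFactory<?> factory = TableFactoryService.find(
        TableSourceFactory.class,
        catalogTable.toProperties(),
        userClassLoader);
{code}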



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15544) Upgrade http-core version to avoid potential DeadLock problem

2020-01-09 Thread Terry Wang (Jira)
Terry Wang created FLINK-15544:
--

 Summary: Upgrade http-core version to avoid potential DeadLock 
problem
 Key: FLINK-15544
 URL: https://issues.apache.org/jira/browse/FLINK-15544
 Project: Flink
  Issue Type: Bug
  Components: Build System
Reporter: Terry Wang


Due to a bug in http-core 4.4.6 (which we currently use), 
https://issues.apache.org/jira/browse/HTTPCORE-446, a deadlock may occur; 
we should upgrade the version.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Set default planner for SQL Client to Blink planner in 1.10 release

2020-01-02 Thread Terry Wang
Since what the Blink planner can do is a superset of what the old Flink planner 
can do, a big +1 from my side for changing the default planner to the Blink planner.
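
For reference, a sketch of the one-line default change being discussed (assuming the execution section of conf/sql-client-defaults.yaml linked as [5] in the quoted mail):

```
execution:
  # was: planner: old
  planner: blink
```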

Best,
Terry Wang



> 2020年1月3日 15:00,Jark Wu  写道:
> 
> Hi everyone,
> 
> In 1.10 release, Flink SQL supports many awesome features and improvements,
> including:
> - support watermark statement and computed column in DDL
> - fully support all data types in Hive
> - Batch SQL performance improvements (TPC-DS 7x than Hive MR)
> - support INSERT OVERWRITE and INSERT PARTITION
> 
> However, all the features and improvements are only avaiable in Blink
> planner, not in Old planner.
> There are also some other features are limited in Blink planner, e.g.
> Dimension Table Join [1],
> TopN [2], Deduplicate [3], streaming aggregates optimization [4], and so on.
> 
> But Old planner is still the default planner in Table API & SQL. It is
> frustrating for users to set
> to blink planner manually when every time start a SQL CLI. And it's
> surprising to see unsupported
> exception if they trying out the new features but not switch planner.
> 
> SQL CLI is a very important entrypoint for trying out new feautures and
> prototyping for users.
> In order to give new planner more exposures, I would like to suggest to set
> default planner
> for SQL Client to Blink planner before 1.10 release.
> 
> The approach is just changing the default SQL CLI yaml configuration[5]. In
> this way, the existing
> environment is still compatible and unaffected.
> 
> Changing the default planner for the whole Table API & SQL is another topic
> and is out of scope of this discussion.
> 
> What do you think?
> 
> Best,
> Jark
> 
> [1]:
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/streaming/joins.html#join-with-a-temporal-table
> [2]:
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/queries.html#top-n
> [3]:
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/queries.html#deduplication
> [4]:
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/tuning/streaming_aggregation_optimization.html
> [5]:
> https://github.com/apache/flink/blob/master/flink-table/flink-sql-client/conf/sql-client-defaults.yaml#L100



[jira] [Created] (FLINK-15429) read hive table null value of timestamp type will throw an npe

2019-12-27 Thread Terry Wang (Jira)
Terry Wang created FLINK-15429:
--

 Summary: read hive table null value of timestamp type will throw 
an npe
 Key: FLINK-15429
 URL: https://issues.apache.org/jira/browse/FLINK-15429
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive
Affects Versions: 1.10.0
Reporter: Terry Wang
 Fix For: 1.10.0


When there is a null value of timestamp type in a Hive table, an exception like 
the following is thrown:


Caused by: org.apache.flink.table.api.TableException: Exception in writeRecord
at 
org.apache.flink.table.filesystem.FileSystemOutputFormat.writeRecord(FileSystemOutputFormat.java:122)
at 
org.apache.flink.streaming.api.functions.sink.OutputFormatSinkFunction.invoke(OutputFormatSinkFunction.java:87)
at 
org.apache.flink.streaming.api.functions.sink.SinkFunction.invoke(SinkFunction.java:52)
at 
org.apache.flink.streaming.api.operators.StreamSink.processElement(StreamSink.java:56)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.pushToOperator(OperatorChain.java:550)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.collect(OperatorChain.java:527)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.collect(OperatorChain.java:487)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:730)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:708)
at SinkConversion$1.processElement(Unknown Source)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.pushToOperator(OperatorChain.java:550)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.collect(OperatorChain.java:527)
at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$ChainingOutput.collect(OperatorChain.java:487)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:730)
at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:708)
at 
org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
at 
org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:93)
at 
org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:100)
at 
org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:63)
at 
org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:196)
Caused by: java.lang.NullPointerException
at 
org.apache.flink.table.catalog.hive.client.HiveShimV100.ensureSupportedFlinkTimestamp(HiveShimV100.java:386)
at 
org.apache.flink.table.catalog.hive.client.HiveShimV100.toHiveTimestamp(HiveShimV100.java:357)
at 
org.apache.flink.table.functions.hive.conversion.HiveInspectors.lambda$getConversion$b054b59b$1(HiveInspectors.java:216)
at 
org.apache.flink.table.functions.hive.conversion.HiveInspectors.lambda$getConversion$7f882244$1(HiveInspectors.java:172)
at 
org.apache.flink.connectors.hive.HiveOutputFormatFactory$HiveOutputFormat.getConvertedRow(HiveOutputFormatFactory.java:190)
at 
org.apache.flink.connectors.hive.HiveOutputFormatFactory$HiveOutputFormat.writeRecord(HiveOutputFormatFactory.java:206)
at 
org.apache.flink.connectors.hive.HiveOutputFormatFactory$HiveOutputFormat.writeRecord(HiveOutputFormatFactory.java:178)
at 
org.apache.flink.table.filesystem.SingleDirectoryWriter.write(SingleDirectoryWriter.java:52)
at 
org.apache.flink.table.filesystem.FileSystemOutputFormat.writeRecord(FileSystemOutputFormat.java:120)
... 19 more



We should add a null check in HiveShimV100#ensureSupportedFlinkTimestamp and 
return a proper value.
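
A sketch of the guard (assuming the toHiveTimestamp conversion path shown in the trace; the signature and the doConvert helper are illustrative, not the actual code):

{code:java}
public Object toHiveTimestamp(Object flinkTimestamp) {
    // Sketch only: propagate SQL NULL instead of throwing an NPE.
    if (flinkTimestamp == null) {
        return null;
    }
    ensureSupportedFlinkTimestamp(flinkTimestamp);
    return doConvert(flinkTimestamp); // existing conversion logic (name illustrative)
}
{code}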



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15398) Correct catalog doc example mistake

2019-12-25 Thread Terry Wang (Jira)
Terry Wang created FLINK-15398:
--

 Summary: Correct catalog doc example mistake
 Key: FLINK-15398
 URL: https://issues.apache.org/jira/browse/FLINK-15398
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.10.0
Reporter: Terry Wang


https://ci.apache.org/projects/flink/flink-docs-master/dev/table/catalogs.html#how-to-create-and-register-flink-tables-to-catalog
Currently we don't support `show tables` through the TableEnvironment.sqlQuery() 
method; we should correct the doc.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] have separate Flink distributions with built-in Hive dependencies

2019-12-13 Thread Terry Wang
Hi Bowen~

Thanks for driving this. I tried using the SQL client with the Hive connector 
about two weeks ago; from my experience it's painful to set up the environment.
+1 for this proposal.

Best,
Terry Wang



> 2019年12月13日 16:44,Bowen Li  写道:
> 
> Hi all,
> 
> I want to propose to have a couple separate Flink distributions with Hive
> dependencies on specific Hive versions (2.3.4 and 1.2.1). The distributions
> will be provided to users on Flink download page [1].
> 
> A few reasons to do this:
> 
> 1) Flink-Hive integration is important to many many Flink and Hive users in
> two dimensions:
> a) for Flink metadata: HiveCatalog is the only persistent catalog to
> manage Flink tables. With Flink 1.10 supporting more DDL, the persistent
> catalog would be playing even more critical role in users' workflow
> b) for Flink data: Hive data connector (source/sink) helps both Flink
> and Hive users to unlock new use cases in streaming, near-realtime/realtime
> data warehouse, backfill, etc.
> 
> 2) currently users have to go thru a *really* tedious process to get
> started, because it requires lots of extra jars (see [2]) that are absent
> in Flink's lean distribution. We've had so many users from public mailing
> list, private email, DingTalk groups who got frustrated on spending lots of
> time figuring out the jars themselves. They would rather have a more "right
> out of box" quickstart experience, and play with the catalog and
> source/sink without hassle.
> 
> 3) it's easier for users to replace those Hive dependencies for their own
> Hive versions - just replace those jars with the right versions and no need
> to find the doc.
> 
> * Hive 2.3.4 and 1.2.1 are two versions that represent lots of user base
> out there, and that's why we are using them as examples for dependencies in
> [1] even though we've supported almost all Hive versions [3] now.
> 
> I want to hear what the community think about this, and how to achieve it
> if we believe that's the way to go.
> 
> Cheers,
> Bowen
> 
> [1] https://flink.apache.org/downloads.html
> [2]
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/hive/#dependencies
> [3]
> https://ci.apache.org/projects/flink/flink-docs-master/dev/table/hive/#supported-hive-versions



[jira] [Created] (FLINK-15242) Add doc to introduce ddls or dmls supported by sql cli

2019-12-13 Thread Terry Wang (Jira)
Terry Wang created FLINK-15242:
--

 Summary: Add doc to introduce ddls or dmls supported by sql cli
 Key: FLINK-15242
 URL: https://issues.apache.org/jira/browse/FLINK-15242
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation
Affects Versions: 1.10.0
Reporter: Terry Wang
 Fix For: 1.10.0


Currently, the SQL client documentation
https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sqlClient.html
doesn't have a section that introduces the supported DDLs/DMLs as a whole. We 
should complete it before the 1.10 release.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15148) Add doc for create/drop/alter database ddl

2019-12-09 Thread Terry Wang (Jira)
Terry Wang created FLINK-15148:
--

 Summary: Add doc for create/drop/alter database ddl
 Key: FLINK-15148
 URL: https://issues.apache.org/jira/browse/FLINK-15148
 Project: Flink
  Issue Type: Sub-task
Reporter: Terry Wang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15147) Add doc for alter table set properties and rename table

2019-12-09 Thread Terry Wang (Jira)
Terry Wang created FLINK-15147:
--

 Summary: Add doc for alter table set properties and rename table
 Key: FLINK-15147
 URL: https://issues.apache.org/jira/browse/FLINK-15147
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation
Reporter: Terry Wang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15114) Add execute result info for alter/create/drop database in sql client.

2019-12-06 Thread Terry Wang (Jira)
Terry Wang created FLINK-15114:
--

 Summary: Add execute result info for alter/create/drop database in 
sql client.
 Key: FLINK-15114
 URL: https://issues.apache.org/jira/browse/FLINK-15114
 Project: Flink
  Issue Type: Bug
Reporter: Terry Wang
 Fix For: 1.10.0


Add execute result info for alter/create/drop database in sql-client



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15061) create/alter table/databases properties should be case sensitive stored in catalog

2019-12-04 Thread Terry Wang (Jira)
Terry Wang created FLINK-15061:
--

 Summary: create/alter table/databases properties should be case 
sensitive stored in catalog
 Key: FLINK-15061
 URL: https://issues.apache.org/jira/browse/FLINK-15061
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API
Reporter: Terry Wang
 Fix For: 1.10.0


Currently, in the class `SqlToOperationConverter`, the create-table logic 
converts all property keys to lowercase, which causes the properties stored in 
the catalog to lose their original casing and makes them unintuitive for users.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-15005) Change CatalogTableStats.UNKNOW and HiveStatsUtil stats default value

2019-12-01 Thread Terry Wang (Jira)
Terry Wang created FLINK-15005:
--

 Summary: Change CatalogTableStats.UNKNOW and HiveStatsUtil stats 
default value
 Key: FLINK-15005
 URL: https://issues.apache.org/jira/browse/FLINK-15005
 Project: Flink
  Issue Type: Bug
Reporter: Terry Wang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14965) CatalogTableStatistics UNKNOWN should be consistent with TableStats UNKNOWN

2019-11-26 Thread Terry Wang (Jira)
Terry Wang created FLINK-14965:
--

 Summary: CatalogTableStatistics UNKNOWN should be consistent with 
TableStats UNKNOWN
 Key: FLINK-14965
 URL: https://issues.apache.org/jira/browse/FLINK-14965
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.10.0
Reporter: Terry Wang


The UNKNOWN stats in org.apache.flink.table.catalog.stats:

public class CatalogTableStatistics {
    public static final CatalogTableStatistics UNKNOWN =
            new CatalogTableStatistics(0, 0, 0, 0);
}

and in org.apache.flink.table.plan.stats:

public final class TableStats {
    public static final TableStats UNKNOWN =
            new TableStats(-1, new HashMap<>());
}

are not consistent, which will cause some unexpected CBO (cost-based optimization)
behavior.





--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14932) Support table related DDLs that needs return value in TableEnvironment

2019-11-22 Thread Terry Wang (Jira)
Terry Wang created FLINK-14932:
--

 Summary: Support table related DDLs that needs return value in 
TableEnvironment
 Key: FLINK-14932
 URL: https://issues.apache.org/jira/browse/FLINK-14932
 Project: Flink
  Issue Type: Sub-task
Reporter: Terry Wang


1. showTablesStatement:
SHOW TABLES

2. descTableStatement:
DESCRIBE [ EXTENDED ] [[ catalogName.] databaseName.] tableName



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14879) Support database related DDLs that needs return value in TableEnvironment

2019-11-20 Thread Terry Wang (Jira)
Terry Wang created FLINK-14879:
--

 Summary: Support database related DDLs that needs return value in 
TableEnvironment
 Key: FLINK-14879
 URL: https://issues.apache.org/jira/browse/FLINK-14879
 Project: Flink
  Issue Type: Sub-task
Reporter: Terry Wang


1. showDatabasesStatement:
SHOW DATABASES

2. descDatabaseStatement:
DESCRIBE DATABASE [ EXTENDED ] [ catalogName.] databaseName

The above statements should be supported in TableEnvironment after FLIP-84 is completed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14878) Support `use catalog` through sqlUpdate() method in TableEnvironment

2019-11-20 Thread Terry Wang (Jira)
Terry Wang created FLINK-14878:
--

 Summary: Support `use  catalog` through sqlUpdate() method in 
TableEnvironment
 Key: FLINK-14878
 URL: https://issues.apache.org/jira/browse/FLINK-14878
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API
Reporter: Terry Wang


Support `USE CATALOG catalogName` through the `sqlUpdate()` method of
TableEnvironment.
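A minimal usage sketch of what this could look like once supported (the catalog
name is made up):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class UseCatalogDemo {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().build());

        // assumes a catalog named "myCatalog" has been registered beforehand;
        // subsequent statements then resolve names against it
        tEnv.sqlUpdate("USE CATALOG myCatalog");
    }
}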



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] FLIP-86: Improve Connector Properties

2019-11-20 Thread Terry Wang
+1 (non-binding)

Best,
Terry Wang



> On Nov 20, 2019 at 17:47, Dawid Wysakowicz  wrote:
> 
> +1 from my side
> 
> Best,
> 
> Dawid
> 
> On 20/11/2019 10:36, Jark Wu wrote:
>> Hi everyone,
>> 
>> I would like to start a vote on FLIP-86. The discussion seems to have
>> reached an agreement.
>> 
>> Please vote for the following design document:
>> 
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-86%3A+Improve+Connector+Properties
>> 
>> The discussion can be found at:
>> 
>> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-86-Improve-Connector-Properties-td34922.html
>> 
>> This voting will be open for at least 72 hours.
>> 
>> Best,
>> Jark
>> 
> 



[jira] [Created] (FLINK-14721) HiveTableSource should implement LimitableTableSource interface

2019-11-12 Thread Terry Wang (Jira)
Terry Wang created FLINK-14721:
--

 Summary: HiveTableSource should implement LimitableTableSource 
interface
 Key: FLINK-14721
 URL: https://issues.apache.org/jira/browse/FLINK-14721
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Hive
Reporter: Terry Wang


Now HiveTableSource doesn't implement LimitableTableSource, which causes a huge
waste of resources and time in queries like `select * from hiveSourceTable
limit 10`.
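For reference, a rough sketch of what implementing the interface could look like
(simplified, with a hypothetical copyWithLimit() helper; not the actual
HiveTableSource code):

import org.apache.flink.table.sources.LimitableTableSource;
import org.apache.flink.table.sources.TableSource;
import org.apache.flink.types.Row;

public abstract class LimitableSourceSketch implements LimitableTableSource<Row> {

    // -1 means no limit has been pushed down yet
    private final long limit;

    protected LimitableSourceSketch(long limit) {
        this.limit = limit;
    }

    @Override
    public boolean isLimitPushedDown() {
        return limit >= 0;
    }

    @Override
    public TableSource<Row> applyLimit(long limit) {
        // return a copy of the source that stops reading after `limit` rows,
        // so `select * from t limit 10` only scans what it needs
        return copyWithLimit(limit);
    }

    // hypothetical helper: creates a copy of this source with the limit applied
    protected abstract TableSource<Row> copyWithLimit(long limit);
}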



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14692) Support Table related DDLs in TableEnvironment

2019-11-09 Thread Terry Wang (Jira)
Terry Wang created FLINK-14692:
--

 Summary: Support Table related DDLs in TableEnvironment
 Key: FLINK-14692
 URL: https://issues.apache.org/jira/browse/FLINK-14692
 Project: Flink
  Issue Type: Sub-task
Reporter: Terry Wang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14691) Support database related DDLs in TableEnvironment

2019-11-09 Thread Terry Wang (Jira)
Terry Wang created FLINK-14691:
--

 Summary: Support database related DDLs in TableEnvironment
 Key: FLINK-14691
 URL: https://issues.apache.org/jira/browse/FLINK-14691
 Project: Flink
  Issue Type: Sub-task
Reporter: Terry Wang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14690) Support catalog related DDLs in TableEnvironment

2019-11-09 Thread Terry Wang (Jira)
Terry Wang created FLINK-14690:
--

 Summary: Support catalog related DDLs in TableEnvironment
 Key: FLINK-14690
 URL: https://issues.apache.org/jira/browse/FLINK-14690
 Project: Flink
  Issue Type: Sub-task
Reporter: Terry Wang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14689) Add catalog related DDLs support in SQL Parser

2019-11-09 Thread Terry Wang (Jira)
Terry Wang created FLINK-14689:
--

 Summary: Add catalog related DDLs support in SQL Parser
 Key: FLINK-14689
 URL: https://issues.apache.org/jira/browse/FLINK-14689
 Project: Flink
  Issue Type: Sub-task
Reporter: Terry Wang


1. showCatalogsStatement
SHOW CATALOGS

2. describeCatalogStatement
DESCRIBE CATALOG catalogName

3. useCatalogStatement
USE CATALOG catalogName 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14688) Add table related

2019-11-09 Thread Terry Wang (Jira)
Terry Wang created FLINK-14688:
--

 Summary: Add table related
 Key: FLINK-14688
 URL: https://issues.apache.org/jira/browse/FLINK-14688
 Project: Flink
  Issue Type: Sub-task
Reporter: Terry Wang






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14687) Add database related ddl support to FLINK-SQL-Parser

2019-11-09 Thread Terry Wang (Jira)
Terry Wang created FLINK-14687:
--

 Summary: Add database related ddl support to FLINK-SQL-Parser
 Key: FLINK-14687
 URL: https://issues.apache.org/jira/browse/FLINK-14687
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / API, Table SQL / Client
Reporter: Terry Wang


According to FLIP-69, we should introduce the following database-related DDLs in the
SQL parser.

1. createDatabaseStatement:
CREATE DATABASE [ IF NOT EXISTS ] [ catalogName.] databaseName
[ COMMENT database_comment ]
[ WITH ( name=value [, name=value]*) ]

2. dropDatabaseStatement:
DROP DATABASE [ IF EXISTS ] [ catalogName.] databaseName
[ (RESTRICT|CASCADE) ]

3. alterDatabaseStatement:
ALTER DATABASE [ catalogName.] databaseName SET
( name=value [, name=value]*)

4. useDatabaseStatement:
USE [ catalogName.] databaseName

5. showDatabasesStatement:
SHOW DATABASES

6. descDatabaseStatement:
DESCRIBE DATABASE [ EXTENDED ] [ catalogName.] databaseName
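For illustration, a sketch that instantiates the grammar above (catalog/database
names and properties are made up; routing these statements through `sqlUpdate()`
is the subject of FLINK-14691):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class DatabaseDdlDemo {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().build());

        // create, alter, switch to, and drop a database (illustrative names)
        tEnv.sqlUpdate("CREATE DATABASE IF NOT EXISTS myCatalog.myDb "
                + "COMMENT 'demo database' WITH ('k1'='v1')");
        tEnv.sqlUpdate("ALTER DATABASE myCatalog.myDb SET ('k1'='v2')");
        tEnv.sqlUpdate("USE myCatalog.myDb");
        tEnv.sqlUpdate("DROP DATABASE IF EXISTS myCatalog.myDb RESTRICT");
    }
}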



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-14686) Flink SQL DDL Enhancement

2019-11-09 Thread Terry Wang (Jira)
Terry Wang created FLINK-14686:
--

 Summary: Flink SQL DDL Enhancement
 Key: FLINK-14686
 URL: https://issues.apache.org/jira/browse/FLINK-14686
 Project: Flink
  Issue Type: New Feature
  Components: Table SQL / API, Table SQL / Client
Affects Versions: 1.10.0
Reporter: Terry Wang


We would like to achieve the following goals in this FLIP-69.

 - Add Catalog DDL enhancement support
 - Add Database DDL enhancement support
 - Add Table DDL enhancement support

This is the parent Jira for subtasks.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[RESULT][VOTE] FLIP-69: Flink SQL DDL Enhancement

2019-11-09 Thread Terry Wang
Hi all,

The voting time for FLIP-69 has passed. I'm closing the vote now.

There were seven +1 votes, 3 of which are binding:
- Bowen Li (binding)
- Jark Wu (binding)
- Kurt Young (binding)

- Xuefu Z (non-binding)
- Peter Huang (non-binding)
- Jingsong Li (non-binding)
- Danny Chan (non-binding)

There were no disapproving votes.

Thus, FLIP-69 has been accepted.

Thanks everyone for joining the discussion and giving feedback!
Best,
Terry Wang



> On Nov 9, 2019 at 23:49, Jark Wu  wrote:
> 
> +1 from my side. They are useful features.
> 
> Best,
> Jark
> 
> On Fri, 8 Nov 2019 at 16:42, Danny Chan  wrote:
> 
>> +1(non-binding), nice job, Terry ~
>> 
>> Best,
>> Danny Chan
>> On Nov 5, 2019 at 10:32 PM +0800, dev@flink.apache.org wrote:
>>> 
>>> +1 to the long missing feature in Flink SQL.
>> 



Re: [ANNOUNCE] Jark Wu is now part of the Flink PMC

2019-11-08 Thread Terry Wang
Well deserved!
Congratulations, Jark!

Best,
Terry Wang



> On Nov 8, 2019 at 17:54, Dian Fu  wrote:
> 
> Hi Jark,
> 
> Congrats. Well deserved!
> 
> Regards,
> Dian
> 
>> On Nov 8, 2019 at 5:51 PM, jincheng sun  wrote:
>> 
>> Hi all,
>> 
>> On behalf of the Flink PMC, I'm happy to announce that Jark Wu is now
>> part of the Apache Flink Project Management Committee (PMC).
>> 
>> Jark has been a committer since February 2017. He has been very active on
>> Flink's Table API / SQL component, as well as frequently helping
>> manage/verify/vote releases. He has been writing many blogs about Flink,
>> also driving the translation work of Flink website and documentation. He is
>> very active in China community as he gives talks about Flink at many events
>> in China.
>> 
>> Congratulations & Welcome Jark!
>> 
>> Best,
>> Jincheng (on behalf of the Flink PMC)
> 



Re: [VOTE] FLIP-79: Flink Function DDL Support (1.10 Release Feature Only)

2019-11-08 Thread Terry Wang
Thanks Peter for driving this. LGTM for the 1.10 release feature.

+1 from my side. (non-binding)

Best,
Terry Wang



> On Nov 8, 2019 at 13:20, Peter Huang  wrote:
> 
> Dear All,
> 
> I would like to start the vote for 1.10 release features in FLIP-79 [1]
> which is discussed and research consensus in the discussion thread [2]. For
> the advanced feature, such as loading function from remote resources,
> support scala/python function, we will have the further discussion after
> release 1.10.
> 
> The vote will be open for at least 72 hours. If the voting passes, I will
> close it by 2019-11-10 14:00 UTC.
> 
> [1]
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-79+Flink+Function+DDL+Support
> [2]
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Discussion-FLIP-79-Flink-Function-DDL-Support-td33965.html
> 
> Best Regards
> Peter Huang



Re: [DISCUSS] FLIP 69 - Flink SQL DDL Enhancement

2019-11-07 Thread Terry Wang
Hi, Kurt~

Thanks for your vote and for pointing out some deficiencies of this FLIP. I'll try to
avoid making similar mistakes.

Best,
Terry Wang



> On Nov 8, 2019 at 11:28, Kurt Young  wrote:
> 
> Hi,
> 
> Sorry to join this so late and thanks for proposing this FLIP. After
> going through the proposal details, I would +1 for the changes.
> 
> However, the FLIP name is kind of confusing me. It says will do
> DDL enhancement, and picked up a few new features to do. It looks
> to me the goal and content of this FLIP is kind of random.
> 
> Each topic of this FLIP touched is super big, e.g. to enhance
> alter table command. According to SQL 2011 standard, it would contains
> at least so many features like:
> 
> <alter table statement> ::=
>   ALTER TABLE <table name> <alter table action>
> <alter table action> ::=
>   <add column definition>
>   | <alter column definition>
>   | <drop column definition>
>   | <add table constraint definition>
>   | <alter table constraint definition>
>   | <drop table constraint definition>
>   | <add table period definition>
>   | <drop table period definition>
>   | <add system versioning clause>
>   | <drop system versioning clause>
> 
> I'm not suggesting to do all these at once, but I also didn't see any
> future plan or goals in the FLIP to describe the full picture here. We just
> picked up some random chosen features to start.
> 
> But still I'm +1 to this FLIP since they are all good enhancements.
> 
> Best,
> Kurt
> 
> 
> On Tue, Nov 5, 2019 at 10:32 PM Terry Wang  wrote:
> 
>> Hi Bowen~
>> 
>> We don’t intend to support create/drop catalog  syntax in this flip, we
>> may support it if there indeed has a strong desire.
>> And I’m going to kick off a vote for this flip, feel free to review again.
>> 
>> Best,
>> Terry Wang
>> 
>> 
>> 
>>> On Sep 26, 2019 at 00:44, Xuefu Z  wrote:
>>> 
>>> Actually catalogs are more of system settings than of user objects that a
>>> user might create or drop constantly. Thus, it's probably sufficient to
>> set
>>> up catalog information in the config file, at least for now.
>>> 
>>> Thanks,
>>> Xuefu
>>> 
>>> On Tue, Sep 24, 2019 at 7:10 PM Terry Wang  wrote:
>>> 
>>>> Thanks Bowen for your insightful comments, I’ll think twice and do
>>>> corresponding improvement.
>>>> After finished, I’ll update in this mailing thread again.
>>>> Best,
>>>> Terry Wang
>>>> 
>>>> 
>>>> 
>>>>> On Sep 25, 2019 at 8:28 AM, Bowen Li  wrote:
>>>>> 
>>>>> BTW, will there be a "CREATE/DROP CATALOG" DDL?
>>>>> 
>>>>> Though it's not SQL standard, I can see it'll be useful and handy for
>>>> our end users in many cases.
>>>>> 
>>>>> On Mon, Sep 23, 2019 at 12:28 PM Bowen Li  wrote:
>>>>> Hi Terry,
>>>>> 
>>>>> Thanks for driving the effort! I left some comments in the doc.
>>>>> 
>>>>> AFAIU, the biggest motivation is to support DDLs in sql parser so that
>>>> both Table API and SQL CLI can share the stack, despite that SQL CLI has
>>>> already supported some commands itself. However, I don't see details on
>> how
>>>> SQL CLI would migrate and depend on sql parser, and how Table API and
>> SQL
>>>> CLI would actually share SQL parser. I'm not sure yet how much work that
>>>> will take, just want to double check that you didn't include them
>> because
>>>> they are very trivial according to your estimate?
>>>>> 
>>>>> 
>>>>> On Mon, Sep 16, 2019 at 1:46 AM Terry Wang  wrote:
>>>>> Hi everyone,
>>>>> 
>>>>> In flink 1.9, we have introduced some awesome features such as complete
>>>> catalog support[1] and sql ddl support[2]. These features have been a
>>>> critical integration for Flink to be able to manage data and metadata
>> like
>>>> a classic RDBMS and make developers more easy to construct their
>>>> real-time/off-line 

Re: [VOTE] FLIP-59: Enable execution configuration from Configuration object

2019-11-07 Thread Terry Wang
Thanks for driving this.
+1 from my side (non-binding) 
Best,
Terry Wang



> On Nov 7, 2019 at 17:34, Dawid Wysakowicz  wrote:
> 
> Thank you tison. You are right. I did not update the hyperlinks. Sorry
> for that. Once again then:
> 
> please vote for FLIP-59
> https://cwiki.apache.org/confluence/display/FLINK/FLIP-59%3A+Enable+execution+configuration+from+Configuration+object.
> 
> 
> The discussion thread can be found here
> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-59-Enable-execution-configuration-from-Configuration-object-td32359.html
> 
> This vote will be open for at least 72 hours and requires consensus to
> be accepted.
> 
> Best, Dawid
> 
> On 07/11/2019 10:29, tison wrote:
>> Hi Dawid,
>> 
>> I'm afraid that you list the wrong FLIP page. Although the content is
>> FLIP-59 but it directs to FLIP-67.
>> 
>> Best,
>> tison.
>> 
>> 
>> On Thu, Nov 7, 2019 at 5:04 PM, Dawid Wysakowicz  wrote:
>> 
>>> Hello,
>>> 
>>> please vote for FLIP-59
>>> <https://cwiki.apache.org/confluence/display/FLINK/FLIP-59%3A+Enable+execution+configuration+from+Configuration+object>
>>> <https://cwiki.apache.org/confluence/display/FLINK/FLIP-67%3A+Cluster+partitions+lifecycle>.
>>> 
>>> 
>>> The discussion thread can be found here:
>>> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-59-Enable-execution-configuration-from-Configuration-object-td32359.html
>>> 
>>> 
>>> This vote will be open for at least 72 hours and requires consensus to be
>>> accepted.
>>> 
>>> Best,
>>> Dawid
>>> 
> 



Re: [VOTE] FLIP-69: Flink SQL DDL Enhancement

2019-11-07 Thread Terry Wang
Hi Rui~
What you suggested makes sense; I removed the description and detailed description
from `DESCRIBE DATABASE`.
Open to more comments and votes :)

Best,
Terry Wang



> On Nov 7, 2019 at 17:15, Rui Li  wrote:
> 
> I see, thanks for the clarification. In current implementation, it seems
> just a duplicate of comment. So I'd prefer not to display it for DESCRIBE
> DATABASE, because 1) users have no control over the content and 2) it's
> totally redundant. We can add it in the future when we come up with
> something more meaningful. What do you think?
> 
> On Thu, Nov 7, 2019 at 3:54 PM Terry Wang  wrote:
> 
>> Hi Rui~
>> 
>> Description of the database is obtained from
>> `CatalogDatabase#getDescription()` method, which is implement by
>> CatalogDatebaseImpl. Users don’t need to specify the description.
>> 
>> Best,
>> Terry Wang
>> 
>> 
>> 
>>> On Nov 7, 2019 at 15:40, Rui Li  wrote:
>>> 
>>> Thanks Terry for driving this forward.
>>> Got one question about DESCRIBE DATABASE: the results display comment and
>>> description of a database. While comment can be specified when a database
>>> is created, I don't see how users can specify description of the
>> database?
>>> 
>>> On Thu, Nov 7, 2019 at 4:16 AM Bowen Li  wrote:
>>> 
>>>> Thanks.
>>>> 
>>>> As Terry and I discussed offline yesterday, we added a new section to
>>>> explain the detailed implementation plan.
>>>> 
>>>> +1 (binding) from me.
>>>> 
>>>> Bowen
>>>> 
>>>> On Tue, Nov 5, 2019 at 6:33 PM Terry Wang  wrote:
>>>> 
>>>>> Hi Bowen:
>>>>> Thanks for your feedback.
>>>>> Your opinion convinced me and I just remove the section about catalog
>>>>> create statement and also remove `DBPROPERTIES` `PROPERTIES` from alter
>>>>> DDLs.
>>>>> Open to more comments or votes :) !
>>>>> 
>>>>> Best,
>>>>> Terry Wang
>>>>> 
>>>>> 
>>>>> 
>>>>>> On Nov 6, 2019 at 07:22, Bowen Li  wrote:
>>>>>> 
>>>>>> Hi Terry,
>>>>>> 
>>>>>> I went over the FLIP in detail again. The FLIP mostly LGTM. A couple
>>>>> issues:
>>>>>> 
>>>>>> - since we on't plan to support catalog ddl, can you remove them from
>>>> the
>>>>>> FLIP?
>>>>>> - I found there are some discrepancies in proposed database and table
>>>>> DDLs.
>>>>>> For db ddl, the create db syntax proposes specifying k-v properties
>>>>>> following "WITH". However, alter db ddl comes with a keyword
>>>>> "DBPROPERTIES":
>>>>>> 
>>>>>> CREATE  DATABASE [ IF NOT EXISTS ] [ catalogName.] dataBaseName [
>>>> COMMENT
>>>>>> database_comment ]
>>>>>> [*WITH *( name=value [, name=value]*)]
>>>>>> 
>>>>>> 
>>>>>> ALTER  DATABASE  [ catalogName.] dataBaseName SET *DBPROPERTIES* (
>>>>>> name=value [, name=value]*)
>>>>>> 
>>>>>> 
>>>>>>  IIUIC, are you borrowing syntax from Hive? Note that Hive's db
>>>> create
>>>>>> ddl comes with "DBPROPERTIES" though - "CREATE (DATABASE|SCHEMA) [IF
>>>> NOT
>>>>>> EXISTS] database_name ...  [*WITH DBPROPERTIES* (k=v, ...)];" [1]
>>>>>> 
>>>>>> The same applies to table ddl. The proposed alter table ddl comes
>>>> with
>>>>>> "SET *PROPERTIES* (...)", however, Flink's existing table create ddl
>>>>> since
>>>>>> 1.9 [2] doesn't have "PROPERTIES" keyword. As opposed to Hive's
>> syntax,
>>>>>> both create and alter table ddl comes with "TBLPROPERTIES" [1].
>>>>>> 
>>>>>> I feel it's better to be consistent among our DDLs. One option is to
>>>>>> just remove the "PROPERTIES" and "DBPROPERTIES" keywords in proposed
>>>>> syntax.
>>>>>> 
>>>>>> [1]
>>>> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL
>>>>>> [2]
>>>>>> 
>>>>> 
>>>> 
>> https://ci.apache.org/project

Re: [VOTE] FLIP-69: Flink SQL DDL Enhancement

2019-11-06 Thread Terry Wang
Hi Rui~

The description of a database is obtained from the `CatalogDatabase#getDescription()`
method, which is implemented by CatalogDatabaseImpl. Users don't need to specify
the description.

Best,
Terry Wang



> On Nov 7, 2019 at 15:40, Rui Li  wrote:
> 
> Thanks Terry for driving this forward.
> Got one question about DESCRIBE DATABASE: the results display comment and
> description of a database. While comment can be specified when a database
> is created, I don't see how users can specify description of the database?
> 
> On Thu, Nov 7, 2019 at 4:16 AM Bowen Li  wrote:
> 
>> Thanks.
>> 
>> As Terry and I discussed offline yesterday, we added a new section to
>> explain the detailed implementation plan.
>> 
>> +1 (binding) from me.
>> 
>> Bowen
>> 
>> On Tue, Nov 5, 2019 at 6:33 PM Terry Wang  wrote:
>> 
>>> Hi Bowen:
>>> Thanks for your feedback.
>>> Your opinion convinced me and I just remove the section about catalog
>>> create statement and also remove `DBPROPERTIES` `PROPERTIES` from alter
>>> DDLs.
>>> Open to more comments or votes :) !
>>> 
>>> Best,
>>> Terry Wang
>>> 
>>> 
>>> 
>>>> On Nov 6, 2019 at 07:22, Bowen Li  wrote:
>>>> 
>>>> Hi Terry,
>>>> 
>>>> I went over the FLIP in detail again. The FLIP mostly LGTM. A couple
>>> issues:
>>>> 
>>>> - since we on't plan to support catalog ddl, can you remove them from
>> the
>>>> FLIP?
>>>> - I found there are some discrepancies in proposed database and table
>>> DDLs.
>>>> For db ddl, the create db syntax proposes specifying k-v properties
>>>> following "WITH". However, alter db ddl comes with a keyword
>>> "DBPROPERTIES":
>>>> 
>>>> CREATE  DATABASE [ IF NOT EXISTS ] [ catalogName.] dataBaseName [
>> COMMENT
>>>> database_comment ]
>>>> [*WITH *( name=value [, name=value]*)]
>>>> 
>>>> 
>>>> ALTER  DATABASE  [ catalogName.] dataBaseName SET *DBPROPERTIES* (
>>>> name=value [, name=value]*)
>>>> 
>>>> 
>>>>   IIUIC, are you borrowing syntax from Hive? Note that Hive's db
>> create
>>>> ddl comes with "DBPROPERTIES" though - "CREATE (DATABASE|SCHEMA) [IF
>> NOT
>>>> EXISTS] database_name ...  [*WITH DBPROPERTIES* (k=v, ...)];" [1]
>>>> 
>>>>  The same applies to table ddl. The proposed alter table ddl comes
>> with
>>>> "SET *PROPERTIES* (...)", however, Flink's existing table create ddl
>>> since
>>>> 1.9 [2] doesn't have "PROPERTIES" keyword. As opposed to Hive's syntax,
>>>> both create and alter table ddl comes with "TBLPROPERTIES" [1].
>>>> 
>>>>  I feel it's better to be consistent among our DDLs. One option is to
>>>> just remove the "PROPERTIES" and "DBPROPERTIES" keywords in proposed
>>> syntax.
>>>> 
>>>> [1]
>> https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL
>>>> [2]
>>>> 
>>> 
>> https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/sql.html#specifying-a-ddl
>>>> 
>>>> On Tue, Nov 5, 2019 at 12:54 PM Peter Huang <
>> huangzhenqiu0...@gmail.com>
>>>> wrote:
>>>> 
>>>>> +1 for the enhancement.
>>>>> 
>>>>> On Tue, Nov 5, 2019 at 11:04 AM Xuefu Z  wrote:
>>>>> 
>>>>>> +1 to the long missing feature in Flink SQL.
>>>>>> 
>>>>>> On Tue, Nov 5, 2019 at 6:32 AM Terry Wang 
>> wrote:
>>>>>> 
>>>>>>> Hi all,
>>>>>>> 
>>>>>>> I would like to start the vote for FLIP-69[1] which is discussed and
>>>>>>> reached consensus in the discussion thread[2].
>>>>>>> 
>>>>>>> The vote will be open for at least 72 hours. I'll try to close it by
>>>>>>> 2019-11-08 14:30 UTC, unless there is an objection or not enough
>>> votes.
>>>>>>> 
>>>>>>> [1]
>>>>>>> 
>>>>>> 
>>>>> 
>>> 
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP+69+-+Flink+SQL+DDL+Enhancement
>>>>>>> <
>>>>>>> 
>>>>>> 
>>>>> 
>>> 
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP+69+-+Flink+SQL+DDL+Enhancement
>>>>>>>> 
>>>>>>> [2]
>>>>>>> 
>>>>>> 
>>>>> 
>>> 
>> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-69-Flink-SQL-DDL-Enhancement-td33090.html
>>>>>>> <
>>>>>>> 
>>>>>> 
>>>>> 
>>> 
>> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-69-Flink-SQL-DDL-Enhancement-td33090.html
>>>>>>>> 
>>>>>>> Best,
>>>>>>> Terry Wang
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>>> --
>>>>>> Xuefu Zhang
>>>>>> 
>>>>>> "In Honey We Trust!"
>>>>>> 
>>>>> 
>>> 
>>> 
>> 
> 
> 
> -- 
> Best regards!
> Rui Li



Re: [DISCUSS] FLIP-84: Improve & Refactor execute/sqlQuery/sqlUpdate APIS of TableEnvironment

2019-11-06 Thread Terry Wang
Hi Jark,

Thanks for your suggestion!
I changed the title and will wait for more comments.

Best,
Terry Wang



> On Nov 6, 2019 at 15:52, Jark Wu  wrote:
> 
> Hi Terry,
> 
> I would suggest to change the title a bit.
> For example, "Improve & Refactor TableEnvironment APIs".
> Or more specifically, "Improve & Refactor TableEnvironment
> execute/sqlQuery/sqlUpdate.. APIs"
> 
> Currently, the title is a little wide (there are so many APIs in table
> module) .
> Make the title more specifically can attract more people who care about it.
> 
> Best,
> Jark
> 
> 
> 
> On Tue, 5 Nov 2019 at 14:51, Kurt Young  wrote:
> 
>> cc @Fabian here, thought you might be interesting to review this.
>> 
>> Best,
>> Kurt
>> 
>> 
>> On Thu, Oct 31, 2019 at 1:39 PM Kurt Young  wrote:
>> 
>>> Thanks Terry for bringing this up. TableEnv's interface is really
>> critical
>>> not only
>>> to users, but also for components built upon it like SQL CLI. Your
>>> proposal
>>> solved some pain points we currently have, so +1 to the proposal.
>>> 
>>> I left some comments in the document.
>>> 
>>> Best,
>>> Kurt
>>> 
>>> 
>>> On Thu, Oct 31, 2019 at 10:38 AM Terry Wang  wrote:
>>> 
>>>> Hi everyone,
>>>> 
>>>> TableEnvironment has provided two `Table sqlQuery(String sql)` and `void
>>>> sqlUpdate(String sql)` interfaces to create a table(actually a view
>> here)
>>>> or describe an update action from one sql string.
>>>> But with more use cases come up, there are some fatal shortcomings in
>>>> current API design. Such as  `sqlUpdate()` don’t support get a return
>> value
>>>> and buggy support for buffer sql exception and so on.
>>>> 
>>>> So I’d like to kick off a discussion on improvement and refactor the api
>>>> of table module:
>>>> 
>>>> google doc:
>> https://docs.google.com/document/d/19-mdYJjKirh5aXCwq1fDajSaI09BJMMT95wy_YhtuZk/edit?usp=sharing
>>>> Flip link:
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=134745878
>>>> 
>>>> In short, it:
>>>>- Discuss buffering sql execute problem
>>>>- Discuss current `sqlQuery/sqlUpdate` and propose two new api
>>>>- Introduce one new `executeBatch` method to support batch sql
>>>> execute
>>>>- Discuss how SQL CLI should deal with multiple statements
>>>> 
>>>> Looking forward to all your guys comments.
>>>> 
>>>> Best,
>>>> Terry Wang
>>>> 
>>>> 
>>>> 
>>>> 
>> 



Re: [VOTE] FLIP-69: Flink SQL DDL Enhancement

2019-11-05 Thread Terry Wang
Hi Bowen:
Thanks for your feedback. 
Your opinion convinced me, and I just removed the section about the catalog create
statement and also removed `DBPROPERTIES`/`PROPERTIES` from the alter DDLs.
Open to more comments or votes :)!

Best,
Terry Wang



> On Nov 6, 2019 at 07:22, Bowen Li  wrote:
> 
> Hi Terry,
> 
> I went over the FLIP in detail again. The FLIP mostly LGTM. A couple issues:
> 
> - since we on't plan to support catalog ddl, can you remove them from the
> FLIP?
> - I found there are some discrepancies in proposed database and table DDLs.
>  For db ddl, the create db syntax proposes specifying k-v properties
> following "WITH". However, alter db ddl comes with a keyword "DBPROPERTIES":
> 
> CREATE  DATABASE [ IF NOT EXISTS ] [ catalogName.] dataBaseName [ COMMENT
> database_comment ]
> [*WITH *( name=value [, name=value]*)]
> 
> 
> ALTER  DATABASE  [ catalogName.] dataBaseName SET *DBPROPERTIES* (
> name=value [, name=value]*)
> 
> 
>IIUIC, are you borrowing syntax from Hive? Note that Hive's db create
> ddl comes with "DBPROPERTIES" though - "CREATE (DATABASE|SCHEMA) [IF NOT
> EXISTS] database_name ...  [*WITH DBPROPERTIES* (k=v, ...)];" [1]
> 
>   The same applies to table ddl. The proposed alter table ddl comes with
> "SET *PROPERTIES* (...)", however, Flink's existing table create ddl since
> 1.9 [2] doesn't have "PROPERTIES" keyword. As opposed to Hive's syntax,
> both create and alter table ddl comes with "TBLPROPERTIES" [1].
> 
>   I feel it's better to be consistent among our DDLs. One option is to
> just remove the "PROPERTIES" and "DBPROPERTIES" keywords in proposed syntax.
> 
> [1] https://cwiki.apache.org/confluence/display/Hive/LanguageManual+DDL
> [2]
> https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/sql.html#specifying-a-ddl
> 
> On Tue, Nov 5, 2019 at 12:54 PM Peter Huang 
> wrote:
> 
>> +1 for the enhancement.
>> 
>> On Tue, Nov 5, 2019 at 11:04 AM Xuefu Z  wrote:
>> 
>>> +1 to the long missing feature in Flink SQL.
>>> 
>>> On Tue, Nov 5, 2019 at 6:32 AM Terry Wang  wrote:
>>> 
>>>> Hi all,
>>>> 
>>>> I would like to start the vote for FLIP-69[1] which is discussed and
>>>> reached consensus in the discussion thread[2].
>>>> 
>>>> The vote will be open for at least 72 hours. I'll try to close it by
>>>> 2019-11-08 14:30 UTC, unless there is an objection or not enough votes.
>>>> 
>>>> [1]
>>>> 
>>> 
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP+69+-+Flink+SQL+DDL+Enhancement
>>>> <
>>>> 
>>> 
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP+69+-+Flink+SQL+DDL+Enhancement
>>>>> 
>>>> [2]
>>>> 
>>> 
>> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-69-Flink-SQL-DDL-Enhancement-td33090.html
>>>> <
>>>> 
>>> 
>> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-69-Flink-SQL-DDL-Enhancement-td33090.html
>>>>> 
>>>> Best,
>>>> Terry Wang
>>>> 
>>>> 
>>>> 
>>>> 
>>> 
>>> --
>>> Xuefu Zhang
>>> 
>>> "In Honey We Trust!"
>>> 
>> 



[VOTE] FLIP-69: Flink SQL DDL Enhancement

2019-11-05 Thread Terry Wang
Hi all,

I would like to start the vote for FLIP-69 [1], which has been discussed and has
reached consensus in the discussion thread [2].

The vote will be open for at least 72 hours. I'll try to close it by 2019-11-08 
14:30 UTC, unless there is an objection or not enough votes.

[1]
https://cwiki.apache.org/confluence/display/FLINK/FLIP+69+-+Flink+SQL+DDL+Enhancement
[2]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-69-Flink-SQL-DDL-Enhancement-td33090.html
Best,
Terry Wang





Re: [DISCUSS] FLIP 69 - Flink SQL DDL Enhancement

2019-11-05 Thread Terry Wang
Hi Bowen~

We don't intend to support CREATE/DROP CATALOG syntax in this FLIP; we may
support it later if there is indeed a strong desire.
And I'm going to kick off a vote for this FLIP, so feel free to review it again.

Best,
Terry Wang



> On Sep 26, 2019 at 00:44, Xuefu Z  wrote:
> 
> Actually catalogs are more of system settings than of user objects that a
> user might create or drop constantly. Thus, it's probably sufficient to set
> up catalog information in the config file, at least for now.
> 
> Thanks,
> Xuefu
> 
> On Tue, Sep 24, 2019 at 7:10 PM Terry Wang  wrote:
> 
>> Thanks Bowen for your insightful comments, I’ll think twice and do
>> corresponding improvement.
>> After finished, I’ll update in this mailing thread again.
>> Best,
>> Terry Wang
>> 
>> 
>> 
>>> On Sep 25, 2019 at 8:28 AM, Bowen Li  wrote:
>>> 
>>> BTW, will there be a "CREATE/DROP CATALOG" DDL?
>>> 
>>> Though it's not SQL standard, I can see it'll be useful and handy for
>> our end users in many cases.
>>> 
>>> On Mon, Sep 23, 2019 at 12:28 PM Bowen Li  wrote:
>>> Hi Terry,
>>> 
>>> Thanks for driving the effort! I left some comments in the doc.
>>> 
>>> AFAIU, the biggest motivation is to support DDLs in sql parser so that
>> both Table API and SQL CLI can share the stack, despite that SQL CLI has
>> already supported some commands itself. However, I don't see details on how
>> SQL CLI would migrate and depend on sql parser, and how Table API and SQL
>> CLI would actually share SQL parser. I'm not sure yet how much work that
>> will take, just want to double check that you didn't include them because
>> they are very trivial according to your estimate?
>>> 
>>> 
>>> On Mon, Sep 16, 2019 at 1:46 AM Terry Wang  wrote:
>>> Hi everyone,
>>> 
>>> In flink 1.9, we have introduced some awesome features such as complete
>> catalog support[1] and sql ddl support[2]. These features have been a
>> critical integration for Flink to be able to manage data and metadata like
>> a classic RDBMS and make developers more easy to construct their
>> real-time/off-line warehouse or sth similar base on flink.
>>> 
>>> But there is still a lack of support on how Flink SQL DDL to manage
>> metadata and data like classic RDBMS such as `alter table rename` and so on.
>>> 
>>> So I’d like to kick off a discussion on enhancing Flink Sql Ddls:
>>> 
>> https://docs.google.com/document/d/1mhZmx1h2ecfL0x8OzYD1n-nVRn4yE7pwk4jGed4k7kc/edit?usp=sharing
>>> 
>>> In short, it:
>>>- Add Catalog DDL enhancement support:  show catalogs / describe
>> catalog / use catalog
>>>- Add Database DDL enhancement support:  show databses / create
>> database / drop database/ alter database
>>>- Add Table DDL enhancement support:show tables/ describe
>> table / alter table
>>>- Add Function DDL enhancement support: show functions/ create
>> function /drop function
>>> 
>>> Looking forward to your opinions.
>>> 
>>> Best,
>>> Terry Wang
>>> 
>>> 
>>> 
>>> [1]: https://issues.apache.org/jira/browse/FLINK-11275

Re: [Discussion] FLIP-79 Flink Function DDL Support

2019-10-31 Thread Terry Wang
Hi Peter,

I’d like to share some thoughts from mysids:
1. What's the syntax to distinguish function language?
    +1 for using `[LANGUAGE JVM|PYTHON] USING JAR`
2. How to persist function language in the backend catalog?
    +1 for a separate field in CatalogFunction. But as to a specific
backend, we may persist it case by case. A special case is how HiveCatalog
stores the kind of CatalogFunction.
3. Do we really need to allow users to set a properties map for a UDF?
    There are use cases requiring passing external arguments to a UDF for sure,
but the need can also be met by passing arguments to `eval` when calling the UDF in
SQL.
    IMO, there is not much need to support setting a properties map for a UDF.

4. Should a catalog implementation be able to decide whether it can take a
properties map, and which language of a UDF it can persist?
IMO, it's necessary for a catalog implementation to provide such information. But
for the Flink 1.10 MVP goal, we can just skip this part.
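For concreteness, a sketch of how a statement in the option-2 syntax might be
submitted once implemented (function name, class, and jar paths are made up; the
DDL string follows the FLIP-79 proposal and is not yet parsed by Flink):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class FunctionDdlDemo {
    public static void main(String[] args) {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().build());

        // proposed FLIP-79 syntax, for illustration only
        tEnv.sqlUpdate("CREATE FUNCTION myDb.wordSplit AS 'com.example.udf.WordSplit' "
                + "LANGUAGE JVM "
                + "USING JAR 'hdfs:///udf/word-split.jar', JAR 'hdfs:///udf/deps.jar'");
    }
}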



Best,
Terry Wang



> On Oct 30, 2019 at 13:52, Peter Huang  wrote:
> 
> Hi Bowen,
> 
> I can't agree more about we first have an agreement on the DDL syntax and
> focus on the MVP in the current phase.
> 
> 1) what's the syntax to distinguish function language
> Currently, there are two opinions:
> 
>   - USING 'python .'
>   - [LANGUAGE JVM|PYTHON] USING JAR '...'
> 
> As we need to support multiple resources as HQL, we shouldn't repeat the
> language symbol as a suffix of each resource.
> I would prefer option two, but definitely open to more comments.
> 
> 2) How to persist function language in backend catalog? as a k-v pair in
> properties map, or a dedicate field?
> Even though language type is also a property, I think a separate field in
> CatalogFunction is a more clean solution.
> 
> 3) do we really need to allow users set a properties map for udf? what needs
> to be stored there? what are they used for?
> 
> I am considering a type of use case that use UDFS for realtime inference.
> The model is nested in the udf as a resource. But there are
> multiple parameters are customizable. In this way, user can use properties
> to define those parameters.
> 
> I only have answers to these questions. For questions about the catalog
> implementation, I hope we can collect more feedback from the community.
> 
> 
> Best Regards
> Peter Huang
> 
> 
> 
> 
> 
> Best Regards
> Peter Huang
> 
> On Tue, Oct 29, 2019 at 11:31 AM Bowen Li  wrote:
> 
>> Hi all,
>> 
>> Besides all the good questions raised above, we seem all agree to have a
>> MVP for Flink 1.10, "to support users to create and persist a java
>> class-based udf that's already in classpath (no extra resource loading),
>> and use it later in queries".
>> 
>> IIUIC, to achieve that in 1.10, the following are currently the core
>> issues/blockers we should figure out, and solve them as our **highest
>> priority**:
>> 
>> - what's the syntax to distinguish function language (java, scala, python,
>> etc)? we only need to implement the java one in 1.10 but have to settle
>> down the long term solution
>> - how to persist function language in backend catalog? as a k-v pair in
>> properties map, or a dedicate field?
>> - do we really need to allow users set a properties map for udf? what needs
>> to be stored there? what are they used for?
>> - should a catalog impl be able to decide whether it can take a properties
>> map (if we decide to have one), and which language of a udf it can persist?
>>   - E.g. Hive metastore, which backs Flink's HiveCatalog, cannot take a
>> properties map and is only able to persist java udf [1], unless we do
>> something hacky to it
>> 
>> I feel these questions are essential to Flink functions in the long run,
>> but most importantly, are also the minimum scope for Flink 1.10. Aspects
>> like resource loading security or compatibility with Hive syntax are
>> important too, however if we focus on them now, we may not be able to get
>> the MVP out in time.
>> 
>> [1]
>> -
>> 
>> https://hive.apache.org/javadocs/r3.1.2/api/org/apache/hadoop/hive/metastore/api/Function.html
>> -
>> 
>> https://hive.apache.org/javadocs/r3.1.2/api/org/apache/hadoop/hive/metastore/api/FunctionType.html
>> 
>> 
>> 
>> On Sun, Oct 27, 2019 at 8:22 PM Peter Huang 
>> wrote:
>> 
>>> Hi Timo,
>>> 
>>> Thanks for the feedback. I replied and adjust the design accordingly. For
>>> the concern of class loading.
>>> I think we need to distinguish the function class loading for Temporary
>> and

Re: [DISCUSS] FLIP 69 - Flink SQL DDL Enhancement

2019-10-30 Thread Terry Wang
Hi, everyone~

Sorry to be so late in replying to this thread again.
I have been working on FLIP-84 recently to make the SQL API support return values,
which this FLIP depends on.
I think it's time to pick up this discussion again, and there are some new
updates in this FLIP's design:
https://docs.google.com/document/d/1mhZmx1h2ecfL0x8OzYD1n-nVRn4yE7pwk4jGed4k7kc/edit?usp=sharing

1. Removed the function DDL section, since Peter Huang is working on FLIP-79 to
converge function DDL support in a more holistic way
2. Removed the proposed TableEnvironment SQL API changes, which are covered in
FLIP-84
3. Updated the design doc according to review comments.

Looking forward to receiving more comments ~

Best,
Terry Wang



> On Sep 26, 2019 at 00:44, Xuefu Z  wrote:
> 
> Actually catalogs are more of system settings than of user objects that a
> user might create or drop constantly. Thus, it's probably sufficient to set
> up catalog information in the config file, at least for now.
> 
> Thanks,
> Xuefu
> 
> On Tue, Sep 24, 2019 at 7:10 PM Terry Wang  wrote:
> 
>> Thanks Bowen for your insightful comments, I’ll think twice and do
>> corresponding improvement.
>> After finished, I’ll update in this mailing thread again.
>> Best,
>> Terry Wang
>> 
>> 
>> 
>>> On Sep 25, 2019 at 8:28 AM, Bowen Li  wrote:
>>> 
>>> BTW, will there be a "CREATE/DROP CATALOG" DDL?
>>> 
>>> Though it's not SQL standard, I can see it'll be useful and handy for
>> our end users in many cases.
>>> 
>>> On Mon, Sep 23, 2019 at 12:28 PM Bowen Li  wrote:
>>> Hi Terry,
>>> 
>>> Thanks for driving the effort! I left some comments in the doc.
>>> 
>>> AFAIU, the biggest motivation is to support DDLs in sql parser so that
>> both Table API and SQL CLI can share the stack, despite that SQL CLI has
>> already supported some commands itself. However, I don't see details on how
>> SQL CLI would migrate and depend on sql parser, and how Table API and SQL
>> CLI would actually share SQL parser. I'm not sure yet how much work that
>> will take, just want to double check that you didn't include them because
>> they are very trivial according to your estimate?
>>> 
>>> 
>>> On Mon, Sep 16, 2019 at 1:46 AM Terry Wang  wrote:
>>> Hi everyone,
>>> 
>>> In flink 1.9, we have introduced some awesome features such as complete
>> catalog support[1] and sql ddl support[2]. These features have been a
>> critical integration for Flink to be able to manage data and metadata like
>> a classic RDBMS and make developers more easy to construct their
>> real-time/off-line warehouse or sth similar base on flink.
>>> 
>>> But there is still a lack of support on how Flink SQL DDL to manage
>> metadata and data like classic RDBMS such as `alter table rename` and so on.
>>> 
>>> So I’d like to kick off a discussion on enhancing Flink Sql Ddls:
>>> 
>> https://docs.google.com/document/d/1mhZmx1h2ecfL0x8OzYD1n-nVRn4yE7pwk4jGed4k7kc/edit?usp=sharing
>>> 
>>> In short, it:
>>>- Add Catalog DDL enhancement support:  show catalogs / describe
>> catalog / use catalog
>>>- Add Database DDL enhancement support:  show databses / create
>> database / drop database/ alter database
>>>- Add Table DDL enhancement support:show tables/ describe
>> table / alter table
>>>- Add Function DDL enhancement support: show fu

[DISCUSS] FLIP-84: Improve & Refactor API of Table Module

2019-10-30 Thread Terry Wang
Hi everyone,

TableEnvironment has provided two interfaces, `Table sqlQuery(String sql)` and
`void sqlUpdate(String sql)`, to create a table (actually a view here) or describe an
update action from a SQL string.
But as more use cases come up, some fatal shortcomings of the current API design have
surfaced, such as `sqlUpdate()` not supporting a return value, buggy support for
buffered SQL execution, and so on.
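To make the pain points concrete, here is a small sketch against today's API (the
table names are illustrative and assumed to be registered already):

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class CurrentApiPainPoints {
    public static void main(String[] args) throws Exception {
        TableEnvironment tEnv =
                TableEnvironment.create(EnvironmentSettings.newInstance().build());

        // fine: a query hands back a Table to keep working with
        Table result = tEnv.sqlQuery("SELECT a, b FROM src WHERE a > 10");

        // not fine: sqlUpdate() returns void, so a statement whose result the
        // caller needs has no way to return it, and the INSERT below is only
        // buffered until execute() is called, so failures surface late
        tEnv.sqlUpdate("INSERT INTO snk SELECT a, b FROM src");
        tEnv.execute("buffered job"); // the buffered INSERT actually runs here
    }
}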

So I'd like to kick off a discussion on improving and refactoring the API of the
table module:

google doc:
https://docs.google.com/document/d/19-mdYJjKirh5aXCwq1fDajSaI09BJMMT95wy_YhtuZk/edit?usp=sharing
Flip link:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=134745878

In short, it:
- Discusses the buffered SQL execution problem
- Discusses the current `sqlQuery/sqlUpdate` and proposes two new APIs
- Introduces one new `executeBatch` method to support batch SQL execution
- Discusses how the SQL CLI should deal with multiple statements

Looking forward to all your comments.

Best,
Terry Wang





Re: [ANNOUNCE] Becket Qin joins the Flink PMC

2019-10-28 Thread Terry Wang
Congratulations, Becket!

Best,
Terry Wang



> On Oct 29, 2019 at 10:12, OpenInx  wrote:
> 
> Congratulations Becket!
> 
> On Tue, Oct 29, 2019 at 10:06 AM Zili Chen  wrote:
> 
>> Congratulations Becket!
>> 
>> Best,
>> tison.
>> 
>> 
>>> On Tue, Oct 29, 2019 at 9:53 AM, Congxian Qiu  wrote:
>> 
>>> Congratulations Becket!
>>> 
>>> Best,
>>> Congxian
>>> 
>>> 
>>> On Tue, Oct 29, 2019 at 9:42 AM, Wei Zhong  wrote:
>>> 
>>>> Congratulations Becket!
>>>> 
>>>> Best,
>>>> Wei
>>>> 
>>>>> On Oct 29, 2019 at 09:36, Paul Lam  wrote:
>>>>> 
>>>>> Congrats Becket!
>>>>> 
>>>>> Best,
>>>>> Paul Lam
>>>>> 
>>>>>> On Oct 29, 2019 at 02:18, Xingcan Cui  wrote:
>>>>>> 
>>>>>> Congratulations, Becket!
>>>>>> 
>>>>>> Best,
>>>>>> Xingcan
>>>>>> 
>>>>>>> On Oct 28, 2019, at 1:23 PM, Xuefu Z  wrote:
>>>>>>> 
>>>>>>> Congratulations, Becket!
>>>>>>> 
>>>>>>> On Mon, Oct 28, 2019 at 10:08 AM Zhu Zhu 
>> wrote:
>>>>>>> 
>>>>>>>> Congratulations Becket!
>>>>>>>> 
>>>>>>>> Thanks,
>>>>>>>> Zhu Zhu
>>>>>>>> 
>>>>>>>>> On Tue, Oct 29, 2019 at 1:01 AM, Peter Huang  wrote:
>>>>>>>> 
>>>>>>>>> Congratulations Becket Qin!
>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> Best Regards
>>>>>>>>> Peter Huang
>>>>>>>>> 
>>>>>>>>> On Mon, Oct 28, 2019 at 9:19 AM Rong Rong 
>>>> wrote:
>>>>>>>>> 
>>>>>>>>>> Congratulations Becket!!
>>>>>>>>>> 
>>>>>>>>>> --
>>>>>>>>>> Rong
>>>>>>>>>> 
>>>>>>>>>> On Mon, Oct 28, 2019, 7:53 AM Jark Wu  wrote:
>>>>>>>>>> 
>>>>>>>>>>> Congratulations Becket!
>>>>>>>>>>> 
>>>>>>>>>>> Best,
>>>>>>>>>>> Jark
>>>>>>>>>>> 
>>>>>>>>>>> On Mon, 28 Oct 2019 at 20:26, Benchao Li 
>>>>>>>> wrote:
>>>>>>>>>>> 
>>>>>>>>>>>> Congratulations Becket.
>>>>>>>>>>>> 
>>>>>>>>>>>> Dian Fu  于2019年10月28日周一 下午7:22写道:
>>>>>>>>>>>> 
>>>>>>>>>>>>> Congrats, Becket.
>>>>>>>>>>>>> 
>>>>>>>>>>>>>> 在 2019年10月28日,下午6:07,Fabian Hueske  写道:
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Hi everyone,
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> I'm happy to announce that Becket Qin has joined the Flink
>>> PMC.
>>>>>>>>>>>>>> Let's congratulate and welcome Becket as a new member of the
>>>>>>>>> Flink
>>>>>>>>>>> PMC!
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Cheers,
>>>>>>>>>>>>>> Fabian
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>> 
>>>>>>>>>>>> --
>>>>>>>>>>>> 
>>>>>>>>>>>> Benchao Li
>>>>>>>>>>>> School of Electronics Engineering and Computer Science, Peking
>>>>>>>>>> University
>>>>>>>>>>>> Tel:+86-15650713730
>>>>>>>>>>>> Email: libenc...@gmail.com; libenc...@pku.edu.cn
>>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>> 
>>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> --
>>>>>>> Xuefu Zhang
>>>>>>> 
>>>>>>> "In Honey We Trust!"
>>>>>> 
>>>>> 
>>>> 
>>>> 
>>> 
>> 



Re: [VOTE] FLIP-70: Flink SQL Computed Column Design

2019-10-28 Thread Terry Wang
+1 (non-binding)

Best,
Terry Wang



> On Oct 28, 2019 at 15:57, Jingsong Li  wrote:
> 
> +1 (non-binding)
> 
> Best,
> Jingsong Lee
> 
> On Mon, Oct 28, 2019 at 2:48 PM Jark Wu  wrote:
> 
>> Thanks for driving this Danny,
>> 
>> +1 (binding)
>> 
>> Best,
>> Jark
>> 
>> 
>> On Mon, 28 Oct 2019 at 14:26, Danny Chan  wrote:
>> 
>>> Hi all,
>>> 
>>> I would like to start the vote for FLIP-70[1] which is discussed and
>>> reached consensus in the discussion thread[2].
>>> 
>>> The vote will be open for at least 72 hours. I'll try to close it by
>>> 2019-10-31 18:00 UTC, unless there is an objection or not enough votes.
>>> 
>>> [1]
>>> 
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-70%3A+Flink+SQL+Computed+Column+Design
>>> [2]
>>> 
>> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-70-Support-Computed-Column-for-Flink-SQL-td33126.html
>>> 
>>> Best,
>>> Danny Chan
>>> 
>> 
> 
> 
> -- 
> Best, Jingsong Lee



Re: [Discussion] FLIP-79 Flink Function DDL Support

2019-10-23 Thread Terry Wang
Hi Peter,

Sorry for the late reply. Thanks for your efforts on this; I just looked through
your design.
I left some comments in the doc about the alter function section and the function
catalog interface.
IMO, the overall design is OK, and we can discuss some of the details further.
I also think it's necessary to have this awesome feature, even if limited to basic
functions (of course it would be better to have it all :)), in the 1.10 release.

Best,
Terry Wang



> On Oct 16, 2019 at 14:19, Peter Huang  wrote:
> 
> Hi Xuefu,
> 
> Thank you for the feedback. I think you are pointing out a similar concern
> with Bowen. Let me describe
> how the catalog function and function factory will be changed in the
> implementation section.
> Then, we can have more discussion in detail.
> 
> 
> Best Regards
> Peter Huang
> 
> On Tue, Oct 15, 2019 at 4:18 PM Xuefu Z  wrote:
> 
>> Thanks to Peter for the proposal!
>> 
>> I left some comments in the google doc. Besides what Bowen pointed out, I'm
>> unclear about how things  work end to end from the document. For instance,
>> SQL DDL-like function definition is mentioned. I guess just having a DDL
>> for it doesn't explain how it's supported functionally. I think it's better
>> to have some clarification on what is expected work and what's for the
>> future.
>> 
>> Thanks,
>> Xuefu
>> 
>> 
>> On Tue, Oct 15, 2019 at 11:05 AM Bowen Li  wrote:
>> 
>>> Hi Zhenqiu,
>>> 
>>> Thanks for taking on this effort!
>>> 
>>> A couple questions:
>>> - Though this FLIP is about function DDL, can we also think about how the
>>> created functions can be mapped to CatalogFunction and see if we need to
>>> modify CatalogFunction interface? Syntax changes need to be backed by the
>>> backend.
>>> - Can we define a clearer, smaller scope targeting for Flink 1.10 among
>> all
>>> the proposed changes? The current overall scope seems to be quite wide,
>> and
>>> it may be unrealistic to get everything in a single release, or even a
>>> couple. However, I believe the most common user story can be something as
>>> simple as "being able to create and persist a java class-based udf and
>> use
>>> it later in queries", which will add great value for most Flink users and
>>> is achievable in 1.10.
>>> 
>>> Bowen
>>> 
>>>> On Sun, Oct 13, 2019 at 10:46 PM Peter Huang  wrote:
>>> 
>>>> Dear Community,
>>>> 
>>>> FLIP-79 Flink Function DDL Support
>>>> <
>>>> 
>>> 
>> https://docs.google.com/document/d/16kkHlis80s61ifnIahCj-0IEdy5NJ1z-vGEJd_JuLog/edit#
>>>>> 
>>>> 
>>>> This proposal aims to support function DDL with the consideration of
>> SQL
>>>> syntax, language compliance, and advanced external UDF lib
>> registration.
>>>> The Flink DDL is initialized and discussed in the design
>>>> <
>>>> 
>>> 
>> https://docs.google.com/document/d/1TTP-GCC8wSsibJaSUyFZ_5NBAHYEB1FVmPpP7RgDGBA/edit#heading=h.wpsqidkaaoil
>>>>> 
>>>> [1] by Shuyi Chen and Timo. As the initial discussion mainly focused on
>>> the
>>>> table, type and view. FLIP-69 [2] extend it with a more detailed
>>> discussion
>>>> of DDL for catalog, database, and function. Original the function DDL
>> was
>>>> under the scope of FLIP-69. After some discussion
>>>> <https://issues.apache.org/jira/browse/FLINK-7151> with the community,
>>> we
>>>> found that there are several ongoing efforts, such as FLIP-64 [3],
>>> FLIP-65
>>>> [4], and FLIP-78 [5]. As they will directly impact the SQL syntax of
>>>> function DDL, the proposal wants to describe the problem clearly with
>> the
>>>> consideration of existing works and make sure the design aligns with
>>>> efforts of API change of temporary objects and type inference for UDF
>>>> defined by different languages.
>>>> 
>>>> The FlLIP outlines the requirements from related works, and propose a
>> SQL
>>>> syntax to meet those requirements. The corresponding implementation is
>>> also
>>>> discussed. Please kindly review and give feedback.
>>>> 
>>>> 
>>>> Best Regards
>>>> Peter Huang
>>>> 
>>> 
>> 
>> 
>> --
>> Xuefu Zhang
>> 
>> "In Honey We Trust!"
>> 



Re: [VOTE] Accept Stateful Functions into Apache Flink

2019-10-23 Thread Terry Wang
+1 (non-binding)

Best,
Terry Wang



> On Oct 24, 2019 at 10:31, Jingsong Li  wrote:
> 
> +1 (non-binding)
> 
> Best,
> Jingsong Lee
> 
> On Wed, Oct 23, 2019 at 9:02 PM Yu Li  wrote:
> 
>> +1 (non-binding)
>> 
>> Best Regards,
>> Yu
>> 
>> 
>> On Wed, 23 Oct 2019 at 16:56, Haibo Sun  wrote:
>> 
>>> +1 (non-binding)
>>> Best,
>>> Haibo
>>> 
>>> 
>>> At 2019-10-23 09:07:41, "Becket Qin"  wrote:
>>>> +1 (binding)
>>>> 
>>>> Thanks,
>>>> 
>>>> Jiangjie (Becket) Qin
>>>> 
>>>> On Tue, Oct 22, 2019 at 11:44 PM Tzu-Li (Gordon) Tai <
>> tzuli...@apache.org
>>>> 
>>>> wrote:
>>>> 
>>>>> +1 (binding)
>>>>> 
>>>>> Gordon
>>>>> 
>>>>>> On Tue, Oct 22, 2019, 10:58 PM Zhijiang wrote:
>>>>> 
>>>>>> +1 (non-binding)
>>>>>> 
>>>>>> Best,
>>>>>> Zhijiang
>>>>>> 
>>>>>> 
>>>>>> --
>>>>>> From:Zhu Zhu 
>>>>>> Send Time:2019 Oct. 22 (Tue.) 16:33
>>>>>> To:dev 
>>>>>> Subject:Re: [VOTE] Accept Stateful Functions into Apache Flink
>>>>>> 
>>>>>> +1 (non-binding)
>>>>>> 
>>>>>> Thanks,
>>>>>> Zhu Zhu
>>>>>> 
>>>>>> On Tue, Oct 22, 2019 at 11:06 AM, Biao Liu  wrote:
>>>>>> 
>>>>>>> +1 (non-binding)
>>>>>>> 
>>>>>>> Thanks,
>>>>>>> Biao /'bɪ.aʊ/
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> On Tue, 22 Oct 2019 at 10:26, Jark Wu  wrote:
>>>>>>> 
>>>>>>>> +1 (non-binding)
>>>>>>>> 
>>>>>>>> Best,
>>>>>>>> Jark
>>>>>>>> 
>>>>>>>> On Tue, 22 Oct 2019 at 09:38, Hequn Cheng  wrote:
>>>>>>>> 
>>>>>>>>> +1 (non-binding)
>>>>>>>>> 
>>>>>>>>> Best, Hequn
>>>>>>>>> 
>>>>>>>>> On Tue, Oct 22, 2019 at 9:21 AM Dian Fu <
>> dian0511...@gmail.com>
>>>>>> wrote:
>>>>>>>>> 
>>>>>>>>>> +1 (non-binding)
>>>>>>>>>> 
>>>>>>>>>> Regards,
>>>>>>>>>> Dian
>>>>>>>>>> 
>>>>>>>>>>> On Oct 22, 2019 at 9:10 AM, Kurt Young  wrote:
>>>>>>>>>>> 
>>>>>>>>>>> +1 (binding)
>>>>>>>>>>> 
>>>>>>>>>>> Best,
>>>>>>>>>>> Kurt
>>>>>>>>>>> 
>>>>>>>>>>> 
>>>>>>>>>>> On Tue, Oct 22, 2019 at 12:56 AM Fabian Hueske <
>>>>>> fhue...@gmail.com>
>>>>>>>>>> wrote:
>>>>>>>>>>> 
>>>>>>>>>>>> +1 (binding)
>>>>>>>>>>>> 
>>>>>>>>>>>> Am Mo., 21. Okt. 2019 um 16:18 Uhr schrieb Thomas Weise <
>>>>>>>>> t...@apache.org
>>>>>>>>>>> :
>>>>>>>>>>>> 
>>>>>>>>>>>>> +1 (binding)
>>>>>>>>>>>>> 
>>>>>>>>>>>>> 
>>>>>>>>>>>>> On Mon, Oct 21, 2019 at 7:10 AM Timo Walther <
>>>>>> twal...@apache.org
>>>>>>>> 
>>>>>>>>>> wrote:
>>>>>>>>>>>>> 
>>>>>>>>>>>>>> +1 (binding)
>>>>>>>>>>>>>> 
>>>>>>>>>>>>>> Thanks,
>>>>>>>>>>>>>> Timo
>>>>>>>>>>>>>> 
>>>>>>>>>

Re: [DISCUSS] Rename the SQL ANY type to OPAQUE type

2019-10-21 Thread Terry Wang
“OPAQUE” seems a little strange to me.
+1 for ‘RAW’.

Best,
Terry Wang



> On Oct 22, 2019 at 09:19, Kurt Young  wrote:
> 
> +1 to RAW, if there's no better candidate comes up.
> 
> Best,
> Kurt
> 
> 
> On Mon, Oct 21, 2019 at 9:25 PM Timo Walther  wrote:
> 
>> I would also avoid `UNKNOWN` because of the mentioned reasons.
>> 
>> I'm fine with `RAW`. I will wait another day or two until I conclude the
>> discussion.
>> 
>> Thanks,
>> Timo
>> 
>> 
>> On 21.10.19 12:23, Jark Wu wrote:
>>> I also think `UNKNOWN` is not suitable here.
>>> Because we already have `UNKNOWN` value in SQL, i.e. the three-valued
>> logic
>>> (TRUE, FALSE, UNKNOWN) of BOOLEAN type.
>>> It will confuse users here, what's the relationship between them.
>>> 
>>> Best,
>>> Jark
>>> 
>>> On Mon, 21 Oct 2019 at 17:53, Paul Lam  wrote:
>>> 
>>>> Hi,
>>>> 
>>>> IMHO, `UNKNOWN` does not fully reflects the situation here, because the
>>>> types are
>>>> actually “known” to users, and users just want to leave them out of
>> Flink
>>>> type system.
>>>> 
>>>> +1 for `RAW`, for it's more intuitive than `OPAQUE`.
>>>> 
>>>> Best,
>>>> Paul Lam
>>>> 
>>>>> 在 2019年10月21日,16:43,Kurt Young  写道:
>>>>> 
>>>>> OPAQUE seems to be a little bit advanced for a lot of non-English
>>>>> speakers (including me). I think Xuefu raised a good alternative:
>>>>> UNKNOWN. What do you think about it?
>>>>> 
>>>>> Best,
>>>>> Kurt
>>>>> 
>>>>> 
>>>>> On Mon, Oct 21, 2019 at 3:49 PM Aljoscha Krettek 
>>>>> wrote:
>>>>> 
>>>>>> I prefer OPAQUE compared to ANY because any is often the root object
>> in
>>>> an
>>>>>> object hierarchy and would indicate to users the wrong thing.
>>>>>> 
>>>>>> Aljoscha
>>>>>> 
>>>>>>> On 18. Oct 2019, at 18:41, Xuefu Z  wrote:
>>>>>>> 
>>>>>>> Thanks to Timo for bringing up an interesting topic.
>>>>>>> 
>>>>>>> Personally, "OPAQUE" doesn't seem very intuitive with respect to types.
>>>>>>> (It suits pretty well for glasses, though. :)) Anyway, could we just use
>>>>>>> "UNKNOWN", which is more explicit and truly reflects its nature?
>>>>>>> 
>>>>>>> Thanks,
>>>>>>> Xuefu
>>>>>>> 
>>>>>>> 
>>>>>>> On Fri, Oct 18, 2019 at 7:51 AM Timo Walther 
>>>> wrote:
>>>>>>>> Hi everyone,
>>>>>>>> 
>>>>>>>> Stephan pointed out that our naming of a generic/blackbox/opaque
>> type
>>>> in
>>>>>>>> SQL might be not intuitive for users. As the term ANY rather
>>>> describes a
>>>>>>>> "super-class of all types" which is not the case in our type system.
>>>> Our
>>>>>>>> current ANY type stands for a type that is just a blackbox within
>> SQL,
>>>>>>>> serialized by some custom serializer, that can only be modified
>> within
>>>>>>>> UDFs.
>>>>>>>> 
>>>>>>>> I also gathered feedback from a training instructor and native
>> English
>>>>>>>> speaker (David in CC) where I received the following:
>>>>>>>> 
>>>>>>>> "The way I’m thinking about this is this: there’s a concept here
>> that
>>>>>>>> people have to become aware of, which is that Flink SQL is able to
>>>>>>>> operate generically on opaquely typed things — and folks need to be
>>>> able
>>>>>>>> to connect what they see in code examples, etc. with this concept
>>>> (which
>>>>>>>> they may be unaware of initially).
>>>>>>>> I feel like ANY misses the mark a little bit, but isn’t particularly
>>>>>>>> bad. I do worry that it may cause some confusion about its purpose
>> and
>>>>>>>> power. I think OPAQUE would more clearly express what’s going on."
>>>>>>>> 
>>>>>>>> Also resources like Wikipedia [1] show that this terminology is
>>>> common:
>>>>>>>> "a data type whose concrete data structure is not defined [...] its
>>>>>>>> values can only be manipulated by calling subroutines that have
>> access
>>>>>>>> to the missing information"
>>>>>>>> 
>>>>>>>> I would therefore vote for refactoring the type name because it is
>> not
>>>>>>>> used much yet.
>>>>>>>> 
>>>>>>>> Implications are:
>>>>>>>> 
>>>>>>>> - a new parser keyword "OPAQUE" and changed SQL parser
>>>>>>>> 
>>>>>>>> - changes for logical type root, logical type visitors, and their
>>>> usages
>>>>>>>> What do you think?
>>>>>>>> 
>>>>>>>> Thanks,
>>>>>>>> 
>>>>>>>> Timo
>>>>>>>> 
>>>>>>>> [1] https://en.wikipedia.org/wiki/Opaque_data_type
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>> --
>>>>>>> Xuefu Zhang
>>>>>>> 
>>>>>>> "In Honey We Trust!"
>>>>>> 
>>>> 
>> 
>> 
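
A minimal Java sketch of an opaque type in the sense quoted above: the concrete structure is hidden, and values can only be manipulated through routines that know it. All names are made up for illustration:

{code:java}
import java.nio.charset.StandardCharsets;

// An opaque handle: callers can hold it and pass it around, but cannot inspect
// its internals; only the routines below know how to interpret the payload.
public final class OpaqueHandle {

    private final byte[] payload; // concrete structure deliberately hidden

    private OpaqueHandle(byte[] payload) {
        this.payload = payload;
    }

    public static OpaqueHandle wrap(String value) {
        return new OpaqueHandle(value.getBytes(StandardCharsets.UTF_8));
    }

    public static String unwrap(OpaqueHandle handle) {
        return new String(handle.payload, StandardCharsets.UTF_8);
    }
}
{code}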



Re: How to corretly use checkstyle in IntelliJ IDEA

2019-09-25 Thread Terry Wang
Hi Felipe,

If you must use Guava directly, you can modify the config in
/tools/maven/suppressions.xml, similar to the Cassandra connector's entries.
As for the checkstyle plugin not working well in your dev environment, you should
fix the errors one by one according to the error messages.
Hope it helps you~

Best,
Terry Wang



> 在 2019年9月25日,下午4:00,Till Rohrmann  写道:
> 
> Hi Felipe,
> 
> Flink's checkstyle prohibits the direct usage of Guava. Please import the
> shaded Guava version `import
> org.apache.flink.shaded.guava18.com.google.common.base.Strings;`.
> 
> Cheers,
> Till
> 
> On Wed, Sep 25, 2019 at 9:31 AM Felipe Gutierrez <
> felipe.o.gutier...@gmail.com> wrote:
> 
>> Hi,
>> 
>> is there another way to use the checkstyle.xml with IntelliJ IDEA that is
>> different from the official documentation [1]?
>> 
>> I imported flink source code and I developed my own function on the code.
>> After that, I run the check style feature on Intellij IDEA 2019 and it
>> points a lot of errors regarding the checkstyle on the original code.
>> 
>> In my code, for instance, I even cannot use "import
>> com.google.common.base.Strings;" and I don't have a clue how to import it
>> correctly.
>> 
>> [1]
>> 
>> https://ci.apache.org/projects/flink/flink-docs-stable/flinkDev/ide_setup.html#checkstyle-for-java
>> 
>> Thanks,
>> Felipe
>> 
>> --
>> -- Felipe Gutierrez
>> 
>> -- skype: felipe.o.gutierrez
>> -- https://felipeogutierrez.blogspot.com
>> 



Re: [DISCUSS] FLIP 69 - Flink SQL DDL Enhancement

2019-09-24 Thread Terry Wang
Thanks Bowen for your insightful comments. I'll think them over and make the
corresponding improvements.
Once finished, I'll update this mailing thread again.
Best,
Terry Wang



> 在 2019年9月25日,上午8:28,Bowen Li  写道:
> 
> BTW, will there be a "CREATE/DROP CATALOG" DDL?
> 
> Though it's not SQL standard, I can see it'll be useful and handy for our end 
> users in many cases.
> 
> On Mon, Sep 23, 2019 at 12:28 PM Bowen Li  wrote:
> Hi Terry,
> 
> Thanks for driving the effort! I left some comments in the doc.
> 
> AFAIU, the biggest motivation is to support DDLs in sql parser so that both 
> Table API and SQL CLI can share the stack, despite that SQL CLI has already 
> supported some commands itself. However, I don't see details on how SQL CLI 
> would migrate and depend on sql parser, and how Table API and SQL CLI would 
> actually share SQL parser. I'm not sure yet how much work that will take, 
> just want to double check that you didn't include them because they are very 
> trivial according to your estimate?
> 
> 
> On Mon, Sep 16, 2019 at 1:46 AM Terry Wang  wrote:
> Hi everyone,
> 
> In flink 1.9, we have introduced some awesome features such as complete 
> catalog support[1] and sql ddl support[2]. These features have been a 
> critical integration for Flink to be able to manage data and metadata like a 
> classic RDBMS and make developers more easy to construct their 
> real-time/off-line warehouse or sth similar base on flink.
> 
> But there is still a lack of support on how Flink SQL DDL to manage metadata 
> and data like classic RDBMS such as `alter table rename` and so on.
> 
> So I’d like to kick off a discussion on enhancing Flink Sql Ddls:
> https://docs.google.com/document/d/1mhZmx1h2ecfL0x8OzYD1n-nVRn4yE7pwk4jGed4k7kc/edit?usp=sharing
> 
> In short, it:
> - Add Catalog DDL enhancement support:  show catalogs / describe 
> catalog / use catalog
> - Add Database DDL enhancement support:  show databses / create 
> database / drop database/ alter database 
> - Add Table DDL enhancement support:show tables/ describe table / 
> alter table
> - Add Function DDL enhancement support: show functions/ create 
> function /drop function
> 
> Looking forward to your opinions.
> 
> Best,
> Terry Wang
> 
> 
> 
> [1]: https://issues.apache.org/jira/browse/FLINK-11275
> [2]: https://issues.apache.org/jira/browse/FLINK-10232



Re: [VOTE] FLIP-57: Rework FunctionCatalog

2019-09-24 Thread Terry Wang
+1

Best,
Terry Wang



> 在 2019年9月24日,上午10:42,Kurt Young  写道:
> 
> +1
> 
> Best,
> Kurt
> 
> 
> On Tue, Sep 24, 2019 at 2:30 AM Bowen Li  wrote:
> 
>> Hi all,
>> 
>> I'd like to start a voting thread for FLIP-57 [1], which we've reached
>> consensus in [2].
>> 
>> This voting will be open for minimum 3 days till 6:30pm UTC, Sep 26.
>> 
>> Thanks,
>> Bowen
>> 
>> [1]
>> 
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-57%3A+Rework+FunctionCatalog
>> [2]
>> 
>> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-57-Rework-FunctionCatalog-td32291.html#a32613
>> 



Re: [VOTE] FLIP-63: Rework table partition support

2019-09-24 Thread Terry Wang
+1, Overall looks good.

Best,
Terry Wang



> 在 2019年9月24日,下午5:02,Kurt Young  写道:
> 
> +1 from my side. Some implementation details could be revisited
> again during code reviewing.
> 
> Best,
> Kurt
> 
> 
> On Tue, Sep 24, 2019 at 3:14 PM Jingsong Li  wrote:
> 
>> Just to clarify:
>> 
>> FLIP wiki:
>> 
>> 
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-63%3A+Rework+table+partition+support
>> 
>> 
>> Discussion thread:
>> 
>> 
>> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-63-Rework-table-partition-support-td32770.html
>> 
>> 
>> Google Doc:
>> 
>> https://docs.google.com/document/d/15R3vZ1R_pAHcvJkRx_CWleXgl08WL3k_ZpnWSdzP7GY/edit?usp=sharing
>> 
>> Best,
>> Jingsong Lee
>> 
>> On Tue, Sep 24, 2019 at 11:43 AM Jingsong Lee 
>> wrote:
>> 
>>> Thank you for your reminder.
>>> Updated.
>>> 
>>> Best,
>>> Jingsong Lee
>>> 
>>> On Tue, Sep 24, 2019 at 11:36 AM Kurt Young  wrote:
>>> 
>>>> Looks like the wiki is not aligned with latest google doc, could
>>>> you update it first?
>>>> 
>>>> Best,
>>>> Kurt
>>>> 
>>>> 
>>>> On Tue, Sep 24, 2019 at 10:19 AM Jingsong Lee 
>>>> wrote:
>>>> 
>>>>> Hi Flink devs, after another round of discussion.
>>>>> 
>>>>> I would like to re-start the voting for FLIP-63
>>>>> Rework table partition support.
>>>>> 
>>>>> FLIP wiki:
>>>>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-63%3A+Rework+table+partition+support
>>>>> 
>>>>> Discussion thread:
>>>>> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-FLIP-63-Rework-table-partition-support-td32770.html
>>>>> 
>>>>> Google Doc:
>>>>> https://docs.google.com/document/d/15R3vZ1R_pAHcvJkRx_CWleXgl08WL3k_ZpnWSdzP7GY/edit?usp=sharing
>>>>> 
>>>>> Thanks,
>>>>> 
>>>>> Best,
>>>>> Jingsong Lee
>>>>> 
>>>> 
>>> 
>>> 
>>> --
>>> Best, Jingsong Lee
>>> 
>> 
>> 
>> --
>> Best, Jingsong Lee
>> 



Re: [DISCUSS] Releasing Flink 1.9.1

2019-09-23 Thread Terry Wang
+1 for the 1.9.1 release and for Jark being the RM.
Thanks Jark for driving on this.

Best,
Terry Wang



> 在 2019年9月24日,下午2:19,Jark Wu  写道:
> 
> Thanks Till for reviewing FLINK-14010.
> 
> Hi Jeff, I think it makes sense to merge FLINK-13708 before the release (PR
> has been reviewed).
> 
> Hi Debasish, FLINK-12501 has already been merged in 1.10.0. I'm fine to
> cherry-pick it to 1.9 if we
> have a consensus this issue could be viewed as a bug. We can continue the
> discussion in the JIRA.
> 
> Best,
> Jark
> 
> 
> On Tue, 24 Sep 2019 at 13:39, Dian Fu  wrote:
> 
>> +1 for 1.9.1 release and Jark being the RM. Thanks Jark for kicking off
>> this release and the volunteering.
>> 
>> Regards,
>> Dian
>> 
>>> 在 2019年9月24日,上午10:45,Kurt Young  写道:
>>> 
>>> +1 for the 1.9.1 release and for Jark being the RM.
>>> Thanks Jark for the volunteering.
>>> 
>>> Best,
>>> Kurt
>>> 
>>> 
>>> On Mon, Sep 23, 2019 at 9:17 PM Till Rohrmann 
>> wrote:
>>> 
>>>> +1 for the 1.9.1 release and for Jark being the RM. I'll help with the
>>>> review of FLINK-14010.
>>>> 
>>>> Cheers,
>>>> Till
>>>> 
>>>> On Mon, Sep 23, 2019 at 8:32 AM Debasish Ghosh <
>> ghosh.debas...@gmail.com>
>>>> wrote:
>>>> 
>>>>> I hope https://issues.apache.org/jira/browse/FLINK-12501 will also be
>>>> part
>>>>> of 1.9.1 ..
>>>>> 
>>>>> regards.
>>>>> 
>>>>> On Mon, Sep 23, 2019 at 11:39 AM Jeff Zhang  wrote:
>>>>> 
>>>>>> FLINK-13708 is also very critical IMO. This would cause invalid flink
>>>> job
>>>>>> (doubled output)
>>>>>> 
>>>>>> https://issues.apache.org/jira/browse/FLINK-13708
>>>>>> 
>>>>>> Jark Wu  于2019年9月23日周一 下午2:03写道:
>>>>>> 
>>>>>>> Hi everyone,
>>>>>>> 
>>>>>>> It has already been a month since we released Flink 1.9.0.
>>>>>>> We already have many important bug fixes from which our users can
>>>>> benefit
>>>>>>> in the release-1.9 branch (83 resolved issues).
>>>>>>> Therefore, I propose to create the next bug fix release for Flink
>>>> 1.9.
>>>>>>> 
>>>>>>> Most notable fixes are:
>>>>>>> 
>>>>>>> - [FLINK-13526] When switching to a non existing catalog or database
>>>> in
>>>>>> the
>>>>>>> SQL Client the client crashes.
>>>>>>> - [FLINK-13568] It is not possible to create a table with a "STRING"
>>>>> data
>>>>>>> type via the SQL DDL.
>>>>>>> - [FLINK-13941] Prevent data-loss by not cleaning up small part files
>>>>>> from
>>>>>>> S3.
>>>>>>> - [FLINK-13490][jdbc] If one column value is null when reading JDBC,
>>>>> the
>>>>>>> following values will all be null.
>>>>>>> - [FLINK-14107][kinesis] When using event time alignment with the
>>>>>> Kinesis
>>>>>>> Consumer the consumer might deadlock in one corner case.
>>>>>>> 
>>>>>>> Furthermore, I would like the following critical issues to be merged
>>>>>> before
>>>>>>> 1.9.1 release:
>>>>>>> 
>>>>>>> - [FLINK-14118] Reduce the unnecessary flushing when there is no data
>>>>>>> available for flush which can save 20% ~ 40% CPU. (reviewing)
>>>>>>> - [FLINK-13386] Fix A couple of issues with the new dashboard have
>>>>>> already
>>>>>>> been filed. (PR is created, need review)
>>>>>>> - [FLINK-14010][yarn] The Flink YARN cluster can get into an
>>>>> inconsistent
>>>>>>> state in some cases, where
>>>>>>> leadership for JobManager, ResourceManager and Dispatcher components
>>>>> is
>>>>>>> split between two master processes. (PR is created, need review)
>>>>>>> 
>>>>>>> I would volunteer as release manager and kick off the release process
>>>>>> once
>>>>>>> blocker issues has been merged. What do you think?
>>>>>>> 
>>>>>>> If there is any other blocker issues need to be fixed in 1.9.1,
>>>> please
>>>>>> let
>>>>>>> me know.
>>>>>>> 
>>>>>>> Cheers,
>>>>>>> Jark
>>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> --
>>>>>> Best Regards
>>>>>> 
>>>>>> Jeff Zhang
>>>>>> 
>>>>> 
>>>>> 
>>>>> --
>>>>> Debasish Ghosh
>>>>> http://manning.com/ghosh2
>>>>> http://manning.com/ghosh
>>>>> 
>>>>> Twttr: @debasishg
>>>>> Blog: http://debasishg.blogspot.com
>>>>> Code: http://github.com/debasishg
>>>>> 
>>>> 
>> 
>> 



Re: non-deserializable root cause in DeclineCheckpoint

2019-09-22 Thread Terry Wang
Hi Jeffrey,

You are right, and I understood what you said after studying the
class org.apache.flink.util.SerializedThrowable.
I prefer fix no. 2 you mentioned:
CheckpointException should always capture its wrapped exception as a
SerializedThrowable.
Looking forward to seeing your PR soon :)

Best,
Terry Wang



> 在 2019年9月23日,上午11:48,Jeffrey Martin  写道:
> 
> Hi Terry,
> 
> KafkaException comes in through the job's dependencies (it's defined in the
> kafka-clients jar packed up in the fat job jar) and is on either the TM nor
> JM default classpath. The job running in the TM includes the job
> dependencies and so can throw a KafkaException but the JM can't deserialize
> it because it's not available on the default classpath.
> 
> I'm suggesting defensively wrapping all causes of a CheckpointException in
> a SerializedThrowable (in addition to defensively wrapping everything
> except a CheckpointException). I believe SerializedThrowable is there
> specifically for this case, i.e. where a job in the TM sends the JM an
> exception that's defined only in the job itself.
> 
> It might be clearer if I just put up a PR :) I'd be happy to and it'll be
> very short.
> 
> Best,
> 
> Jeff
> 
> On Sun, Sep 22, 2019 at 7:45 PM Terry Wang  wrote:
> 
>> Hi, Jeffrey~
>> 
>> Thanks for your detailed explanation and I understood why job failed with
>> flink 1.9.
>> 
>> But the two fixes you mentioned may still not work well. As KafkaException
>> can be serialized
>> in TM for there is necessary jar in its classpath but not in JM, so maybe
>> it’s impossible to check
>> the possibility of serialization in advance.
>> Do I understand right?
>> 
>> 
>> 
>> Best,
>> Terry Wang
>> 
>> 
>> 
>>> 在 2019年9月23日,上午5:17,Jeffrey Martin  写道:
>>> 
>>> Thanks for suggestion, Terry. I've investigated a bit further.
>>> 
>>> DeclineCheckpoint specifically checks for the possibility of an exception
>>> that the JM won't be able to deserialize (i.e. anything other than a
>>> Checkpoint exception). It just doesn't check for the possibility of a
>>> CheckpointException that can't be deserialize because its root cause
>> can't
>>> be deserialize.
>>> 
>>> I think the job succeeding on 1.8 and failing on 1.9 was a red herring --
>>> 1.9 broke the FlinkKafkaProducer API so I wound up having to set the
>>> Semantic explicitly on 1.9. I set it to EXACTLY_ONCE, which caused
>>> checkpoints to fail sometimes. That caused the KafkaException to be
>>> propagated to the JM as the root cause of a CheckpointException.
>>> 
>>> On Sun, Sep 22, 2019 at 5:03 AM Terry Wang  wrote:
>>> 
>>>> Hi, Jeffrey~
>>>> 
>>>> I think two fixes you mentioned may not work in your case.
>>>> This problem https://issues.apache.org/jira/browse/FLINK-14076
>>>> is caused by TM and JM jar package environment inconsistent or jar
>>>> loaded behavior inconsistent in nature.
>>>> Maybe the behavior  of standalone cluster’s dynamic class loader changed
>>>> in flink 1.9 since you mentioned that your program run normally in flink
>>>> 1.8.
>>>> Just a thought from me.
>>>> Hope to be useful~
>>>> 
>>>> Best,
>>>> Terry Wang
>>>> 
>>>> 
>>>> 
>>>>> 在 2019年9月21日,上午2:58,Jeffrey Martin  写道:
>>>>> 
>>>>> JIRA ticket: https://issues.apache.org/jira/browse/FLINK-14076
>>>>> 
>>>>> I'm on Flink v1.9 with the Kafka connector and a standalone JM.
>>>>> 
>>>>> If FlinkKafkaProducer fails while checkpointing, it throws a
>>>> KafkaException
>>>>> which gets wrapped in a CheckpointException which is sent to the JM as
>> a
>>>>> DeclineCheckpoint. KafkaException isn't on the JM default classpath, so
>>>> the
>>>>> JM throws a fairly cryptic ClassNotFoundException. The details of the
>>>>> KafkaException wind up suppressed so it's impossible to figure out what
>>>>> actually went wrong.
>>>>> 
>>>>> I can think of two fixes that would prevent this from occurring in the
>>>>> Kafka or other connectors in the future:
>>>>> 1. DeclineCheckpoint should always send a SerializedThrowable to the JM
>>>>> rather than allowing CheckpointExceptions with non-deserializable root
>>>>> causes to slip through
>>>>> 2. CheckpointException should always capture its wrapped exception as a
>>>>> SerializedThrowable (i.e., use 'super(new SerializedThrowable(cause))'
>>>>> rather than 'super(cause)').
>>>>> 
>>>>> Thoughts?
>>>> 
>>>> 
>> 
>> 
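
A minimal sketch of fix no. 2 discussed in this thread, wrapping the cause in org.apache.flink.util.SerializedThrowable so the JM never needs the cause's class on its classpath. The constructor shape is simplified for illustration; the real CheckpointException has more constructors:

{code:java}
import org.apache.flink.util.SerializedThrowable;

public class CheckpointException extends Exception {

    public CheckpointException(String message, Throwable cause) {
        // SerializedThrowable captures the cause's class name, message and
        // stack trace as plain strings, so the JM can deserialize the failure
        // even when the cause's class (e.g. KafkaException) is absent there.
        super(message, new SerializedThrowable(cause));
    }
}
{code}

With this shape, a KafkaException raised on the TM still surfaces on the JM as a readable SerializedThrowable instead of a cryptic ClassNotFoundException.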



Re: use of org.apache.flink.yarn.cli.FlinkYarnSessionCli in Flink Sql client

2019-09-22 Thread Terry Wang
Hi Dipanjan:

I just looked through the Flink SQL client code and reached the same conclusion
as you.
Looking forward to other comments.

Best,
Terry Wang



> 在 2019年9月22日,下午11:53,Dipanjan Mazumder  写道:
> 
> Hi,
> Thanks again for responding to my earlier queries.
> I was again going through the Flink SQL client code and came across the
> default custom command line. A few days back I came to know that the Flink SQL
> client is not supported in a full-fledged cluster with different resource
> managers like YARN, but this "org.apache.flink.yarn.cli.FlinkYarnSessionCli"
> class seems to be used by the SQL client to establish a session with a
> YARN-managed cluster.
> 
> Am I wrong in thinking this, or is there some other use for this class? Please
> kindly help on the same.
> Regards
> Dipanjan



Re: non-deserializable root cause in DeclineCheckpoint

2019-09-22 Thread Terry Wang
Hi, Jeffrey~

Thanks for your detailed explanation; now I understand why the job failed with
flink 1.9.

But the two fixes you mentioned may still not work well. KafkaException can be
serialized on the TM, because the necessary jar is on its classpath, but it
cannot be deserialized on the JM, so it may be impossible to check
serializability in advance.
Do I understand correctly?



Best,
Terry Wang



> 在 2019年9月23日,上午5:17,Jeffrey Martin  写道:
> 
> Thanks for suggestion, Terry. I've investigated a bit further.
> 
> DeclineCheckpoint specifically checks for the possibility of an exception
> that the JM won't be able to deserialize (i.e. anything other than a
> Checkpoint exception). It just doesn't check for the possibility of a
> CheckpointException that can't be deserialize because its root cause can't
> be deserialize.
> 
> I think the job succeeding on 1.8 and failing on 1.9 was a red herring --
> 1.9 broke the FlinkKafkaProducer API so I wound up having to set the
> Semantic explicitly on 1.9. I set it to EXACTLY_ONCE, which caused
> checkpoints to fail sometimes. That caused the KafkaException to be
> propagated to the JM as the root cause of a CheckpointException.
> 
> On Sun, Sep 22, 2019 at 5:03 AM Terry Wang  wrote:
> 
>> Hi, Jeffrey~
>> 
>> I think two fixes you mentioned may not work in your case.
>> This problem https://issues.apache.org/jira/browse/FLINK-14076
>> is caused by TM and JM
>> jar package environment inconsistent or jar loaded behavior inconsistent in
>> nature.
>> Maybe the behavior  of standalone cluster’s dynamic class loader changed
>> in flink 1.9 since you mentioned that your program run normally in flink
>> 1.8.
>> Just a thought from me.
>> Hope to be useful~
>> 
>> Best,
>> Terry Wang
>> 
>> 
>> 
>>> 在 2019年9月21日,上午2:58,Jeffrey Martin  写道:
>>> 
>>> JIRA ticket: https://issues.apache.org/jira/browse/FLINK-14076
>>> 
>>> I'm on Flink v1.9 with the Kafka connector and a standalone JM.
>>> 
>>> If FlinkKafkaProducer fails while checkpointing, it throws a
>> KafkaException
>>> which gets wrapped in a CheckpointException which is sent to the JM as a
>>> DeclineCheckpoint. KafkaException isn't on the JM default classpath, so
>> the
>>> JM throws a fairly cryptic ClassNotFoundException. The details of the
>>> KafkaException wind up suppressed so it's impossible to figure out what
>>> actually went wrong.
>>> 
>>> I can think of two fixes that would prevent this from occurring in the
>>> Kafka or other connectors in the future:
>>> 1. DeclineCheckpoint should always send a SerializedThrowable to the JM
>>> rather than allowing CheckpointExceptions with non-deserializable root
>>> causes to slip through
>>> 2. CheckpointException should always capture its wrapped exception as a
>>> SerializedThrowable (i.e., use 'super(new SerializedThrowable(cause))'
>>> rather than 'super(cause)').
>>> 
>>> Thoughts?
>> 
>> 



Re: Best coding practises guide while programming using flink apis

2019-09-22 Thread Terry Wang
Hi, Deepak~

I appreciate your idea and have cc'ed the dev mailing list too.

Best,
Terry Wang



> 在 2019年9月22日,下午2:12,Deepak Sharma  写道:
> 
> Hi All
> I guess we need to put some examples in the documentation around best coding
> practices, concurrency, non-blocking IO and design patterns while writing
> Apache Flink pipelines.
> Is there any such guide available?
> E.g. when and how to use the GoF design patterns. A code snippet can be added
> as well to explain each one.
> 
> This guide can come from people already running Flink in production who have
> written it with all best practices in mind.
> It will help in greater and wider adoption.
> 
> Just a thought.
> Please let me know if anyone wants to contribute and i can lead this 
> initiative by documenting in flink wiki.
> 
> Thanks
> -- 
> Thanks
> Deepak
> www.bigdatabig.com
> www.keosha.net


Re: non-deserializable root cause in DeclineCheckpoint

2019-09-22 Thread Terry Wang
Hi, Jeffrey~

I think the two fixes you mentioned may not work in your case.
This problem (https://issues.apache.org/jira/browse/FLINK-14076) is in essence
caused by the TM and JM having inconsistent jar environments or inconsistent
class-loading behavior.
Maybe the behavior of the standalone cluster's dynamic class loader changed in
flink 1.9, since you mentioned that your program runs normally in flink 1.8.
Just a thought from me.
Hope it's useful~

Best,
Terry Wang



> 在 2019年9月21日,上午2:58,Jeffrey Martin  写道:
> 
> JIRA ticket: https://issues.apache.org/jira/browse/FLINK-14076
> 
> I'm on Flink v1.9 with the Kafka connector and a standalone JM.
> 
> If FlinkKafkaProducer fails while checkpointing, it throws a KafkaException
> which gets wrapped in a CheckpointException which is sent to the JM as a
> DeclineCheckpoint. KafkaException isn't on the JM default classpath, so the
> JM throws a fairly cryptic ClassNotFoundException. The details of the
> KafkaException wind up suppressed so it's impossible to figure out what
> actually went wrong.
> 
> I can think of two fixes that would prevent this from occurring in the
> Kafka or other connectors in the future:
> 1. DeclineCheckpoint should always send a SerializedThrowable to the JM
> rather than allowing CheckpointExceptions with non-deserializable root
> causes to slip through
> 2. CheckpointException should always capture its wrapped exception as a
> SerializedThrowable (i.e., use 'super(new SerializedThrowable(cause))'
> rather than 'super(cause)').
> 
> Thoughts?



Re: Confluence permission for FLIP creation

2019-09-20 Thread Terry Wang
Thanks, Fabian

Best,
Terry Wang



> 在 2019年9月19日,下午7:37,Fabian Hueske  写道:
> 
> Hi Terry,
> 
> I gave you permissions.
> 
> Thanks, Fabian
> 
> Am Do., 19. Sept. 2019 um 04:09 Uhr schrieb Terry Wang :
> 
>> Hi all,
>> 
>> As communicated in an email thread, I'm proposing Flink SQL ddl
>> enhancement. I have a draft design doc that I'd like to convert it to a
>> FLIP. Thus, it would be great if anyone who can grant me the write access
>> to Confluence. My Confluence ID is zjuwangg.
>> 
>> It would be nice if any of you can help on this.
>> 
>> Best,
>> Terry Wang
>> 
>> 
>> 
>> 



Confluence permission for FLIP creation

2019-09-18 Thread Terry Wang
Hi all, 

As communicated in an email thread, I'm proposing a Flink SQL DDL enhancement. I
have a draft design doc that I'd like to convert to a FLIP. Thus, it would
be great if someone could grant me write access to Confluence. My
Confluence ID is zjuwangg.

It would be nice if any of you could help with this.

Best,
Terry Wang





[DISCUSS] FLIP 69 - Flink SQL DDL Enhancement

2019-09-16 Thread Terry Wang
Hi everyone,

In Flink 1.9, we have introduced some awesome features such as complete catalog
support[1] and SQL DDL support[2]. These features are a critical step toward
Flink being able to manage data and metadata like a classic RDBMS, and they make
it easier for developers to build a real-time/offline warehouse or something
similar based on Flink.

But Flink SQL DDL still lacks support for managing metadata and data the way a
classic RDBMS does, e.g. `ALTER TABLE ... RENAME` and so on.

So I'd like to kick off a discussion on enhancing Flink SQL DDLs:
https://docs.google.com/document/d/1mhZmx1h2ecfL0x8OzYD1n-nVRn4yE7pwk4jGed4k7kc/edit?usp=sharing

In short, it proposes to (a usage sketch follows this list):
- Add Catalog DDL enhancement support: show catalogs / describe catalog / use catalog
- Add Database DDL enhancement support: show databases / create database / drop database / alter database
- Add Table DDL enhancement support: show tables / describe table / alter table
- Add Function DDL enhancement support: show functions / create function / drop function

Looking forward to your opinions.

Best,
Terry Wang



[1]: https://issues.apache.org/jira/browse/FLINK-11275
[2]: https://issues.apache.org/jira/browse/FLINK-10232

Re: [ANNOUNCE] Zili Chen becomes a Flink committer

2019-09-11 Thread Terry Wang
Congratulations!

Best,
Terry Wang



> 在 2019年9月11日,下午5:28,Dian Fu  写道:
> 
> Congratulations!
> 
>> 在 2019年9月11日,下午5:26,Jeff Zhang  写道:
>> 
>> Congratulations Zili Chen! 
>> 
>> Wesley Peng  于2019年9月11日周三 下午5:25写道:
>> Hi
>> 
>> on 2019/9/11 17:22, Till Rohrmann wrote:
>> > I'm very happy to announce that Zili Chen (some of you might also know 
>> > him as Tison Kun) accepted the offer of the Flink PMC to become a 
>> > committer of the Flink project.
>> 
>> Congratulations Zili Chen.
>> 
>> regards.
>> 
>> 
>> -- 
>> Best Regards
>> 
>> Jeff Zhang
> 



Re: [VOTE] FLIP-58: Flink Python User-Defined Function for Table API

2019-08-29 Thread Terry Wang
+1. That would be very helpful.
Best,
Terry Wang



> 在 2019年8月30日,上午10:18,Jark Wu  写道:
> 
> +1
> 
> Thanks for the great work!
> 
> On Fri, 30 Aug 2019 at 10:04, Xingbo Huang  wrote:
> 
>> Hi Dian,
>> 
>> +1,
>> Thanks a lot for driving this.
>> 
>> Best,
>> Xingbo
>>> 在 2019年8月30日,上午9:39,Wei Zhong  写道:
>>> 
>>> Hi Dian,
>>> 
>>> +1 non-binding
>>> Thanks for driving this!
>>> 
>>> Best, Wei
>>> 
>>>> 在 2019年8月29日,09:25,Hequn Cheng  写道:
>>>> 
>>>> Hi Dian,
>>>> 
>>>> +1
>>>> Thanks a lot for driving this.
>>>> 
>>>> Best, Hequn
>>>> 
>>>> On Wed, Aug 28, 2019 at 2:01 PM jincheng sun 
>>>> wrote:
>>>> 
>>>>> Hi Dian,
>>>>> 
>>>>> +1, Thanks for your great job!
>>>>> 
>>>>> Best,
>>>>> Jincheng
>>>>> 
>>>>> Dian Fu  于2019年8月28日周三 上午11:04写道:
>>>>> 
>>>>>> Hi all,
>>>>>> 
>>>>>> I'd like to start a voting thread for FLIP-58 [1] since that we have
>>>>>> reached an agreement on the design in the discussion thread [2],
>>>>>> 
>>>>>> This vote will be open for at least 72 hours. Unless there is an
>>>>>> objection, I will try to close it by Sept 2, 2019 00:00 UTC if we have
>>>>>> received sufficient votes.
>>>>>> 
>>>>>> PS: This doesn't mean that we cannot further improve the design. We
>> can
>>>>>> still discuss the implementation details case by case in the JIRA as
>> long
>>>>>> as it doesn't affect the overall design.
>>>>>> 
>>>>>> [1]
>>>>>> 
>>>>> 
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-58%3A+Flink+Python+User-Defined+Function+for+Table+API
>>>>>> <
>>>>>> 
>>>>> 
>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-58:+Flink+Python+User-Defined+Function+for+Table+API
>>>>>>> 
>>>>>> [2]
>>>>>> 
>>>>> 
>> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Flink-Python-User-Defined-Function-for-Table-API-td31673.html
>>>>>> <
>>>>>> 
>>>>> 
>> http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Flink-Python-User-Defined-Function-for-Table-API-td31673.html
>>>>>>> 
>>>>>> 
>>>>>> Thanks,
>>>>>> Dian
>>>>> 
>>> 
>> 
>> 



[jira] [Created] (FLINK-13896) Scala 2.11 maven compile should target Java 1.8

2019-08-29 Thread Terry Wang (Jira)
Terry Wang created FLINK-13896:
--

 Summary: Scala 2.11 maven compile should target Java 1.8
 Key: FLINK-13896
 URL: https://issues.apache.org/jira/browse/FLINK-13896
 Project: Flink
  Issue Type: Bug
  Components: Build System
Affects Versions: 1.9.0
 Environment: When setting up the TableEnvironment in Scala as follows:

 
{code:java}
// we can reproduce this problem by putting the following code in
// org.apache.flink.table.api.scala.internal.StreamTableEnvironmentImplTest

@Test
def testCreateEnvironment(): Unit = {
 val settings = 
EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build();
 val tEnv = TableEnvironment.create(settings);
}
{code}

Then mvn test would fail with an error message like:

error: Static methods in interface require -target:JVM-1.8

We can fix this bug by adding:

 <args>
   <arg>-target:jvm-1.8</arg>
 </args>

to the scala-maven-plugin configuration.
Reporter: Terry Wang






--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (FLINK-13869) Hive built-in function can not work in blink planner stream mode

2019-08-27 Thread Terry Wang (Jira)
Terry Wang created FLINK-13869:
--

 Summary: Hive built-in function can not work in blink planner 
stream mode
 Key: FLINK-13869
 URL: https://issues.apache.org/jira/browse/FLINK-13869
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Hive, Table SQL / Planner
Affects Versions: 1.9.0
 Environment: method call stack: see attachment image-2019-08-27-15-37-11-230.png
Reporter: Terry Wang
 Fix For: 1.10.0
 Attachments: image-2019-08-27-15-36-57-662.png, 
image-2019-08-27-15-37-11-230.png

In Flink, when the StreamTableEnvironment is created through EnvironmentSettings
with the blink planner and a Hive built-in UDF is used in the Table API, the
following error is reported. Comparing the call stack with the flink planner
(see the attached screenshots) shows that setArgumentTypeAndConstants is not
called to initialize the function, which results in a NullPointerException.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


Re: CiBot Update

2019-08-26 Thread Terry Wang
Very helpful! Thanks Chesnay!
Best,
Terry Wang



> 在 2019年8月23日,下午11:47,Ethan Li  写道:
> 
> Thank you very much Chesnay! This is helpful
> 
>> On Aug 23, 2019, at 2:58 AM, Chesnay Schepler  wrote:
>> 
>> @Ethan Li The source for the CiBot is available here 
>> <https://github.com/flink-ci/ci-bot/>. The implementation of this command is 
>> tightly connected to how the CiBot works; but conceptually it looks at a PR, 
>> finds the most recent build that ran, and uses the Travis REST API to 
>> restart the build.
>> Additionally, it keeps track of which comments have been processed by 
>> storing the comment ID in the CI report.
>> If you have further questions, feel free to ping me directly.
>> 
>> @Dianfu I agree, we should include it somewhere in either the flinkbot 
>> template or the CI report.
>> 
>> On 23/08/2019 03:35, Dian Fu wrote:
>>> Thanks Chesnay for your great work! A very useful feature!
>>> 
>>> Just one minor suggestion: It will be better if we could add this command 
>>> to the section "Bot commands" in the flinkbot template.
>>> 
>>> Regards,
>>> Dian
>>> 
>>>> 在 2019年8月23日,上午2:06,Ethan Li  写道:
>>>> 
>>>> My question is specifically about implementation of "@flinkbot run travis"
>>>> 
>>>>> On Aug 22, 2019, at 1:06 PM, Ethan Li  wrote:
>>>>> 
>>>>> Hi Chesnay,
>>>>> 
>>>>> This is really nice feature!
>>>>> 
>>>>> Can I ask how is this implemented? Do you have the related Jira/PR/docs 
>>>>> that I can take a look? I’d like to introduce it to another project if 
>>>>> applicable. Thank you very much!
>>>>> 
>>>>> Best,
>>>>> Ethan
>>>>> 
>>>>>> On Aug 22, 2019, at 8:34 AM, Biao Liu >>>>> <mailto:mmyy1...@gmail.com>> wrote:
>>>>>> 
>>>>>> Thanks Chesnay a lot,
>>>>>> 
>>>>>> I love this feature!
>>>>>> 
>>>>>> Thanks,
>>>>>> Biao /'bɪ.aʊ/
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> On Thu, 22 Aug 2019 at 20:55, Hequn Cheng >>>>> <mailto:chenghe...@gmail.com>> wrote:
>>>>>> 
>>>>>>> Cool, thanks Chesnay a lot for the improvement!
>>>>>>> 
>>>>>>> Best, Hequn
>>>>>>> 
>>>>>>> On Thu, Aug 22, 2019 at 5:02 PM Zhu Zhu >>>>>> <mailto:reed...@gmail.com>> wrote:
>>>>>>> 
>>>>>>>> Thanks Chesnay for the CI improvement!
>>>>>>>> It is very helpful.
>>>>>>>> 
>>>>>>>> Thanks,
>>>>>>>> Zhu Zhu
>>>>>>>> 
>>>>>>>> zhijiang >>>>>>> <mailto:wangzhijiang...@aliyun.com.invalid>> 于2019年8月22日周四 下午4:18写道:
>>>>>>>> 
>>>>>>>>> It is really very convenient now. Valuable work, Chesnay!
>>>>>>>>> 
>>>>>>>>> Best,
>>>>>>>>> Zhijiang
>>>>>>>>> --
>>>>>>>>> From:Till Rohrmann >>>>>>>> <mailto:trohrm...@apache.org>>
>>>>>>>>> Send Time:2019年8月22日(星期四) 10:13
>>>>>>>>> To:dev mailto:dev@flink.apache.org>>
>>>>>>>>> Subject:Re: CiBot Update
>>>>>>>>> 
>>>>>>>>> Thanks for the continuous work on the CiBot Chesnay!
>>>>>>>>> 
>>>>>>>>> Cheers,
>>>>>>>>> Till
>>>>>>>>> 
>>>>>>>>> On Thu, Aug 22, 2019 at 9:47 AM Jark Wu >>>>>>>> <mailto:imj...@gmail.com>> wrote:
>>>>>>>>> 
>>>>>>>>>> Great work! Thanks Chesnay!
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> 
>>>>>>>>>> On Thu, 22 Aug 2019 at 15:42, Xintong Song >>>>>>>>> <mailto:tonysong...@gmail.com>>
>>>>>>>>
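
Restarting the most recent build, as described in this thread, boils down to one call against the Travis REST API. A minimal sketch assuming API v3 on travis-ci.org, a token in the environment, and a made-up build id:

{code:java}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class RestartTravisBuild {
    public static void main(String[] args) throws Exception {
        // POST /build/{id}/restart re-runs a finished build (Travis API v3).
        URL url = new URL("https://api.travis-ci.org/build/123456789/restart");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Travis-API-Version", "3");
        conn.setRequestProperty("Authorization", "token " + System.getenv("TRAVIS_TOKEN"));
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(new byte[0]); // empty request body
        }
        System.out.println("HTTP " + conn.getResponseCode());
    }
}
{code}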

Re: [ANNOUNCE] Andrey Zagrebin becomes a Flink committer

2019-08-16 Thread Terry Wang
Congratulations Andrey!

Best,
Terry Wang



> 在 2019年8月15日,下午9:27,Hequn Cheng  写道:
> 
> Congratulations Andrey!
> 
> On Thu, Aug 15, 2019 at 3:30 PM Fabian Hueske  wrote:
> Congrats Andrey!
> 
> Am Do., 15. Aug. 2019 um 07:58 Uhr schrieb Gary Yao :
> 
> > Congratulations Andrey, well deserved!
> >
> > Best,
> > Gary
> >
> > On Thu, Aug 15, 2019 at 7:50 AM Bowen Li  wrote:
> >
> > > Congratulations Andrey!
> > >
> > > On Wed, Aug 14, 2019 at 10:18 PM Rong Rong  wrote:
> > >
> > >> Congratulations Andrey!
> > >>
> > >> On Wed, Aug 14, 2019 at 10:14 PM chaojianok  wrote:
> > >>
> > >> > Congratulations Andrey!
> > >> > At 2019-08-14 21:26:37, "Till Rohrmann"  wrote:
> > >> > >Hi everyone,
> > >> > >
> > >> > >I'm very happy to announce that Andrey Zagrebin accepted the offer of
> > >> the
> > >> > >Flink PMC to become a committer of the Flink project.
> > >> > >
> > >> > >Andrey has been an active community member for more than 15 months.
> > He
> > >> has
> > >> > >helped shaping numerous features such as State TTL, FRocksDB release,
> > >> > >Shuffle service abstraction, FLIP-1, result partition management and
> > >> > >various fixes/improvements. He's also frequently helping out on the
> > >> > >user@f.a.o mailing lists.
> > >> > >
> > >> > >Congratulations Andrey!
> > >> > >
> > >> > >Best, Till
> > >> > >(on behalf of the Flink PMC)
> > >> >
> > >>
> > >
> >



Re: [ANNOUNCE] Hequn becomes a Flink committer

2019-08-07 Thread Terry Wang
Congratulations Hequn, well deserved!

Best,
Terry Wang



> 在 2019年8月7日,下午9:16,Oytun Tez  写道:
> 
> Congratulations Hequn!
> 
> ---
> Oytun Tez
> 
> M O T A W O R D
> The World's Fastest Human Translation Platform.
> oy...@motaword.com — www.motaword.com
> 
> On Wed, Aug 7, 2019 at 9:03 AM Jark Wu  wrote:
> Congratulations Hequn! It's great to have you in the community!
> 
> 
> 
> On Wed, 7 Aug 2019 at 21:00, Fabian Hueske  wrote:
> Congratulations Hequn!
> 
> Am Mi., 7. Aug. 2019 um 14:50 Uhr schrieb Robert Metzger :
> Congratulations!
> 
> On Wed, Aug 7, 2019 at 1:09 PM highfei2...@126.com
> wrote:
> 
> > Congrats Hequn!
> >
> > Best,
> > Jeff Yang
> >
> >
> >  Original Message 
> > Subject: Re: [ANNOUNCE] Hequn becomes a Flink committer
> > From: Piotr Nowojski
> > To: JingsongLee
> > CC: Biao Liu ,Zhu Zhu ,Zili Chen ,Jeff Zhang ,Paul Lam ,jincheng sun ,dev 
> > ,user
> >
> >
> > Congratulations :)
> >
> > On 7 Aug 2019, at 12:09, JingsongLee  wrote:
> >
> > Congrats Hequn!
> >
> > Best,
> > Jingsong Lee
> >
> > --
> > From: Biao Liu
> > Send Time: 2019年8月7日(星期三) 12:05
> > To: Zhu Zhu
> > Cc: Zili Chen; Jeff Zhang; Paul Lam; jincheng sun; dev; user
> > Subject:Re: [ANNOUNCE] Hequn becomes a Flink committer
> >
> > Congrats Hequn!
> >
> > Thanks,
> > Biao /'bɪ.aʊ/
> >
> >
> >
> > On Wed, Aug 7, 2019 at 6:00 PM Zhu Zhu  wrote:
> > Congratulations to Hequn!
> >
> > Thanks,
> > Zhu Zhu
> >
> > Zili Chen  于2019年8月7日周三 下午5:16写道:
> > Congrats Hequn!
> >
> > Best,
> > tison.
> >
> >
> > Jeff Zhang  于2019年8月7日周三 下午5:14写道:
> > Congrats Hequn!
> >
> > Paul Lam  于2019年8月7日周三 下午5:08写道:
> > Congrats Hequn! Well deserved!
> >
> > Best,
> > Paul Lam
> >
> > 在 2019年8月7日,16:28,jincheng sun  写道:
> >
> > Hi everyone,
> >
> > I'm very happy to announce that Hequn accepted the offer of the Flink PMC
> > to become a committer of the Flink project.
> >
> > Hequn has been contributing to Flink for many years, mainly working on
> > SQL/Table API features. He's also frequently helping out on the user
> > mailing lists and helping check/vote the release.
> >
> > Congratulations Hequn!
> >
> > Best, Jincheng
> > (on behalf of the Flink PMC)
> >
> >
> >
> > --
> > Best Regards
> >
> > Jeff Zhang
> >
> >
> >



[jira] [Created] (FLINK-13610) Refactor HiveTableSource Test use sql query and remove HiveInputFormatTest

2019-08-07 Thread Terry Wang (JIRA)
Terry Wang created FLINK-13610:
--

 Summary: Refactor HiveTableSource Test use sql query and remove 
HiveInputFormatTest
 Key: FLINK-13610
 URL: https://issues.apache.org/jira/browse/FLINK-13610
 Project: Flink
  Issue Type: Test
  Components: Connectors / Hive
Affects Versions: 1.10.0
Reporter: Terry Wang


Since HiveTableSource is mainly used in SQL queries and the blink planner now
supports running SQL queries, it's time to change HiveTableSourceTest to use SQL
queries instead of the Table API.

HiveTableInputFormat is tested in HiveTableSourceTest and there is redundant
code; this ticket also aims to move some test code over from
HiveInputFormatTest and then remove that file.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


Re: flink-connector-hive uses third-party repositories

2019-07-29 Thread Terry Wang
Hi Martijn:
I looked into this problem and found that javax.jms:jms:jar:1.1 can be
found at https://mvnrepository.com/artifact/javax.jms/jms .
javax.jms:jms:jar:1.1 is imported by HiveRunner with test scope, while the other
artifact, org.pentaho:pentaho-aggdesigner-algorithm, is not imported. I uploaded
a dependency tree file as an attachment.
Martijn, can you provide an environment that reproduces the compile problem?

I think we can discuss this under https://issues.apache.org/jira/browse/FLINK-13475.

Cheers,
Terry Wang

> 在 2019年7月29日,下午9:09,Till Rohrmann  写道:
> 
> Thanks a lot for raising this issue Martijn. I've pulled in Rui who works
> on the flink-connector-hive module and Kurt.
> 
> @Kurt and @Rui, do you think it's doable to decrease the dependency on
> third party repositories? In any case, I've create a JIRA issue for it [1].
> 
> [1] https://issues.apache.org/jira/browse/FLINK-13475
> 
> Cheers,
> Till
> 
> On Mon, Jul 29, 2019 at 11:26 AM Visser, M.J.H. (Martijn)
>  wrote:
> 
>> Hi all,
>> 
>> While trying to get the latest Flink master compiled within a corporate
>> environment, I've noticed that the flink-connector-hive keeps failing. It
>> appears that it uses dependencies which are not located in Maven Central,
>> but on third-party repositories like Conjar. Examples are the org.pentaho
>> pentaho-aggdesigner-algorithm and the javax.jms:jms:jar:1.1
>> 
>> With a little bit of digging, I can probably get it work within the
>> corporate environment, but it might be interesting to reduce the dependency
>> on third-party maven repositories. Calcite did something similar
>> https://issues.apache.org/jira/browse/CALCITE-605
>> https://jira.apache.org/jira/browse/CALCITE-1474
>> 
>> Best regards,
>> 
>> Martijn
>> 
>> 
>> -
>> ATTENTION:
>> The information in this e-mail is confidential and only meant for the
>> intended recipient. If you are not the intended recipient, don't use or
>> disclose it in any way. Please let the sender know and delete the message
>> immediately.
>> -
>> 



Re: [ANNOUNCE] Kete Young is now part of the Flink PMC

2019-07-23 Thread Terry Wang
Congrats Kurt!
Well deserved!

> 在 2019年7月23日,下午6:07,Biao Liu  写道:
> 
> Congrats Kurt!
> Well deserved!
> 
> Danny Chan  于2019年7月23日周二 下午6:01写道:
> 
>> Congratulations Kurt, Well deserved.
>> 
>> Best,
>> Danny Chan
>> 在 2019年7月23日 +0800 PM5:24,dev@flink.apache.org,写道:
>>> 
>>> Congratulations Kurt, Well deserved.
>> 



Re: [ANNOUNCE] Jiangjie (Becket) Qin has been added as a committer to the Flink project

2019-07-18 Thread Terry Wang
Congratulations Becket!

> 在 2019年7月18日,下午5:09,Dawid Wysakowicz  写道:
> 
> Congratulations Becket! Good to have you onboard!
> 
> On 18/07/2019 10:56, Till Rohrmann wrote:
>> Congrats Becket!
>> 
>> On Thu, Jul 18, 2019 at 10:52 AM Jeff Zhang  wrote:
>> 
>>> Congratulations Becket!
>>> 
>>> Xu Forward  于2019年7月18日周四 下午4:39写道:
>>> 
 Congratulations Becket! Well deserved.
 
 
 Cheers,
 
 forward
 
 Kurt Young  于2019年7月18日周四 下午4:20写道:
 
> Congrats Becket!
> 
> Best,
> Kurt
> 
> 
> On Thu, Jul 18, 2019 at 4:12 PM JingsongLee
> wrote:
> 
>> Congratulations Becket!
>> 
>> Best, Jingsong Lee
>> 
>> 
>> --
>> From:Congxian Qiu 
>> Send Time:2019年7月18日(星期四) 16:09
>> To:dev@flink.apache.org 
>> Subject:Re: [ANNOUNCE] Jiangjie (Becket) Qin has been added as a
> committer
>> to the Flink project
>> 
>> Congratulations Becket! Well deserved.
>> 
>> Best,
>> Congxian
>> 
>> 
>> Jark Wu  于2019年7月18日周四 下午4:03写道:
>> 
>>> Congratulations Becket! Well deserved.
>>> 
>>> Cheers,
>>> Jark
>>> 
>>> On Thu, 18 Jul 2019 at 15:56, Paul Lam 
 wrote:
 Congrats Becket!
 
 Best,
 Paul Lam
 
> 在 2019年7月18日,15:41,Robert Metzger  写道:
> 
> Hi all,
> 
> I'm excited to announce that Jiangjie (Becket) Qin just became
>>> a
>> Flink
> committer!
> 
> Congratulations Becket!
> 
> Best,
> Robert (on behalf of the Flink PMC)
 
>>> 
>>> --
>>> Best Regards
>>> 
>>> Jeff Zhang
>>> 
> 



Re: [ANNOUNCE] Jincheng Sun is now part of the Flink PMC

2019-06-24 Thread Terry Wang
Congratulations Jincheng!

> 在 2019年6月24日,下午11:08,Robert Metzger  写道:
> 
> Hi all,
> 
> On behalf of the Flink PMC, I'm happy to announce that Jincheng Sun is now
> part of the Apache Flink Project Management Committee (PMC).
> 
> Jincheng has been a committer since July 2017. He has been very active on
> Flink's Table API / SQL component, as well as helping with releases.
> 
> Congratulations & Welcome Jincheng!
> 
> Best,
> Robert



Re: [DISCUSS] Deprecate previous Python APIs

2019-06-12 Thread Terry Wang
+1 for deprecation. It’s very reasonable.

> 在 2019年6月12日,下午5:32,Till Rohrmann  写道:
> 
> +1 for deprecation.
> 
> Cheers,
> Till
> 
> On Wed, Jun 12, 2019 at 4:31 AM Hequn Cheng  wrote:
> +1 on the proposal!
> Maintaining only one Python API is helpful for users and contributors.
> 
> Best, Hequn
> 
> On Wed, Jun 12, 2019 at 9:41 AM Jark Wu  wrote:
> +1 and looking forward to the new Python API world.
> 
> Best,
> Jark
> 
> On Wed, 12 Jun 2019 at 09:22, Becket Qin  wrote:
> +1 on deprecating the old Python API in 1.9 release.
> 
> Thanks,
> 
> Jiangjie (Becket) Qin
> 
> On Wed, Jun 12, 2019 at 9:07 AM Dian Fu  wrote:
> +1 for this proposal.
> 
> Regards,
> Dian
> 
>> 在 2019年6月12日,上午8:24,jincheng sun  写道:
>> 
>> big +1 for the proposal.
>> 
>> We will soon complete all the Python API functional development of the 1.9 
>> release, the development of UDFs will be carried out. After the support of 
>> UDFs is completed, it will be very natural to support Datastream API. 
>> 
>> If all of us agree with this proposal, I believe that for the 1.10 release, 
>> it is possible to complete support both UDFs and DataStream API. And we will 
>> do our best to make the 1.10 release that contains the Python DataStream API 
>> support. 
>> 
>> So, great thanks to @Stephan for this proposal!
>> 
>> Best,
>> Jincheng
>> 
>> Zili Chen  于2019年6月11日周二 下午10:56写道:
>> +1
>> 
>> Best,
>> tison.
>> 
>> 
>> zhijiang  于2019年6月11日周二 下午10:52写道:
>> It is reasonable as stephan explained. +1 from my side! 
>> --
>> From: Jeff Zhang
>> Send Time: 2019年6月11日(星期二) 22:11
>> To: Stephan Ewen
>> Cc: user; dev
>> Subject:Re: [DISCUSS] Deprecate previous Python APIs
>>  
>> +1
>> 
>> Stephan Ewen  于2019年6月11日周二 下午9:30写道:
>> 
>> > Hi all!
>> >
>> > I would suggest to deprecating the existing python APIs for DataSet and
>> > DataStream API with the 1.9 release.
>> >
>> > Background is that there is a new Python API under development.
>> > The new Python API is initially against the Table API. Flink 1.9 will
>> > support Table API programs without UDFs, 1.10 is planned to support UDFs.
>> > Future versions would support also the DataStream API.
>> >
>> > In the long term, Flink should have one Python API for DataStream and
>> > Table APIs. We should not maintain multiple different implementations and
>> > confuse users that way.
>> > Given that the existing Python APIs are a bit limited and not under active
>> > development, I would suggest to deprecate them in favor of the new API.
>> >
>> > Best,
>> > Stephan
>> >
>> >
>> 
>> -- 
>> Best Regards
>> 
>> Jeff Zhang
>> 
> 



Re: [ANNOUNCE] Apache Flink-shaded 7.0 released

2019-05-30 Thread Terry Wang
Wow~ Glad to see this!
Thanks Jincheng and Chesnay for your effort!

> 在 2019年5月31日,下午1:53,jincheng sun  写道:
> 
> Hi all,
> 
> The Apache Flink community is very happy to announce the release of Apache
> Flink-shaded 7.0.
> 
> The flink-shaded project contains a number of shaded dependencies for
> Apache Flink.
> 
> Apache Flink® is an open-source stream processing framework for
> distributed, high-performing, always-available, and accurate data streaming
> applications.
> 
> The release is available for download at:
> https://flink.apache.org/downloads.html
> 
> The full release notes are available in Jira:
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12345226&styleName=Html&projectId=12315522
> 
> 
> We would like to thank @Chesnay Schepler for his help
> in officially publishing this release, and thank all contributors of the
> Apache Flink community who made this release possible!
> 
> Regards,
> Jincheng



Re: [Discuss] FLIP-43: Savepoint Connector

2019-05-29 Thread Terry Wang
Hi Seth,
Big +1 from my side. I like this idea. IMO, it's better to choose another FLIP
name instead of 'connector', which is a little confusing.

> 在 2019年5月30日,上午10:37,Paul Lam  写道:
> 
> Hi Seth,
> 
> +1 from my side. 
> 
> I was wondering if we can add a reader method to provide a full view of the 
> states instead of the state of a specific operator? It would be helpful when 
> there is some unrestored states of a previously removed operator in the 
> savepoint.
> 
> Best,
> Paul Lam
> 
>> 在 2019年5月30日,09:55,vino yang  写道:
>> 
>> Hi Seth,
>> 
>> Glad to see this FLIP, big +1 for this feature!
>> 
>> Best,
>> Vino
>> 
>> Seth Wiesman  于2019年5月30日周四 上午7:14写道:
>> 
>>> Hey Everyone!
>>> 
>>> Gordon and I have been discussing adding a savepoint connector to flink
>>> for reading, writing and modifying savepoints.
>>> 
>>> This is useful for:
>>> 
>>>   - Analyzing state for interesting patterns
>>>   - Troubleshooting or auditing jobs by checking for discrepancies in state
>>>   - Bootstrapping state for new applications
>>>   - Modifying savepoints, such as:
>>>     - Changing max parallelism
>>>     - Making breaking schema changes
>>>     - Correcting invalid state
>>> 
>>> We are looking forward to your feedback!
>>> 
>>> This is the FLIP:
>>> 
>>> 
>>> https://cwiki.apache.org/confluence/display/FLINK/FLIP-43%3A+Savepoint+Connector
>>> 
>>> Seth
>>> 
>>> 
>>> 
> 
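
A sketch of reading operator state along the lines proposed here. The class and method names follow what later shipped as the State Processor API in Flink 1.9; the savepoint path, operator uid and state name are made up:

{code:java}
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.runtime.state.memory.MemoryStateBackend;
import org.apache.flink.state.api.ExistingSavepoint;
import org.apache.flink.state.api.Savepoint;

public class ReadSavepointSketch {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
        // Load an existing savepoint for analysis (the path is made up).
        ExistingSavepoint savepoint = Savepoint.load(
                env, "hdfs:///savepoints/savepoint-1", new MemoryStateBackend());
        // Read the list state registered under the given operator uid and name.
        DataSet<String> bufferedElements = savepoint.readListState(
                "my-operator-uid", "buffered-elements", Types.STRING);
        bufferedElements.print();
    }
}
{code}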


