Re: Which Flink engine versions do Connectors support?

2023-10-27 Thread Xianxun Ye
Hi Gordon,

Thanks for the information. That is exactly what I need.

And I have responded to the Kafka connector RC vote mail.


Best regards,
Xianxun

> On Oct 28, 2023, at 04:13, Tzu-Li (Gordon) Tai wrote:
> 
> Hi Xianxun,
> 
> You can find the list of supported Flink versions for each connector here:
> https://flink.apache.org/downloads/#apache-flink-connectors
> 
> Specifically for the Kafka connector, we're in the process of releasing a new 
> version for the connector that works with Flink 1.18.
> The release candidate vote thread is here if you want to test that out: 
> https://lists.apache.org/thread/35gjflv4j2pp2h9oy5syj2vdfpotg486
> 
> Thanks,
> Gordon
> 
> 
> On Fri, Oct 27, 2023 at 12:57 PM Xianxun Ye wrote:
>> 
>> Hello Team, 
>> 
>> After the release of Flink 1.18, I found that most connectors had been 
>> externalized, e.g. the Kafka, ES, HBase, JDBC, and Pulsar connectors. But I 
>> didn't find any documentation or code indicating which versions of Flink these 
>> connectors work with. 
>> 
>> 
>> Best regards,
>> Xianxun
>> 



Which Flink engine versions do Connectors support?

2023-10-27 Thread Xianxun Ye

Hello Team, 

After the release of Flink 1.18, I found that most connectors had been 
externalized, e.g. the Kafka, ES, HBase, JDBC, and Pulsar connectors. But I 
didn't find any documentation or code indicating which versions of Flink these 
connectors work with. 


Best regards,
Xianxun
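Since externalization, connector releases are versioned independently of the engine, and the supported Flink version is encoded as a suffix of the artifact version. A minimal Maven sketch of what the dependency looks like (the exact 3.0.x connector version is an assumption here; check the downloads page Gordon linked for the real coordinates):

```xml
<!-- Externalized connector version pattern: <connector.version>-<flink.version>.
     "3.0.1" is a placeholder; use the version listed for your Flink release. -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-kafka</artifactId>
  <version>3.0.1-1.18</version>
</dependency>
```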



Re: Serialize and Parse ResolvedExpression

2023-09-07 Thread Xianxun Ye







Hi Ron,

I want to parse the functions (unix_timestamp, from_unixtime, etc.) at task runtime.






  



Best regards,
Xianxun



 


  
---- Replied Message ----
From: liu ron
Date: 09/8/2023 10:01
To: user@flink.apache.org
Subject: Re: Serialize and Parse ResolvedExpression

Hi Xianxun,

Do you mean the unix_timestamp() is parsed to the time when the query is compiled in streaming mode?

Best,
Ron

On Thu, Sep 7, 2023 at 18:19, Xianxun Ye wrote:








Hi Team,

I want to serialize the ResolvedExpression to a String or byte[], transmit it into the LookupFunction, and parse it back into a ResolvedExpression there.

For my case:

Select * from left_stream as s
join dim_table for system_time as of s.proc_time as d
  on s.id = d.id
where d.pt > cast(from_unixtime(unix_timestamp() - 3*60*60, 'yyyy-MM-dd-HH') as string)

The filter will be pushed down to the LookupFunction, and I hope that when a task fails over, the time to filter on is calculated from the failover time, not the time when the task was first started. For example:

9.1 10:00:00 launch the task; // the LookupFunction will load the data after 9.1 07:00:00
9.5 10:00:00 the task fails over; // hope it loads the data after 9.5 07:00:00

Is there an approach to make ResolvedExpression serializable?






  

Best regards,
Xianxun


 












Serialize and Parse ResolvedExpression

2023-09-06 Thread Xianxun Ye







Hi Team,

I want to serialize the ResolvedExpression to a String or byte[], transmit it into the LookupFunction, and parse it back into a ResolvedExpression there.

For my case:

Select * from left_stream as s
join dim_table for system_time as of s.proc_time as d
  on s.id = d.id
where d.pt > cast(from_unixtime(unix_timestamp() - 3*60*60, 'yyyy-MM-dd-HH') as string)

The filter will be pushed down to the LookupFunction, and I hope that when a task fails over, the time to filter on is calculated from the failover time, not the time when the task was first started. For example:

9.1 10:00:00 launch the task; // the LookupFunction will load the data after 9.1 07:00:00
9.5 10:00:00 the task fails over; // hope it loads the data after 9.5 07:00:00

Is there an approach to make ResolvedExpression serializable?






  

Best regards,
Xianxun
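For reference, the time arithmetic that the pushed-down filter performs can be sketched in plain Java (a self-contained illustration, not the Flink API; the three-hour offset and the 'yyyy-MM-dd-HH' pattern come from the query above). Evaluating it on each lookup, rather than once at plan time, is what would make the filter track the failover time:

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class PartitionFilterSketch {

    // Mirrors from_unixtime(unix_timestamp() - 3*60*60, 'yyyy-MM-dd-HH'):
    // subtract three hours from "now" and format it as a partition value.
    // Recomputing this at lookup time (instead of freezing it at plan time)
    // is what makes the bound follow a restart/failover.
    static String partitionLowerBound(long nowEpochSeconds, ZoneOffset zone) {
        return DateTimeFormatter.ofPattern("yyyy-MM-dd-HH")
                .withZone(zone)
                .format(Instant.ofEpochSecond(nowEpochSeconds - 3 * 60 * 60));
    }

    public static void main(String[] args) {
        // 2023-09-01T10:00:00Z minus 3h -> partition "2023-09-01-07"
        System.out.println(partitionLowerBound(1693562400L, ZoneOffset.UTC));
    }
}
```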


 






Re: [ANNOUNCE] Flink Table Store Joins Apache Incubator as Apache Paimon(incubating)

2023-03-27 Thread Xianxun Ye







Congratulations!






  

Best regards,
Xianxun


 


On 03/27/2023 22:51, Samrat Deb wrote:


congratulationsBests,SamratOn Mon, Mar 27, 2023 at 7:19 PM Yanfei Lei  wrote: Congratulations! Best Regards, Yanfei ramkrishna vasudevan  于2023年3月27日周一 21:46写道: Congratulations !!! On Mon, Mar 27, 2023 at 2:54 PM Yu Li  wrote: Dear Flinkers, As you may have noticed, we are pleased to announce that Flink Table Store has joined the Apache Incubator as a separate project called Apache Paimon(incubating) [1] [2] [3]. The new project still aims at building a streaming data lake platform for high-speed data ingestion, change data tracking and efficient real-time analytics, with the vision of supporting a larger ecosystem and establishing a vibrant and neutral open source community. We would like to thank everyone for their great support and efforts for the Flink Table Store project, and warmly welcome everyone to join the development and activities of the new project. Apache Flink will continue to be one of the first-class citizens supported by Paimon, and we believe that the Flink and Paimon communities will maintain close cooperation. 亲爱的Flinkers, 正如您可能已经注意到的,我们很高兴地宣布,Flink Table Store 已经正式加入 Apache 孵化器独立孵化 [1] [2] [3]。新项目的名字是 Apache Paimon(incubating),仍致力于打造一个支持高速数据摄入、流式数据订阅和高效实时分析的新一代流式湖仓平台。此外,新项目将支持更加丰富的生态,并建立一个充满活力和中立的开源社区。 在这里我们要感谢大家对 Flink Table Store 项目的大力支持和投入,并热烈欢迎大家加入新项目的开发和社区活动。Apache Flink 将继续作为 Paimon 支持的主力计算引擎之一,我们也相信 Flink 和 Paimon 社区将继续保持密切合作。 Best Regards, Yu (on behalf of the Apache Flink PMC and Apache Paimon PPMC) 致礼, 李钰(谨代表 Apache Flink PMC 和 Apache Paimon PPMC) [1] https://paimon.apache.org/ [2] https://github.com/apache/incubator-paimon [3] https://cwiki.apache.org/confluence/display/INCUBATOR/PaimonProposal