Hybrid Source stops processing files after processing 128 SourceFactories

2022-07-25 Thread Benenson, Michael via user
Hi, folks

I have tried the fix for FLINK-27479 
for Hybrid Source from https://github.com/apache/flink/pull/20215 in Flink 1.14.3.

It works fine, but Flink stops processing files after processing 128 
SourceFactories. I have run this program a few times, starting without a 
savepoint, and each time the program hangs after processing 128 
SourceFactories. The program does not crash or terminate, but it stops processing files.

My program is like the Hybrid Source example: reading multiple files, and then 
reading from Kafka.

In my case the program reads a few hundred directories from S3 that contain 
snappy files, so for each directory it creates a separate 
HybridSource.SourceFactory, and the last one is the SourceFactory for reading 
from Kafka.
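
For reference, the construction looks roughly like this - a sketch with hypothetical paths and Kafka settings rather than my exact code (1.15-style connector APIs; TextLineInputFormat stands in for the actual snappy format):

```
import java.util.List;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.base.source.hybrid.HybridSource;
import org.apache.flink.connector.file.src.FileSource;
import org.apache.flink.connector.file.src.reader.TextLineInputFormat;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.core.fs.Path;

public class BuildHybridSource {

    static HybridSource<String> build(List<Path> s3Dirs) {
        // One bounded FileSource per S3 directory, added in listing order.
        HybridSource.HybridSourceBuilder<String, ?> builder =
                HybridSource.builder(fileSource(s3Dirs.get(0)));
        for (int i = 1; i < s3Dirs.size(); i++) {
            builder = builder.addSource(fileSource(s3Dirs.get(i)));
        }
        // The last SourceFactory: the unbounded Kafka source (hypothetical broker/topic).
        KafkaSource<String> kafka = KafkaSource.<String>builder()
                .setBootstrapServers("broker:9092")
                .setTopics("sbseg-qbo-clickstream")
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();
        return builder.addSource(kafka).build();
    }

    private static FileSource<String> fileSource(Path dir) {
        return FileSource.forRecordStreamFormat(new TextLineInputFormat(), dir).build();
    }
}
```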

Any idea what could be wrong? Is it a known restriction that there should be 
no more than 128 SourceFactories?
I have the program running now, so I could collect any additional info to 
clarify the cause of the problem.

Here are the last few lines from the JobManager before the program stops processing 
files.

2022/07/26 01:02:35.248 INFO  o.a.f.c.f.s.i.StaticFileSplitEnumerator - No more 
splits available for subtask 0
2022/07/26 01:02:36.249 INFO  c.i.strmprocess.hybrid.ReadS3Hybrid1 - Reading 
input data from path 
s3://idl-kafka-connect-ued-raw-uw2-data-lake-e2e/data/topics/sbseg-qbo-clickstream/d_20220715-0800
 for 2022-07-15T08:00:00Z
2022/07/26 01:02:36.618 INFO  o.a.f.c.b.s.h.HybridSourceSplitEnumerator - 
Starting enumerator for sourceIndex=128
2022/07/26 01:02:36.619 INFO  o.a.f.r.s.c.SourceCoordinator - Source Source: 
hybrid-source received split request from parallel task 1
2022/07/26 01:02:36.619 INFO  o.a.f.r.s.c.SourceCoordinator - Source Source: 
hybrid-source received split request from parallel task 2
2022/07/26 01:02:36.619 INFO  o.a.f.r.s.c.SourceCoordinator - Source Source: 
hybrid-source received split request from parallel task 1




Re: Re: How to correctly use external database connections in Flink

2022-07-25 Thread lxk
Thanks.
I am currently using a direct connection, and I do not close the PreparedStatement or ResultSet either, yet I have never run into a memory leak. Do you know the reason behind this?

At 2022-07-25 13:53:42, "Lijie Wang" wrote:
>Hi,
>In my experience, when using a connection pool you need to close the Statement/ResultSet promptly at the very least; otherwise the query results stay cached and you get a memory leak.
>
>Best,
>Lijie
>
>lxk7...@163.com wrote on Sat, Jul 23, 2022 at 15:34:
>
>>
>> In our current project we need real-time lookups against an external database. The main real-time stream is on the order of millions of records per day, and the external stores available are MySQL, ClickHouse,
>> and a Redis cache.
>> Right now the data is landed into ClickHouse in real time, and Flink does real-time lookups
>> against ClickHouse. (I know ClickHouse's concurrency is weak, but it is all we have right now, and we need to store tens of millions of rows.)
>> I tested two approaches:
>>
>> 1. Querying through a JDBC connection pool; I tried both the Druid and C3P0 pools, but the program always hit an OOM after running for a while (possibly misuse on my part). Inspecting heap dumps showed that the pooled approach retains a lot of data, so in the end I did not use it. Also, I never explicitly closed connections when using the pool inside Flink; I only closed them in the close() method.
>> 2. Obtaining connections via DriverManager. In my tests so far this runs stably with no OOM, and I do not close the connection either.
>>
>> Questions: 1. What is the correct way to use an external database connection inside Flink? With a pool, my understanding is that connection management is the pool's job, so there is no need to close explicitly. Likewise, since a real-time program queries continuously, the connection stays occupied and need not be closed. In short, whether pooled or direct, do I need to close the connection in the invoke() method, or only in the close() method?
>> 2. Besides caching, are there better ways to optimize this kind of real-time lookup, or other designs that could replace it?
>>
>>
>> lxk7...@163.com
>>
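
For what it's worth, a minimal sketch of the lifecycle Lijie's advice implies (hypothetical lookup table, JDBC URL, and key/value types): hold the Connection for the task's lifetime, close each query's PreparedStatement/ResultSet immediately via try-with-resources, and release the Connection only in close().

```
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;

public class ClickHouseLookup extends RichMapFunction<String, String> {

    private transient Connection conn;

    @Override
    public void open(Configuration parameters) throws Exception {
        // One connection per task, held open for the task's lifetime.
        conn = DriverManager.getConnection("jdbc:clickhouse://host:8123/db"); // hypothetical URL
    }

    @Override
    public String map(String key) throws Exception {
        // Statement and ResultSet are closed after every lookup, so their
        // cached results cannot pile up on the heap.
        try (PreparedStatement ps =
                conn.prepareStatement("SELECT v FROM dim WHERE k = ?")) { // hypothetical table
            ps.setString(1, key);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString(1) : null;
            }
        }
    }

    @Override
    public void close() throws Exception {
        if (conn != null) {
            conn.close(); // the connection itself is released exactly once, here
        }
    }
}
```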


Re: Flink application high availability

2022-07-25 Thread Zhanghao Chen
For cold standby, an external job-management service can take a savepoint periodically and copy it to the HDFS cluster of the other site; on failure, you restart the job on the other site from that savepoint.

Best,
Zhanghao Chen
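
A minimal sketch of such an external trigger (assuming the standard JobManager REST API; the JobManager address and job ID are hypothetical, and the savepoint directory either points directly at the DR cluster's HDFS or is copied over afterwards, e.g. with hadoop distcp):

```
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PeriodicSavepointTrigger {

    public static void main(String[] args) throws Exception {
        String jobManager = "http://primary-jm:8081"; // hypothetical address
        String jobId = "00000000000000000000000000000000"; // hypothetical job ID
        String body = "{\"target-directory\": \"hdfs://dr-cluster/flink/savepoints\","
                + " \"cancel-job\": false}";

        // POST /jobs/:jobid/savepoints triggers an asynchronous savepoint.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(jobManager + "/jobs/" + jobId + "/savepoints"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        // The response carries a trigger id; poll
        // /jobs/:jobid/savepoints/:triggerid until the savepoint completes.
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```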

From: andrew <15021959...@163.com>
Sent: Monday, July 25, 2022 10:05:39 PM
To: user-zh 
Subject: Flink application high availability

Dear Flink:
  Hello!
We have a requirement: the jobs on our real-time Flink platform are critical to downstream users and must not fail. We plan to build a disaster-recovery real-time big-data cluster (Kafka/YARN/HDFS) and deploy the same Flink jobs on it, as a hot or cold standby. The downstream business systems are not deployed active-active. My questions are:
   1. When the primary cluster fails and we switch to the DR cluster:
  many of the real-time jobs carry intermediate state, so once the primary cluster fails, how does the DR cluster pick up the latest state to continue the computation?
   2. Once the primary cluster recovers, how do we migrate data back from the jobs that were switched over to the DR cluster?


Does the community have any reference cases we could use for test validation? Thanks!


Why this example does not save anything to file?

2022-07-25 Thread podunk
If I understand correctly, this is how I can save to CSV:

https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/filesystem/#full-example

So my code is (read from a file, save to a file):

package flinkCSV;

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class flinkCSV {

    public static void main(String[] args) throws Exception {

        // register and create table
        EnvironmentSettings settings = EnvironmentSettings
                .newInstance()
                //.inStreamingMode()
                .inBatchMode()
                .build();

        final TableEnvironment tEnv = TableEnvironment.create(settings);

        tEnv.executeSql("CREATE TABLE Table1 ("
                + "    column_name1 STRING, "
                + "    column_name2 DOUBLE "
                + "    ) WITH ( "
                + "    'connector.type' = 'filesystem', "
                + "    'connector.path' = 'file:///C:/temp/test4.txt', "
                + "    'format.type' = 'csv' "
                + "    )");

        tEnv.sqlQuery("SELECT COUNT(*) AS Table1_result FROM Table1")
                .execute()
                .print();

        tEnv.executeSql("CREATE TABLE fs_table ("
                + "    column_nameA STRING, "
                + "    column_nameB DOUBLE "
                + "    ) WITH ( "
                + "    'connector'='filesystem', "
                + "    'path'='file:///C:/temp/test5.txt', "
                + "    'format'='csv', "
                + "    'sink.partition-commit.delay'='1 s', "
                + "    'sink.partition-commit.policy.kind'='success-file'"
                + "    )");

        tEnv.executeSql("INSERT INTO fs_table SELECT column_name1, column_name2 FROM Table1");

        tEnv.sqlQuery("SELECT COUNT(*) AS fs_table_result FROM fs_table")
                .execute()
                .print();
    }
}


Source file (test4.txt) is:


aa; 23
bb; 657.9
cc; 55


test5.txt is not created, and the SELECT from fs_table returns null.
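
One thing worth checking - a guess on my side, not something confirmed in this thread: executeSql() submits the INSERT job asynchronously, so the final SELECT can run (and main() can return) before the write to test5.txt completes. Replacing the INSERT line above with an awaited version rules that out:

```
// Block until the asynchronous INSERT job has finished before querying the sink.
tEnv.executeSql("INSERT INTO fs_table SELECT column_name1, column_name2 FROM Table1")
        .await();
```

(Separately, and also only an observation: the sample file is ';'-delimited, while the csv format's default field delimiter is ',', so an explicit field-delimiter option may be needed for the rows to parse as two columns.)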


Re: Does Table API connector, csv, has some option to ignore some columns

2022-07-25 Thread podunk
Couldn't this work the way readCsvFile did with the "includeFields" option? That would be nice.

CSV is just a text file, and headers are not required (though they can help humans).


Sent: Tuesday, July 12, 2022 at 2:48 PM
From: "yuxia" 
To: "podunk" 
Cc: "User" 
Subject: Re: Does Table API connector, csv, has some option to ignore some columns



For the JSON format, you only need to define in the Flink DDL the partial set of columns to be selected.

But for the CSV format this is not supported. In a CSV file with no header, how could the incomplete column list defined in the Flink DDL be mapped onto the original fields in the file? That is why you need to declare all the columns, so that the mapping can be done positionally. If there is a header, the mapping could be done, and that would meet your requirement; the current implementation, however, does not cover that case.
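
To illustrate the workaround this implies - a sketch with hypothetical column names and path, given some TableEnvironment tEnv: declare every field in file order, then drop the unwanted ones in the query rather than in the schema.

```
// Every field in the CSV must be declared, in file order; col3 is unused
// downstream but still has to appear in the schema.
tEnv.executeSql(
        "CREATE TABLE src ("
                + "  col1 STRING,"
                + "  col2 DOUBLE,"
                + "  col3 STRING"
                + ") WITH ("
                + "  'connector' = 'filesystem',"
                + "  'path' = 'file:///path/to/data.csv',"
                + "  'format' = 'csv'"
                + ")");

// The "ignoring" happens as a projection in the query, not in the schema.
tEnv.sqlQuery("SELECT col1, col2 FROM src").execute().print();
```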

Best regards,
Yuxia


From: "podunk" 
To: "User" 
Sent: Tuesday, July 12, 2022, 5:13:05 PM
Subject: Re: Re: Does Table API connector, csv, has some option to ignore some columns



This is really surprising.

When you import data from a file, you rarely need to import everything from it; most often you need just a few columns.

So a program that reads the file should be able to do this - it is the ABC of working with data.

 

Often the suggestion is "you can write your own script". Sure, I can. I could write the entire program here - from scratch.

But I use a ready-made program precisely to avoid writing my own scripts.


Sent: Tuesday, July 12, 2022 at 12:24 AM
From: "Alexander Fedulov" 
To: pod...@gmx.com
Cc: "user" 
Subject: Re: Re: Does Table API connector, csv, has some option to ignore some columns


Hi podunk,

no, this is currently not possible:
> Currently, the CSV schema is derived from table schema. [1]

So the Table schema is used to define how Jackson CSV parses the lines and hence needs to be complete.

[1] https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/formats/csv/

Best,
Alexander Fedulov

On Mon, Jul 11, 2022 at 5:43 PM  wrote:

No, that is not what I meant.

I said 'Does Table API connector, CSV, has some option to ignore some columns in source file?'



Sent: Monday, July 11, 2022 at 5:28 PM
From: "Xuyang" 
To: pod...@gmx.com
Cc: user@flink.apache.org
Subject: Re:Re: Does Table API connector, csv, has some option to ignore some columns



Hi, did you mean `insert into table1 select col1, col2, col3 ... from table2`?


If this doesn't meet your requirement, what about using a UDF to customize what you want at runtime?



--

    Best!

    Xuyang

At 2022-07-11 16:10:00, pod...@gmx.com wrote:



I want to control what I insert into the table, not what I get from the table.


Sent: Monday, July 11, 2022 at 3:37 AM
From: "Shengkai Fang" 
To: pod...@gmx.com
Cc: "user" 
Subject: Re: Does Table API connector, csv, has some option to ignore some columns


Hi,

In Flink SQL, you can select the columns that you want in the query. For example, you can use

```
SELECT col_a, col_b FROM some_table;
```

Best,
Shengkai

wrote on Sat, Jul 9, 2022 at 01:48:




Does the Table API connector, CSV, have some option to ignore some columns in the source file?

For instance, read only the first, second, ninth... but not the others?

 

Or any other trick?



CREATE TABLE some_table (
  some_id BIGINT,
  ...
) WITH (
 'format' = 'csv',
 ...
)

Questions regarding JobManagerWatermarkTracker on AWS Kinesis

2022-07-25 Thread Peter Schrott
Hi there!

I have a Flink job (v1.13.2, AWS managed) which reads from Kinesis (AWS 
managed, 4 shards).

For reasons, the shards are not partitioned properly (at the moment). So I 
wanted to make use of watermarks (BoundedOutOfOrdernessTimestampExtractor) and 
the JobManagerWatermarkTracker to avoid skew in the sources. The job is 
running with parallelism 4.

I added the tracker as follows:

JobManagerWatermarkTracker watermarkTracker =
    new JobManagerWatermarkTracker("watermark-tracker-" + sourceName);
consumer.setWatermarkTracker(watermarkTracker);
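
(For context, the full source wiring looks roughly like this - a sketch with a hypothetical stream name, deserialization schema, and already-populated consumer Properties:)

```
FlinkKinesisConsumer<Event> consumer = new FlinkKinesisConsumer<>(
        "my-stream", new EventDeserializationSchema(), consumerConfig);

// Per-shard timestamps/watermarks; same timestamp field as in the map function below.
consumer.setPeriodicWatermarkAssigner(
        new BoundedOutOfOrdernessTimestampExtractor<Event>(Time.seconds(5)) { // bound assumed
            @Override
            public long extractTimestamp(Event e) {
                return e.getTimestamp();
            }
        });

// Global watermark alignment across subtasks, coordinated via the job manager.
consumer.setWatermarkTracker(new JobManagerWatermarkTracker("watermark-tracker-my-stream"));
```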

I have implemented a naive map function to track the latest consumed event 
timestamp (per parallel subtask) with Flink metrics:

public class MetricsMapper extends RichMapFunction<Event, Event> {

  // primitive long: a transient Long would be null again after the function
  // is deserialized on the task manager
  private transient long latestEventTimestamp = 0L;

  @Override
  public void open(Configuration config) {
    getRuntimeContext()
      .getMetricGroup()
      .addGroup("kinesisanalytics")
      .addGroup("Function", this.getClass().getName())
      .gauge("latestEventTimestamp", (Gauge<Long>) () -> latestEventTimestamp);
  }

  @Override
  public Event map(Event e) throws Exception {
    // same timestamp as used in the implementation of
    // BoundedOutOfOrdernessTimestampExtractor
    this.latestEventTimestamp = e.getTimestamp();
    return e;
  }
}

Using these metrics I can see that there is a skew of roughly 1 second among my 
shards. I even tried reducing ConsumerConfigConstants.WATERMARK_SYNC_MILLIS to 
100 ms, but this did not have any impact on the skew of the event timestamps. In 
fact, the monitored skew is the same as when not using the watermark tracker.

Am I using the watermark tracker wrong? Or is there even something wrong with my 
naive monitoring?

Help and suggestions welcome. 

Best,
Peter

Re: How to get the job startup time

2022-07-25 Thread Weihua Hu
Hi, there are indeed not many metric points or logs for this today. In our experience, you need to insert additional logging and metric points into the code path to support such benchmarking.
Best,
Weihua
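
For illustration, one ad-hoc measurement of the kind Weihua describes - a sketch that assumes a DataStream job and only measures the time from submission until the job reports RUNNING:

```
import org.apache.flink.api.common.JobStatus;
import org.apache.flink.core.execution.JobClient;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StartupTimer {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.fromElements(1, 2, 3).print(); // placeholder pipeline

        long submitTs = System.currentTimeMillis();
        // executeAsync() returns immediately with a JobClient handle.
        JobClient client = env.executeAsync("startup-benchmark");

        JobStatus status;
        do {
            Thread.sleep(10);
            status = client.getJobStatus().get();
        } while (status != JobStatus.RUNNING && !status.isTerminalState());

        System.out.println("Submission to " + status + ": "
                + (System.currentTimeMillis() - submitTs) + " ms");
    }
}
```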


On Fri, Jul 22, 2022 at 6:50 PM 邹璨  wrote:

> Hi,
> I have a question I would like to ask:
> Our project needs to optimize job startup time and benchmark it. Researching this, I found that the blog post below did similar measurements:
> https://flink.apache.org/2022/01/04/scheduler-performance-part-one.html
> But I do not know how the times in it were obtained - is there a corresponding metric or log, or can they only be observed manually?
>
> Thanks!
>


Re: Flink on YARN job dies and keeps restarting

2022-07-25 Thread Weihua Hu
Check whether the JobManager was OOM-killed because of insufficient memory. If you have more logs, please post them as well.

Best,
Weihua


On Mon, Jul 18, 2022 at 8:41 PM SmileSmile  wrote:

> hi all,
> We hit the following scenario: Flink on YARN, parallelism 3000, and the job contains multiple agg operations. Whenever the job recovers from a checkpoint
> or savepoint it reliably fails to recover and keeps restarting.
> The JM reports: org.apache.flink.runtime.entrypoint.ClusterEntrypoint [] -
> RECEIVED SIGNAL 15: SIGTERM. Shutting down as requested.
>
> Are there any good ideas for troubleshooting this?


Flink SQL: computing state changes of a feature tag

2022-07-25 Thread andrew
Dear Flink:
One requirement: can this be implemented in Flink SQL with a UDF - capturing changes in a tag value?
For example: if the current user changes from a low-end user to a mid-end user, or from a mid-end user to a high-end user - whenever the user's state changes, the output should mark the user's state as 1, and otherwise as 0.


Is there a good way to implement this?
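
For illustration, one way to express this outside a pure-SQL UDF - a sketch with hypothetical types and field names that keeps the previous tag in keyed state:

```
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

// Input: (userId, tag), e.g. ("u1", "mid-end"); output: (userId, changedFlag).
public class TagChangeFlag
        extends KeyedProcessFunction<String, Tuple2<String, String>, Tuple2<String, Integer>> {

    private transient ValueState<String> lastTag;

    @Override
    public void open(Configuration parameters) {
        lastTag = getRuntimeContext().getState(
                new ValueStateDescriptor<>("last-tag", String.class));
    }

    @Override
    public void processElement(
            Tuple2<String, String> in,
            Context ctx,
            Collector<Tuple2<String, Integer>> out) throws Exception {
        String prev = lastTag.value();
        // Flag 1 only when the tag differs from the previously seen value.
        int changed = (prev != null && !prev.equals(in.f1)) ? 1 : 0;
        lastTag.update(in.f1);
        out.collect(Tuple2.of(in.f0, changed));
    }
}
```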

Flink application high availability

2022-07-25 Thread andrew
Dear Flink:
  Hello!
We have a requirement: the jobs on our real-time Flink platform are critical to downstream users and must not fail. We plan to build a disaster-recovery real-time big-data cluster (Kafka/YARN/HDFS) and deploy the same Flink jobs on it, as a hot or cold standby. The downstream business systems are not deployed active-active. My questions are:
   1. When the primary cluster fails and we switch to the DR cluster:
  many of the real-time jobs carry intermediate state, so once the primary cluster fails, how does the DR cluster pick up the latest state to continue the computation?
   2. Once the primary cluster recovers, how do we migrate data back from the jobs that were switched over to the DR cluster?


Does the community have any reference cases we could use for test validation? Thanks!

Re: Re: [ANNOUNCE] Apache Flink 1.15.1 released

2022-07-25 Thread podunk
I've added the 'taskmanager.resource-id' option to the config file and it seems to work. Thank you!


Sent: Tuesday, July 12, 2022 at 12:02 PM
From: "Gabor Somogyi" 
To: pod...@gmx.com
Cc: "user" 
Subject: Re: Re: [ANNOUNCE] Apache Flink 1.15.1 released


In order to provide a hotfix, please set "taskmanager.resource-id" to something that doesn't contain any special characters.

G


On Tue, Jul 12, 2022 at 11:59 AM Gabor Somogyi  wrote:


Flink tried to create the following dir: tm_localhost:50329-fc0146

A colon is allowed on Linux but not on Windows, and that's the reason for the exception.


BR,

G


On Tue, Jul 12, 2022 at 11:30 AM  wrote:

...

2022-07-12 11:25:08,448 INFO  akka.remote.Remoting [] - Remoting started; listening on addresses :[akka.tcp://flink@localhost:50329]
2022-07-12 11:25:08,658 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils    [] - Actor system started at akka.tcp://flink@localhost:50329
2022-07-12 11:25:08,683 ERROR org.apache.flink.runtime.taskexecutor.TaskManagerRunner  [] - Terminating TaskManagerRunner with exit code 1.
org.apache.flink.util.FlinkException: Failed to start the TaskManagerRunner.
    at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManager(TaskManagerRunner.java:483) ~[flink-dist-1.15.1.jar:1.15.1]
    at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.lambda$runTaskManagerProcessSecurely$5(TaskManagerRunner.java:525) ~[flink-dist-1.15.1.jar:1.15.1]
    at org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:28) ~[flink-dist-1.15.1.jar:1.15.1]
    at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManagerProcessSecurely(TaskManagerRunner.java:525) [flink-dist-1.15.1.jar:1.15.1]
    at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManagerProcessSecurely(TaskManagerRunner.java:505) [flink-dist-1.15.1.jar:1.15.1]
    at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.main(TaskManagerRunner.java:463) [flink-dist-1.15.1.jar:1.15.1]
Caused by: java.io.IOException: Could not create the working directory C:\Users\MIKE~1\AppData\Local\Temp\tm_localhost:50329-fc0146.
    at org.apache.flink.runtime.entrypoint.WorkingDirectory.createDirectory(WorkingDirectory.java:58) ~[flink-dist-1.15.1.jar:1.15.1]
    at org.apache.flink.runtime.entrypoint.WorkingDirectory.<init>(WorkingDirectory.java:39) ~[flink-dist-1.15.1.jar:1.15.1]
    at org.apache.flink.runtime.entrypoint.WorkingDirectory.create(WorkingDirectory.java:88) ~[flink-dist-1.15.1.jar:1.15.1]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypointUtils.lambda$createTaskManagerWorkingDirectory$0(ClusterEntrypointUtils.java:152) ~[flink-dist-1.15.1.jar:1.15.1]
    at org.apache.flink.runtime.entrypoint.DeterminismEnvelope.map(DeterminismEnvelope.java:49) ~[flink-dist-1.15.1.jar:1.15.1]
    at org.apache.flink.runtime.entrypoint.ClusterEntrypointUtils.createTaskManagerWorkingDirectory(ClusterEntrypointUtils.java:150) ~[flink-dist-1.15.1.jar:1.15.1]
    at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.startTaskManagerRunnerServices(TaskManagerRunner.java:210) ~[flink-dist-1.15.1.jar:1.15.1]
    at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.start(TaskManagerRunner.java:288) ~[flink-dist-1.15.1.jar:1.15.1]
    at org.apache.flink.runtime.taskexecutor.TaskManagerRunner.runTaskManager(TaskManagerRunner.java:481) ~[flink-dist-1.15.1.jar:1.15.1]
    ... 5 more
2022-07-12 11:25:08,700 INFO  org.apache.flink.runtime.rpc.akka.AkkaRpcService [] - Stopping Akka RPC service.
2022-07-12 11:25:08,820 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator    [] - Shutting down remote daemon.
2022-07-12 11:25:08,823 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator    [] - Remote daemon shut down; proceeding with flushing remote transports.
2022-07-12 11:25:08,870 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator    [] - Remoting shut down.


Another log file:


WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.jboss.netty.util.internal.ByteBufferUtil (file:/C:/Users/MIKE~1/AppData/Local/Temp/flink-rpc-akka_0dcbf78d-8e4f-4f57-ae73-8cf2bdb0bb61.jar) to method java.nio.DirectByteBuffer.cleaner()
WARNING: Please consider reporting this to the maintainers of org.jboss.netty.util.internal.ByteBufferUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release


Sent: Monday, July 11, 2022 at 11:36 PM
From: "Alexander Fedulov" 
To: "user" 
Cc: pod...@gmx.com
Subject: Re: Re: [ANNOUNCE] Apache Flink 1.15.1 released

Hi podunk,

please share the exceptions that you find in the log/ folder of your Flink distribution.
The TaskManager 

[Request Review and Approval] Grab on the "Powered By" Page

2022-07-25 Thread Karan Kamath
Hello Team,

My teams at Grab deploy and operate Flink as a
platform at Southeast Asia data scale for our engineers, data scientists,
analysts, and other data practitioners, and I'd like us to be featured on
the https://flink.apache.org/poweredby.html page with the following text
and attached logo.

"Grab is a leading superapp in Southeast Asia. It provides everyday
services like Deliveries, Mobility, Financial Services, and More. Grab
deploys Flink applications for use cases ranging from online feature
engineering, ad event tracking, surge and ETA calculation to realtime
metrics and monitoring."

(@Thompson (cc'ed here) can provide guidance and approval on the wording
from Grab PR/Legal should this proposal be acceptable.)

Best,
Karan.




Re: [ANNOUNCE] Apache Flink Kubernetes Operator 1.1.0 released

2022-07-25 Thread Jing Ge
Congrats! Thank you all!

Best regards,
Jing

On Mon, Jul 25, 2022 at 7:51 AM Px New <15701181132mr@gmail.com> wrote:

> 
>
> Yang Wang wrote on Mon, Jul 25, 2022 at 10:55:
>
> > Congrats! Thanks Gyula for driving this release, and thanks to all
> > contributors!
> >
> >
> > Best,
> > Yang
> >
> > Gyula Fóra wrote on Mon, Jul 25, 2022 at 10:44:
> >
> > > The Apache Flink community is very happy to announce the release of
> > Apache
> > > Flink Kubernetes Operator 1.1.0.
> > >
> > > The Flink Kubernetes Operator allows users to manage their Apache Flink
> > > applications and their lifecycle through native k8s tooling like
> kubectl.
> > >
> > > Please check out the release blog post for an overview of the release:
> > >
> > >
> >
> https://flink.apache.org/news/2022/07/25/release-kubernetes-operator-1.1.0.html
> > >
> > > The release is available for download at:
> > > https://flink.apache.org/downloads.html
> > >
> > > Maven artifacts for Flink Kubernetes Operator can be found at:
> > >
> > >
> >
> https://search.maven.org/artifact/org.apache.flink/flink-kubernetes-operator
> > >
> > > Official Docker image for the Flink Kubernetes Operator can be found
> at:
> > > https://hub.docker.com/r/apache/flink-kubernetes-operator
> > >
> > > The full release notes are available in Jira:
> > >
> > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12351723
> > >
> > > We would like to thank all contributors of the Apache Flink community
> who
> > > made this release possible!
> > >
> > > Regards,
> > > Gyula Fora
> > >
> >
>