Re: Identifying a flink dashboard

2023-06-28 Thread Mike Phillips

G'day,

Flink and the dashboard are running in k8s, and I am not on the same
network.

We don't have a VPN into the cluster. (Don't ask)

I am not sure how I would access the dashboard without having a port 
forward.


On 28/06/2023 14:39, Schwalbe Matthias wrote:


Good Morning Mike,

As a quick fix, sort of, you could use an Ingress on nginx-ingress
(instead of the port-forward) and add a sub_filter rule to patch the
HTML response.

I use this to add a tag to the header, and for the Flink dashboard I
experience no glitches.


As to point 3 … you don’t need to expose that Ingress to the
internet, but only to the node IP, so it becomes visible only within
your network, … there are a number of ways of doing it.


I could elaborate a little more, if interested

Hope this helps

Thias

*From:* Mike Phillips
*Sent:* Wednesday, June 28, 2023 3:47 AM

G'day Alex,

Thanks!

1 - hmm maybe beyond my capabilities presently

2 - Yuck! :-) Will look at this

3 - Not possible, the dashboards are not accessible via the internet,
so we use kube and port forward. The URL looks like
http://wobbegong:3/ and the port changes


4 - I think this requires the dashboard be internet accessible?

On Tue, 27 Jun 2023 at 17:21, Alexander Fedulov 
 wrote:


Hi Mike,

no, it is currently hard-coded


https://github.com/apache/flink/blob/master/flink-runtime-web/web-dashboard/src/app/app.component.html#L23

Your options are:

1. Contribute a change to make it configurable

2. Use some browser plugin that allows renaming page titles

3. Always use different ports and bookmark the URLs accordingly

4. Use an Ingress in k8s

Best,

Alex

On Tue, 27 Jun 2023 at 05:58, Mike Phillips
 wrote:

G'day all,

Not sure if this is the correct place but...

We have a number of flink dashboards and it is difficult to
know what dashboard we are looking at.

Is there a configurable way to change the 'Apache Flink
Dashboard' heading on the dashboard?

Or some other way of uniquely identifying what dashboard I am
currently looking at?

Flink is running in k8s and we use kubectl port forwarding to
connect to the dashboard so we can't ID using the URL
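
The port-forward workflow described above looks roughly like the following; the namespace and service name are assumptions, not taken from the thread:

```shell
# Forward the Flink REST/UI port of one cluster to a local port
# (names are illustrative; adjust namespace/service to your setup):
kubectl -n flink port-forward service/flink-rest 8081:8081

# The dashboard is then at http://localhost:8081/ — and since every
# forwarded dashboard looks identical, the URL alone cannot identify
# which cluster you are looking at, which is the problem raised here.
```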




Re: [Slack] Request to upload new invitation link

2023-06-28 Thread yuxia
Hi, Stephen. 
Welcome to join Flink Slack channel. Here's my invitation link: 
https://join.slack.com/t/apache-flink/shared_invite/zt-1y7kmx7te-zUg1yfLdGu3Th9En_p4n~g
 

Best regards, 
Yuxia 


From: "Stephen Chu" 
To: "User" 
Cc: "Satyam Shanker" , "Vaibhav Gosain" 
, "Steve Jiang" 
Sent: Thursday, June 29, 2023 12:49:21 AM 
Subject: [Slack] Request to upload new invitation link 

Hi there, 
I'd love to join the Flink Slack channel, but it seems the link is outdated: 
https://join.slack.com/t/apache-flink/shared_invite/zt-1thin01ch-tYuj6Zwu8qf0QsivHY0anw

Would someone be able to update or send me a new invite link? 

Thanks, 
Stephen 



Re: 回复: Questions regarding adaptive scheduler with YARN and application mode

2023-06-28 Thread Leon Xu
Thank you both. Looks like with the upcoming Flink 1.18 release, we should
build an auto-scaler service to monitor the job and properly adjust the
allocated resources from YARN.


Leon

On Wed, Jun 28, 2023 at 9:08 AM Madan D  wrote:

> Hello Leon,
>
> As described by Chen below, the Adaptive Scheduler doesn't auto-scale a
> Flink job beyond allocating the requested slots based on availability.
> Recently we implemented this with EMR managed scaling combined with the
> adaptive scheduler, since there's no direct support for auto-scaling on YARN in Flink.
>
> If you are running an application on infrastructure similar to AWS EMR,
> you can use scaling policies to scale the cluster up and down based on
> requested slots, but it won't really track incoming traffic, since there's
> no way of adjusting Flink parallelism based on incoming traffic.
>
>
> Regards,
> Madan
>
>
> On Wednesday, 28 June 2023 at 08:43:22 am GMT-7, Chen Zhanghao <
> zhanghao.c...@outlook.com> wrote:
>
>
> Hi Leon,
>
> Adaptive scheduler alone cannot autoscale a Flink job. It simply adjusts
> the parallelism of a job based on available slots [1]. To autoscale a job,
> we further need a policy to suggest the recommended resources for the job
> and a mechanism to adjust the allocated resources of the job (i.e. available
> slots). For K8s standalone application mode, we can use reactive mode
> coupled with K8s HPA, where HPA collects pod metrics and autoscales the
> number of TMs, and the adaptive scheduler rescales the job according to the
> available slots. For YARN application mode, reactive mode is not available.
> However, in the coming 1.18 release, we can declare the desired resources
> through the REST API to adjust the allocated resources of the job via
> FLIP-291 [2], but you still need a policy to suggest the recommended
> resources for the job and call the API, for which you can refer to the
> autoscaler implementation in the Flink K8s operator [3].
>
> [1] Elastic Scaling | Apache Flink
> 
> [2] FLIP-291: Externalized Declarative Resource Management - Apache Flink
> - Apache Software Foundation
> 
> [3] Autoscaler | Apache Flink Kubernetes Operator
> 
>
> Best,
> Zhanghao Chen
> --
> *From:* Leon Xu 
> *Sent:* June 27, 2023 13:41
> *To:* user 
> *Subject:* Questions regarding adaptive scheduler with YARN and application
> mode
>
> Hi Flink users,
>
> I am trying to use Adaptive Scheduler to auto scale our Flink streaming
> jobs (NOT batch job). Our jobs are running on YARN with application mode.
> There isn't much doc around how adaptive scheduler works. So I have some
> questions:
>
>
>1. How does Adaptive Scheduler work with YARN/application mode? If the
>scheduler decides to request more tasks, will it trigger the request to YARN
>while the job is already running?
>
>2. What's the evaluation criterion to trigger a scale-up? Is it
>possible to manually trigger a scale-up for testing purposes?
>
>
> Thanks
>
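
The FLIP-291 flow discussed in this thread (an external policy declaring desired resources to the JobManager) could be sketched as below. This is a hedged sketch: the endpoint path (`/jobs/<jobid>/resource-requirements`) and payload shape follow FLIP-291 as I understand it, and the job/vertex IDs are made up; verify against the Flink 1.18 REST API docs before relying on it.

```python
import json
import urllib.request


def build_resource_requirements(vertex_bounds):
    """Map {vertex_id: (lower, upper)} to the declarative REST payload
    that FLIP-291 describes (per-vertex parallelism bounds)."""
    return {
        vertex_id: {"parallelism": {"lowerBound": lo, "upperBound": hi}}
        for vertex_id, (lo, hi) in vertex_bounds.items()
    }


def declare(jobmanager_url, job_id, payload):
    # PUT the desired resources; the adaptive scheduler then rescales
    # the job once matching slots become (or stop being) available.
    req = urllib.request.Request(
        f"{jobmanager_url}/jobs/{job_id}/resource-requirements",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    return urllib.request.urlopen(req)


# Illustrative vertex ID; a real one comes from the job graph via the REST API.
payload = build_resource_requirements({"bc764cd8ddf7a0cff126f51c16239658": (1, 4)})
print(json.dumps(payload))
```

An external auto-scaler service of the kind Leon describes would compute the bounds from metrics (e.g. consumer lag) and call `declare` periodically.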


Re: Very long launch of the Flink application in BATCH mode

2023-06-28 Thread Vladislav Keda
Hi Shammon,

When I set log.level=DEBUG I have no more logs except *2023-06-21
14:51:30,921 DEBUG
org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager [] -
Trigger heartbeat request.*

The job freezes on stream graph generation. In STREAMING mode the job starts
fast, without such problems.

On Wed, Jun 28, 2023 at 06:44, Shammon FY wrote:

> Hi Brendan,
>
> I think you may need to confirm at which stage the job is blocked: is the
> client submitting the job, is the resourcemanager scheduling the job, or
> are tasks launching in the TM? You may need to provide more information to
> help us figure out the issue.
>
> Best,
> Shammon FY
>
> On Tuesday, June 27, 2023, Weihua Hu  wrote:
>
>> Hi, Brendan
>>
>> It looks like it's invoking your main method referring to the log. You
>> can add more logs in the main method to figure out which part takes too
>> long.
>>
>> Best,
>> Weihua
>>
>>
>> On Tue, Jun 27, 2023 at 5:06 AM Brendan Cortez <
>> brendan.cortez...@gmail.com> wrote:
>>
>>> No, I'm using a collection source + 20 same JDBC lookups + Kafka sink.
>>>
>>> On Mon, 26 Jun 2023 at 19:17, Yaroslav Tkachenko 
>>> wrote:
>>>
 Hey Brendan,

 Do you use a file source by any chance?

 On Mon, Jun 26, 2023 at 4:31 AM Brendan Cortez <
 brendan.cortez...@gmail.com> wrote:

> Hi all!
>
> I'm trying to submit a Flink Job in Application Mode in the Kubernetes
> cluster.
>
> I see some problems when an application has a big number of operators
> (more than 20 same operators) - it freezes for ~6 minutes after
> *2023-06-21 15:46:45,082 WARN
>  org.apache.flink.connector.kafka.sink.KafkaSinkBuilder   [] - 
> Property
> [transaction.timeout.ms ] not specified.
> Setting it to PT1H*
>  and until
>
> *2023-06-21 15:53:20,002 INFO
>  org.apache.flink.streaming.api.graph.StreamGraphGenerator[] - 
> Disabled
> Checkpointing. Checkpointing is not supported and not needed when 
> executing
> jobs in BATCH mode.*(logs in attachment)
>
> When I set log.level=DEBUG, I see only this message each 10 seconds:
> *2023-06-21 14:51:30,921 DEBUG
> org.apache.flink.runtime.resourcemanager.active.ActiveResourceManager [] -
> Trigger heartbeat request.*
>
> Please, could you help me understand the cause of this problem and how
> to fix it. I use the Flink 1.15.3 version.
>
> Thank you in advance!
>
> Best regards,
> Brendan Cortez.
>



[Slack] Request to upload new invitation link

2023-06-28 Thread Stephen Chu
Hi there,

I'd love to join the Flink Slack channel, but it seems the link is
outdated:
https://join.slack.com/t/apache-flink/shared_invite/zt-1thin01ch-tYuj6Zwu8qf0QsivHY0anw

Would someone be able to update or send me a new invite link?

Thanks,
Stephen


Re: 回复: Questions regarding adaptive scheduler with YARN and application mode

2023-06-28 Thread Madan D via user
 Hello Leon,
As described by Chen below, the Adaptive Scheduler doesn't auto-scale a
Flink job beyond allocating the requested slots based on availability.
Recently we implemented this with EMR managed scaling combined with the
adaptive scheduler, since there's no direct support for auto-scaling on
YARN in Flink.

If you are running an application on infrastructure similar to AWS EMR, you
can use scaling policies to scale the cluster up and down based on requested
slots, but it won't really track incoming traffic, since there's no way of
adjusting Flink parallelism based on incoming traffic.

Regards,
Madan

On Wednesday, 28 June 2023 at 08:43:22 am GMT-7, Chen Zhanghao wrote:

Hi Leon,
Adaptive scheduler alone cannot autoscale a Flink job. It simply adjusts the
parallelism of a job based on available slots [1]. To autoscale a job, we
further need a policy to suggest the recommended resources for the job and a
mechanism to adjust the allocated resources of the job (i.e. available
slots). For K8s standalone application mode, we can use reactive mode coupled
with K8s HPA, where HPA collects pod metrics and autoscales the number of TMs,
and the adaptive scheduler rescales the job according to the available slots.
For YARN application mode, reactive mode is not available. However, in the
coming 1.18 release, we can declare the desired resources through the REST API
to adjust the allocated resources of the job via FLIP-291 [2], but you still
need a policy to suggest the recommended resources for the job and call the
API, for which you can refer to the autoscaler implementation in the Flink
K8s operator [3].
[1] Elastic Scaling | Apache Flink
[2] FLIP-291: Externalized Declarative Resource Management - Apache Flink -
Apache Software Foundation
[3] Autoscaler | Apache Flink Kubernetes Operator

Best,
Zhanghao Chen

From: Leon Xu 
Sent: June 27, 2023 13:41
To: user 
Subject: Questions regarding adaptive scheduler with YARN and application mode

Hi Flink users,
I am trying to use Adaptive Scheduler to auto scale our Flink streaming jobs 
(NOT batch job). Our jobs are running on YARN with application mode. There 
isn't much doc around how adaptive scheduler works. So I have some questions:
1. How does Adaptive Scheduler work with YARN/application mode? If the
scheduler decides to request more tasks, will it trigger the request to
YARN while the job is already running?

2. What's the evaluation criterion to trigger a scale-up? Is it possible
to manually trigger a scale-up for testing purposes?

Thanks

回复: Questions regarding adaptive scheduler with YARN and application mode

2023-06-28 Thread Chen Zhanghao
Hi Leon,

Adaptive scheduler alone cannot autoscale a Flink job. It simply adjusts the 
parallelism of a job based on available slots [1]. To autoscale a job, we 
further need a policy to suggest the recommended resources for the job and a 
mechanism to adjust the allocated resources of the job (aka. available slots). 
For K8s standalone application mode, we can use reactive mode coupled with K8s 
HPA, where HPA collects pod metrics and autoscales the number of TMs, and 
adaptive scheduler rescales job according to the available slots. For YARN 
application mode, reactive mode is not available. However, in the coming 1.18 
release, we can declare the desired resources through REST API to adjust the 
allocated resources of the job via FLIP-291 [2], but you still need a policy to 
suggest the recommended resources for the job and call the API, which you can 
refer to the autoscaler implementation in the Flink K8s operator.

[1] Elastic Scaling | Apache Flink
[2] FLIP-291: Externalized Declarative Resource Management - Apache Flink -
Apache Software Foundation
[3] Autoscaler | Apache Flink Kubernetes Operator

Best,
Zhanghao Chen

From: Leon Xu 
Sent: June 27, 2023 13:41
To: user 
Subject: Questions regarding adaptive scheduler with YARN and application mode

Hi Flink users,

I am trying to use Adaptive Scheduler to auto scale our Flink streaming jobs 
(NOT batch job). Our jobs are running on YARN with application mode. There 
isn't much doc around how adaptive scheduler works. So I have some questions:


  1. How does Adaptive Scheduler work with YARN/application mode? If the
scheduler decides to request more tasks, will it trigger the request to YARN
while the job is already running?

  2. What's the evaluation criterion to trigger a scale-up? Is it possible to
manually trigger a scale-up for testing purposes?

Thanks


Re: Kafka source with idleness and alignment stops consuming

2023-06-28 Thread Alexis Sarda-Espinosa
Hello,

just for completeness, I don't see the problem if I assign a different
alignment group to each source, i.e. using only split-level watermark
alignment.

Regards,
Alexis.

On Wed, Jun 28, 2023 at 08:13, haishui wrote:

> Hi,
> I have the same trouble. This is really a bug.
> `shouldWaitForAlignment` needs another change.
>
> By the way, a source will be marked as idle when it has been waiting
> for alignment for a long time. Is this a bug?
>
>
>
>
>
>
> On 2023-06-27 23:25:38, "Alexis Sarda-Espinosa" 
> wrote:
>
> Hello,
>
> I am currently evaluating idleness and alignment with Flink 1.17.1 and the
> externalized Kafka connector. My job has 3 sources whose watermark
> strategies are defined like this:
>
> WatermarkStrategy.forBoundedOutOfOrderness(maxAllowedWatermarkDrift)
> .withIdleness(idleTimeout)
> .withWatermarkAlignment("group", maxAllowedWatermarkDrift,
> Duration.ofSeconds(1L))
>
> The max allowed drift is currently 5 seconds, and my sources have an
> idleTimeout of 1, 1.5, and 5 seconds.
>
> What I observe is that, when I restart the job, all sources publish
> messages, but then 2 of them are marked as idle and never resume. I found
> https://issues.apache.org/jira/browse/FLINK-31632, which should be fixed
> in 1.17.1, but I don't think it's the same issue, my logs don't show
> negative values:
>
> 2023-06-27 15:11:42,927 DEBUG
> org.apache.flink.runtime.source.coordinator.SourceCoordinator:609 - New
> reported watermark=Watermark @ 1687878696690 (2023-06-27 15:11:36.690) from
> subTaskId=1
> 2023-06-27 15:11:43,009 DEBUG
> org.apache.flink.runtime.source.coordinator.SourceCoordinator:609 - New
> reported watermark=Watermark @ 9223372036854775807 (292278994-08-17
> 07:12:55.807) from subTaskId=0
> 2023-06-27 15:11:43,091 DEBUG
> org.apache.flink.runtime.source.coordinator.SourceCoordinator:609 - New
> reported watermark=Watermark @ 9223372036854775807 (292278994-08-17
> 07:12:55.807) from subTaskId=0
> 2023-06-27 15:11:43,116 DEBUG
> org.apache.flink.runtime.source.coordinator.SourceCoordinator:609 - New
> reported watermark=Watermark @ 9223372036854775807 (292278994-08-17
> 07:12:55.807) from subTaskId=0
> 2023-06-27 15:11:43,298 INFO
>  org.apache.flink.runtime.source.coordinator.SourceCoordinator:194 -
> Distributing maxAllowedWatermark=1687878701690 to subTaskIds=[0, 1]
> 2023-06-27 15:11:43,304 INFO
>  org.apache.flink.runtime.source.coordinator.SourceCoordinator:194 -
> Distributing maxAllowedWatermark=1687878701690 to subTaskIds=[0]
> 2023-06-27 15:11:43,306 INFO
>  org.apache.flink.runtime.source.coordinator.SourceCoordinator:194 -
> Distributing maxAllowedWatermark=1687878701690 to subTaskIds=[0]
> 2023-06-27 15:11:43,486 INFO
>  org.apache.flink.runtime.source.coordinator.SourceCoordinator:194 -
> Distributing maxAllowedWatermark=1687878701690 to subTaskIds=[]
> 2023-06-27 15:11:43,489 INFO
>  org.apache.flink.runtime.source.coordinator.SourceCoordinator:194 -
> Distributing maxAllowedWatermark=1687878701690 to subTaskIds=[]
> 2023-06-27 15:11:43,492 INFO
>  org.apache.flink.runtime.source.coordinator.SourceCoordinator:194 -
> Distributing maxAllowedWatermark=1687878701690 to subTaskIds=[]
>
> Does anyone know if I'm missing something or this is really a bug?
>
> Regards,
> Alexis.
>
>


Re: how to get blackhole connector jar

2023-06-28 Thread Hang Ruan
Hi, longfeng,

I checked the blackhole connector documentation [1], and the blackhole
connector is a built-in connector.
If you cannot find this connector in your Flink distribution, you should
check the `flink-table-api-java-bridge` jar for the
`BlackHoleTableSinkFactory` [2].

Best,
Hang

[1]
https://nightlies.apache.org/flink/flink-docs-release-1.12/dev/table/connectors/blackhole.html
[2]
https://github.com/apache/flink/blob/release-1.12/flink-table/flink-table-api-java-bridge/src/main/java/org/apache/flink/table/factories/BlackHoleTableSinkFactory.java

longfeng Xu wrote on Wed, Jun 28, 2023 at 16:09:

> hi guys,
> When using Alibaba Flink (based on Flink 1.12) to run Nexmark's query0,
> it failed: blackhole is not a supported sink connector.
> So how can I upload a blackhole connector, like the other Nexmark
> connectors? Where can I download it?
>
> thanks
>
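
As noted in Hang's reply, blackhole is a built-in table connector rather than a separate jar, so it is declared directly in DDL. A minimal sketch (the table name and schema are illustrative, not from the thread):

```sql
-- Declare a blackhole sink; all rows written to it are discarded.
CREATE TABLE blackhole_sink (
  f0 INT,
  f1 STRING
) WITH (
  'connector' = 'blackhole'
);

-- Typical usage, e.g. for throughput testing a pipeline:
-- INSERT INTO blackhole_sink SELECT id, name FROM source_table;
```

If the DDL still fails with "not a supported sink connector", the vendor distribution may have removed the factory, which is why checking for `BlackHoleTableSinkFactory` on the classpath is the suggested first step.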


how to get blackhole connector jar

2023-06-28 Thread longfeng Xu
hi guys,
When using Alibaba Flink (based on Flink 1.12) to run Nexmark's query0, it
failed: blackhole is not a supported sink connector.
So how can I upload a blackhole connector, like the other Nexmark
connectors? Where can I download it?

thanks


RE: Identifying a flink dashboard

2023-06-28 Thread Schwalbe Matthias
Good Morning Mike,

As a quick fix, sort of, you could use an Ingress on nginx-ingress (instead of
the port-forward) and
Add a sub_filter rule to patch the HTML response.
I use this to add a tag to the header, and for the Flink dashboard I
experience no glitches.

As to point 3 … you don’t need to expose that Ingress to the internet, but
only to the node IP, so it becomes visible only within your network, … there
are a number of ways of doing it.
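
The sub_filter approach Thias describes could look roughly like the following Ingress. This is a sketch under stated assumptions: the resource names, host, namespace, and the rewritten title string are hypothetical, and the exact `<title>` text in the dashboard HTML should be checked against your Flink version first.

```yaml
# Hypothetical nginx-ingress Ingress that patches the dashboard's HTML
# response so each cluster's browser tab/title is distinguishable.
# sub_filter is nginx's ngx_http_sub_module response-rewriting directive.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flink-dashboard          # assumption
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      sub_filter '<title>Apache Flink Web Dashboard</title>'
                 '<title>Flink - staging cluster</title>';
      sub_filter_once on;
spec:
  ingressClassName: nginx
  rules:
    - host: flink-staging.internal.example.com   # assumption
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: flink-rest   # assumption: the JobManager REST service
                port:
                  number: 8081
```

Per point 3, exposing this Ingress only on an internal node IP (rather than a public load balancer) keeps it off the internet while still avoiding the per-session port-forward.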

I could elaborate a little more, if interested

Hope this helps

Thias


From: Mike Phillips 
Sent: Wednesday, June 28, 2023 3:47 AM
To: user@flink.apache.org
Subject: Re: Identifying a flink dashboard


G'day Alex,

Thanks!

1 - hmm maybe beyond my capabilities presently
2 - Yuck! :-) Will look at this
3 - Not possible, the dashboards are not accessible via the internet, so we use 
kube and port forward. The URL looks like http://wobbegong:3/ and the port changes
4 - I think this requires the dashboard be internet accessible?

On Tue, 27 Jun 2023 at 17:21, Alexander Fedulov wrote:
Hi Mike,

no, it is currently hard-coded
https://github.com/apache/flink/blob/master/flink-runtime-web/web-dashboard/src/app/app.component.html#L23

Your options are:
1. Contribute a change to make it configurable
2. Use some browser plugin that allows renaming page titles
3. Always use different ports and bookmark the URLs accordingly
4. Use an Ingress in k8s

Best,
Alex

On Tue, 27 Jun 2023 at 05:58, Mike Phillips wrote:
G'day all,

Not sure if this is the correct place but...
We have a number of flink dashboards and it is difficult to know what dashboard 
we are looking at.
Is there a configurable way to change the 'Apache Flink Dashboard' heading on 
the dashboard?
Or some other way of uniquely identifying what dashboard I am currently looking 
at?
Flink is running in k8s and we use kubectl port forwarding to connect to the 
dashboard so we can't ID using the URL

--
--
Kind Regards

Mike


--
--
Kind Regards

Mike


Re:Kafka source with idleness and alignment stops consuming

2023-06-28 Thread haishui
Hi,
I have the same trouble. This is really a bug. `shouldWaitForAlignment` needs
another change.

By the way, a source will be marked as idle when it has been waiting for
alignment for a long time. Is this a bug?



On 2023-06-27 23:25:38, "Alexis Sarda-Espinosa"  wrote:

Hello,


I am currently evaluating idleness and alignment with Flink 1.17.1 and the 
externalized Kafka connector. My job has 3 sources whose watermark strategies 
are defined like this:


WatermarkStrategy.forBoundedOutOfOrderness(maxAllowedWatermarkDrift)
.withIdleness(idleTimeout)
.withWatermarkAlignment("group", maxAllowedWatermarkDrift, 
Duration.ofSeconds(1L))



The max allowed drift is currently 5 seconds, and my sources have an 
idleTimeout of 1, 1.5, and 5 seconds.


What I observe is that, when I restart the job, all sources publish messages, 
but then 2 of them are marked as idle and never resume. I found 
https://issues.apache.org/jira/browse/FLINK-31632, which should be fixed in 
1.17.1, but I don't think it's the same issue, my logs don't show negative 
values:


2023-06-27 15:11:42,927 DEBUG 
org.apache.flink.runtime.source.coordinator.SourceCoordinator:609 - New 
reported watermark=Watermark @ 1687878696690 (2023-06-27 15:11:36.690) from 
subTaskId=1
2023-06-27 15:11:43,009 DEBUG 
org.apache.flink.runtime.source.coordinator.SourceCoordinator:609 - New 
reported watermark=Watermark @ 9223372036854775807 (292278994-08-17 
07:12:55.807) from subTaskId=0
2023-06-27 15:11:43,091 DEBUG 
org.apache.flink.runtime.source.coordinator.SourceCoordinator:609 - New 
reported watermark=Watermark @ 9223372036854775807 (292278994-08-17 
07:12:55.807) from subTaskId=0
2023-06-27 15:11:43,116 DEBUG 
org.apache.flink.runtime.source.coordinator.SourceCoordinator:609 - New 
reported watermark=Watermark @ 9223372036854775807 (292278994-08-17 
07:12:55.807) from subTaskId=0
2023-06-27 15:11:43,298 INFO  
org.apache.flink.runtime.source.coordinator.SourceCoordinator:194 - 
Distributing maxAllowedWatermark=1687878701690 to subTaskIds=[0, 1]
2023-06-27 15:11:43,304 INFO  
org.apache.flink.runtime.source.coordinator.SourceCoordinator:194 - 
Distributing maxAllowedWatermark=1687878701690 to subTaskIds=[0]
2023-06-27 15:11:43,306 INFO  
org.apache.flink.runtime.source.coordinator.SourceCoordinator:194 - 
Distributing maxAllowedWatermark=1687878701690 to subTaskIds=[0]
2023-06-27 15:11:43,486 INFO  
org.apache.flink.runtime.source.coordinator.SourceCoordinator:194 - 
Distributing maxAllowedWatermark=1687878701690 to subTaskIds=[]
2023-06-27 15:11:43,489 INFO  
org.apache.flink.runtime.source.coordinator.SourceCoordinator:194 - 
Distributing maxAllowedWatermark=1687878701690 to subTaskIds=[]
2023-06-27 15:11:43,492 INFO  
org.apache.flink.runtime.source.coordinator.SourceCoordinator:194 - 
Distributing maxAllowedWatermark=1687878701690 to subTaskIds=[]



Does anyone know if I'm missing something or this is really a bug?


Regards,
Alexis.