Hi Kamal,
In general, tasks are scheduled in topological order, which means the
source tasks will be scheduled first and deployed on different
TMs if there are enough of them. However, as Flink does not guarantee this, in
some special situations such as failover, multiple source tasks may
Hello Shammon,
Thanks for the quick reply.
Is there any way to ensure that only one task of an operator, such as a
source, executes on each task manager?
If an operator's parallelism is set equal to the number of task managers,
and cluster.evenly-spread-out-slots=true is kept, would that work?
Actually there is TCP
Hi Kamal,
Even if `cluster.evenly-spread-out-slots` is set to true, Flink does not
guarantee that multiple tasks of the same operator are never scheduled on
the same task manager; it just means Flink will use
`LeastUtilizationSlotMatchingStrategy` to find a matching slot for each
task.
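For reference, the setting under discussion is a cluster-level entry in flink-conf.yaml, combined with the per-operator parallelism Kamal mentions. A minimal sketch (the slot count is an illustrative value, not from the thread):

```yaml
# flink-conf.yaml -- illustrative sketch
# Prefer the least-utilized TaskManager when picking a slot.
# Best-effort spreading only: NOT a strict one-task-per-TM guarantee,
# e.g. after failover tasks may still co-locate.
cluster.evenly-spread-out-slots: true
taskmanager.numberOfTaskSlots: 4
```

Setting the source operator's parallelism equal to the number of TaskManagers makes even spreading likely under this strategy, but as stated above it remains best-effort.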
As we k
Hi Lars,
You are likely seeing this Kafka client bug:
https://issues.apache.org/jira/browse/KAFKA-13840. The latest versions of
Flink have updated their Kafka clients dependency to include this fix.
Best,
Mason
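If upgrading Flink itself is not an option, one workaround is to override the transitive kafka-clients version in the job's build. This is a hypothetical pom.xml sketch; the version is a placeholder, so check KAFKA-13840 for the actual fix versions:

```xml
<!-- Hypothetical override: force a patched kafka-clients onto the
     classpath ahead of the version pulled in transitively by the
     Flink Kafka connector. PATCHED_VERSION is a placeholder; see
     KAFKA-13840 for the releases that contain the fix. -->
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-clients</artifactId>
  <version>PATCHED_VERSION</version>
</dependency>
```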
On Thu, Jul 20, 2023 at 9:21 AM Lars Skjærven wrote:
Hello,
I experienced
CoordinatorNotAvailableException in my Flink jobs after our Kafka supplier
(Aiven) did a maintenance update of the cluster. This update is performed
by starting up new Kafka nodes, copying over the data, and switching over
internally. The Flink jobs run as expected, with the only
Hi elakiya,
If you want to log some information about the kafka records, you can add
some logs in KafkaRecordEmitter.
If you want to know about the deserialized value, you should add
logs in the Avro format (deserializer).
Best,
Hang
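The pattern Hang describes, logging a record on its way through deserialization, can be sketched without any Flink dependencies as a wrapper around the deserialization step. Everything here (`LoggingDeserializer`, the `Function`-based shape) is illustrative, not the actual `KafkaRecordEmitter` or Flink's `DeserializationSchema` API:

```java
import java.nio.charset.StandardCharsets;
import java.util.function.Function;
import java.util.logging.Logger;

// Hypothetical sketch: wrap the deserialization step so each Kafka
// record's raw bytes and its deserialized value are logged. In a real
// job this logic would live in KafkaRecordEmitter or the Avro
// deserializer, as suggested above.
public class LoggingDeserializer<T> implements Function<byte[], T> {
    private static final Logger LOG =
            Logger.getLogger(LoggingDeserializer.class.getName());
    private final Function<byte[], T> delegate;

    public LoggingDeserializer(Function<byte[], T> delegate) {
        this.delegate = delegate;
    }

    @Override
    public T apply(byte[] record) {
        // Log the raw payload size (avoid dumping full payloads in production).
        LOG.info("Kafka record received (" + record.length + " bytes)");
        T value = delegate.apply(record);
        // Log the deserialized value, analogous to adding logs in the Avro format.
        LOG.info("Deserialized value: " + value);
        return value;
    }

    public static void main(String[] args) {
        Function<byte[], String> parse =
                bytes -> new String(bytes, StandardCharsets.UTF_8);
        LoggingDeserializer<String> logging = new LoggingDeserializer<>(parse);
        String out = logging.apply("hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(out); // prints "hello"
    }
}
```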
elakiya udhayanan 于2023年7月19日周三 19:44写道:
> Hi Team,
>
> I
Hello,
If the property "cluster.evenly-spread-out-slots" is set to true, does Flink
guarantee that multiple tasks of the same operator are never scheduled on
the same task manager? Certainly this will depend on the parallelism value
used for an operator and the number of task slots available.
Like in below
Using EBS as checkpoint storage doesn't work in a distributed environment
if you need to move state between TMs (e.g., for rescaling and
non-local recovery). You'd need something along the lines of RW
multi-attach, with the volumes set up in a smart way; it won't be easy to
set up; I'm not aware
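The usual alternative the reply is arguing toward is checkpoint storage that every JobManager and TaskManager can reach, such as an object store. A minimal sketch, with a placeholder bucket name:

```yaml
# flink-conf.yaml -- checkpoints on shared object storage (S3 as an
# example), reachable from all JMs/TMs, so rescaling and non-local
# recovery can fetch state regardless of which TM wrote it.
state.backend: rocksdb
state.checkpoints.dir: s3://my-bucket/flink-checkpoints  # placeholder bucket
```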