Hi Thomas,
The Debezium docs say: for the connector to detect and process events from a
heartbeat table, you must add the table to the PostgreSQL publication
specified by the publication.name property.
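A minimal sketch of what that can look like, assuming a heartbeat table named public.debezium_heartbeat and a publication named dbz_publication (both names are hypothetical; publication.name and heartbeat.interval.ms are Debezium PostgreSQL connector properties):

```python
# Hypothetical names; substitute your own table and publication.
heartbeat_table = "public.debezium_heartbeat"
publication = "dbz_publication"

# SQL to run once on the PostgreSQL server so the publication
# also carries the heartbeat table's changes to the connector:
add_heartbeat_sql = f"ALTER PUBLICATION {publication} ADD TABLE {heartbeat_table};"

# Connector properties that must refer to the same publication:
connector_props = {
    "publication.name": publication,
    "heartbeat.interval.ms": "10000",  # emit a heartbeat every 10 s
}

print(add_heartbeat_sql)
```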
Hi Phil,
The Kafka configuration keys for SSL may not be correct. You can refer to the
Kafka documentation[1] for the client SSL configuration keys.
[1] https://kafka.apache.org/documentation/#security_configclients
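For what it's worth, the client-side SSL keys from that page typically look like the following (paths and passwords are placeholders); in PyFlink these are passed as plain properties through to the underlying Java Kafka client:

```python
# Illustrative SSL client properties for the Java Kafka client
# (key names from the Kafka security docs; values are placeholders).
ssl_props = {
    "security.protocol": "SSL",
    "ssl.truststore.location": "/path/to/kafka.client.truststore.jks",
    "ssl.truststore.password": "changeit",
    "ssl.keystore.location": "/path/to/kafka.client.keystore.jks",
    "ssl.keystore.password": "changeit",
    "ssl.key.password": "changeit",
}
```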
Best,
Zhongqiang Gong
Phil Stavridis wrote on Fri, May 17, 2024 at 01:44:
> Hi,
>
> I have a
Hi, Niklas.
The Kafka connector version 3.2.0[1] is for Flink 1.19 and it already has a
vote thread[2], but there are not yet enough votes.
Best,
Hang
[1] https://issues.apache.org/jira/browse/FLINK-35138
[2] https://lists.apache.org/thread/7shs2wzb0jkfdyst3mh6d9pn3z1bo93c
Niklas Wilcke
Hi, mete.
As Feng Jin said, I think you could make use of the metric
`currentEmitEventTimeLag`.
Besides that, if you develop your job with the DataStream API, you could
add a new operator to handle it yourself.
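As a sketch, `currentEmitEventTimeLag` is essentially the difference between the time a record leaves the source and the record's event-time timestamp; a custom operator could compute the same quantity itself (the function name below is illustrative, not a Flink API):

```python
import time

def emit_event_time_lag_ms(event_time_ms, emit_time_ms=None):
    """Lag between when a record is emitted and its event-time timestamp."""
    if emit_time_ms is None:
        emit_time_ms = int(time.time() * 1000)  # wall clock at emission
    return emit_time_ms - event_time_ms

# Example: a record stamped 5 s before emission shows 5000 ms of lag.
lag = emit_event_time_lag_ms(event_time_ms=1_000, emit_time_ms=6_000)
```

Exposing such a value through an operator's metric group would let you alert on it the same way as on the built-in source metric.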
Best,
Hang
Feng Jin wrote on Fri, May 17, 2024 at 02:44:
> Hi Mete
>
> You can refer to the
This looks like the same issue as FLINK-34063 / FLINK-33863; you could try upgrading to 1.18.2.
[1] https://issues.apache.org/jira/browse/FLINK-33863
[2] https://issues.apache.org/jira/browse/FLINK-34063
陈叶超 wrote on Thu, May 16, 2024 at 16:38:
>
> After upgrading to Flink 1.18.1, when the job restarts and restores its state, we hit the following error:
> 2024-04-09 13:03:48
> java.lang.Exception: Exception while
Hi,
My pipeline step is something like this:
SingleOutputStreamOperator reducedData =
    data
        .keyBy(new KeySelector())
        .window(TumblingEventTimeWindows.of(Time.seconds(secs)))
        .reduce(new DataReducer())
        .name("reduce");
This works fine for secs =
Hi Mete
You can refer to the metrics provided by the Kafka source connector.
https://nightlies.apache.org/flink/flink-docs-release-1.19/docs/connectors/datastream/kafka/#monitoring
Best,
Feng
On Thu, May 16, 2024 at 7:55 PM mete wrote:
> Hello,
>
> For an sql application using kafka as
Hello mete.
I found this SO article:
https://stackoverflow.com/questions/54293808/measuring-event-time-latency-with-flink-cep
If I'm not mistaken, you can use the Flink metrics system for operators to
get the processing time of an event in an operator.
On 2024/05/16 11:54:44 mete wrote:
> Hello,
>
> For an
Hi Flink Community !
I am using :
* Flink
* Flink CDC Postgres Connector
* scala + sbt
versions are :
* orgApacheKafkaVersion = "3.2.3"
* flinkVersion = "1.19.0"
* flinkKafkaVersion = "3.0.2-1.18"
* flinkConnectorPostgresCdcVersion = "3.0.1"
* debeziumVersion = "1.9.8.Final"
*
Hi. No, I have not changed the protocol.
On Thu, May 16, 2024, 3:20 AM Biao Geng wrote:
> Hi John,
>
> Just want to check, have you ever changed the Kafka protocol in your job
> after switching to the new cluster? The error message shows that it is caused by
> the Kafka client, and there is a similar
Hi Ahmed,
are you aware of a blocker? I'm also a bit confused that, with Flink 1.19 having
been available for a month now, the connectors still aren't. It would be great to get
some insights or maybe a reference to an issue. From looking at the GitHub
repos and the Jira I wasn't able to spot
Hello,
For an SQL application using Kafka as source (and Kafka as sink), what would
be the recommended way to monitor processing delay? For example, I want
to be able to alert if the app has a certain delay compared to some event-time
field in the message.
Best,
Mete
Hi,
I have a PyFlink job that needs to read from a Kafka topic and the
communication with the Kafka broker requires SSL.
I have connected to the Kafka cluster with something like this using just
Python.
from confluent_kafka import Consumer, KafkaException, KafkaError
def
Hello!
I have a Flink Job with CEP pattern.
Pattern example:
// Strict Contiguity
// a b+ c d e
Pattern.begin("a", AfterMatchSkipStrategy.skipPastLastEvent()).where(...)
.next("b").where(...).oneOrMore()
.next("c").where(...)
.next("d").where(...)
After upgrading to Flink 1.18.1, when the job restarts and restores its state, we hit the following error:
2024-04-09 13:03:48
java.lang.Exception: Exception while creating StreamOperatorStateContext.
at
org.apache.flink.streaming.api.operators.StreamTaskStateInitializerImpl.streamOperatorStateContext(StreamTaskStateInitializerImpl.java:258)
at
Hi John,
Just want to check, have you ever changed the Kafka protocol in your job
after switching to the new cluster? The error message shows that it is caused by
the Kafka client, and there is a similar error in this issue