Hi Gabor
There should never have been a dependency on the old connector (or any
remaining state), as I removed everything before deploying the new version.
That’s where my confusion comes from. It crashes when deploying the same
pipeline twice with the same 3.2.0 dependency when reloading fr…
Did you get that indication from the line number, which matches the
implementation in 3.0.2?
I’ll have to check; I cannot find it anywhere on the classpath, and we’re not
using fat jars in application mode. But I see where this is heading. Thanks
for mentioning it!
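For reference, a quick way to check which jar a connector class is actually
loaded from (a minimal sketch; run it inside the job’s main() so it uses the
job’s classloader):

// Minimal diagnostic sketch: print where KafkaSource is actually loaded from.
val probe = "org/apache/flink/connector/kafka/source/KafkaSource.class"
val url = Option(Thread.currentThread().getContextClassLoader.getResource(probe))
println(s"KafkaSource loaded from: ${url.getOrElse("<not on this classloader>")}")

If the printed URL points at an unexpected jar, that is the stale dependency.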
Best,
Dominik Bünzli
Data, Analytics & AI E…
Hi Matthias,
Thank you for your reply!
There should not be a 3.0.x dependency in my Docker image; I only add 3.2.0
explicitly. When connecting to the running container, I also can’t find any
reference to 3.0.x. I reverted the dependency to 3.0.0-1.17 and it works
again. Could it be related…
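For reference, this is roughly how the dependency is declared in sbt (a
sketch; the coordinates are the connector’s published Maven artifact):

// build.sbt -- declare the connector explicitly and force a single version
// in case a transitive dependency drags in 3.0.x:
libraryDependencies += "org.apache.flink" % "flink-connector-kafka" % "3.2.0-1.19"
dependencyOverrides += "org.apache.flink" % "flink-connector-kafka" % "3.2.0-1.19"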
Dear Flink community,
We recently migrated our pipelines from Flink 1.17 to 1.19.0 (and subsequently
to 1.19.1). We source events from Kafka and write enriched events back to
Kafka. I’m currently using the flink-connector-kafka (3.2.0-1.19). When
initially deploying (via the k8s operator), the…
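For context, the job is shaped roughly like this (a minimal sketch, not the
actual code; bootstrap servers, topic names, and the pass-through are
stand-ins):

import org.apache.flink.api.common.eventtime.WatermarkStrategy
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.connector.kafka.sink.{KafkaRecordSerializationSchema, KafkaSink}
import org.apache.flink.connector.kafka.source.KafkaSource
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment

object EnrichJob {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    val source = KafkaSource.builder[String]()
      .setBootstrapServers("kafka:9092")   // stand-in
      .setTopics("events-in")              // stand-in
      .setStartingOffsets(OffsetsInitializer.earliest())
      .setValueOnlyDeserializer(new SimpleStringSchema())
      .build()

    val sink = KafkaSink.builder[String]()
      .setBootstrapServers("kafka:9092")   // stand-in
      .setRecordSerializer(
        KafkaRecordSerializationSchema.builder[String]()
          .setTopic("events-out")          // stand-in
          .setValueSerializationSchema(new SimpleStringSchema())
          .build())
      .build()

    // The enrichment step is omitted here; the stream is passed straight through.
    env.fromSource(source, WatermarkStrategy.noWatermarks[String](), "kafka-source")
      .sinkTo(sink)

    env.execute("enrich-job")
  }
}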
Hi Sigalit
Did you add the factory class name to the META-INF/services folder of the
reporter?
The content of my file is:
org.apache.flink.metrics.custom.NestedGaugePrometheusReporterFactory
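Assuming the standard ServiceLoader layout, that file lives at
src/main/resources/META-INF/services/org.apache.flink.metrics.reporter.MetricReporterFactory,
and the factory it names looks roughly like this (a skeleton only; the actual
reporter logic is not shown in this thread):

package org.apache.flink.metrics.custom

import java.util.Properties
import org.apache.flink.metrics.{Metric, MetricConfig, MetricGroup}
import org.apache.flink.metrics.reporter.{MetricReporter, MetricReporterFactory}

// Skeleton of the factory named in the services file; the reporter body
// is a no-op placeholder here.
class NestedGaugePrometheusReporterFactory extends MetricReporterFactory {
  override def createMetricReporter(properties: Properties): MetricReporter =
    new MetricReporter {
      override def open(config: MetricConfig): Unit = ()
      override def close(): Unit = ()
      override def notifyOfAddedMetric(metric: Metric, name: String, group: MetricGroup): Unit = ()
      override def notifyOfRemovedMetric(metric: Metric, name: String, group: MetricGroup): Unit = ()
    }
}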
Kind regards
Dominik
From: Sigalit Eliazov
Date: Monday, 29 July 2024 a…
Hi Sachin
What exactly does MyReducer do? Can you share some code with us?
Just a wild guess from my side: did you check the watermarking? If the
watermarks aren’t progressing, there’s no way for Flink to know when to emit a
window, and therefore you won’t see any outgoing events.
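To illustrate, a minimal sketch (the Event type and its timestamp field are
stand-ins for your actual record type):

import java.time.Duration
import org.apache.flink.api.common.eventtime.{SerializableTimestampAssigner, WatermarkStrategy}

object WatermarkSketch {
  // Stand-in event type; replace with the actual record type.
  case class Event(key: String, timestampMillis: Long, value: Double)

  // Allow 30s of out-of-orderness so event-time windows can close;
  // withIdleness keeps watermarks moving when some partitions are idle.
  val strategy: WatermarkStrategy[Event] =
    WatermarkStrategy
      .forBoundedOutOfOrderness[Event](Duration.ofSeconds(30))
      .withTimestampAssigner(new SerializableTimestampAssigner[Event] {
        override def extractTimestamp(e: Event, recordTs: Long): Long = e.timestampMillis
      })
      .withIdleness(Duration.ofMinutes(1))

  // Applied before the keyBy/window:
  // stream.assignTimestampsAndWatermarks(strategy)
}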
Kind Regards
Hi Rion
I guess you’re building your own Docker image for the deployment, right?
For switching to Logback I run the following (sbt-docker) when building the
image:
val eclasspath = (Compile / externalDependencyClasspath).value
val logbackClassicJar = eclasspath.files.find(file =>
  file.getName.startsWith("logback-classic")) // assumed predicate: pick the logback-classic jar
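The jars then get copied into Flink’s lib folder when the Dockerfile is
assembled, roughly like this (a sketch, assuming the sbt-docker Dockerfile DSL
and a stock Flink base image; paths and the image tag are assumptions):

// build.sbt -- sketch only
docker / dockerfile := {
  val cp = (Compile / externalDependencyClasspath).value
  val logbackJars = cp.files.filter(f =>
    f.getName.startsWith("logback-classic") || f.getName.startsWith("logback-core"))
  new Dockerfile {
    from("flink:1.19.1-scala_2.12")
    // Put the Logback jars where Flink picks up its logging backend.
    logbackJars.foreach(jar => copy(jar, s"/opt/flink/lib/${jar.getName}"))
  }
}

The stock Log4j SLF4J binding also has to be removed from /opt/flink/lib,
otherwise the two SLF4J backends clash.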
Hi all,
I am currently facing the problem of having a pipeline (DataStream API) where I
need to split a GenericRecord into its fields and then aggregate all the values
of a particular field into 30-minute windows. Therefore, if I were to use only
a keyBy on the field name, I would send all the values…
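What I have in mind is roughly the following split-then-key approach (a
sketch; it assumes numeric field values, the sum is just an example aggregate,
and GenericRecord may additionally need explicit Avro type info for
serialization):

import org.apache.avro.generic.GenericRecord
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows
import org.apache.flink.streaming.api.windowing.time.Time
import scala.collection.JavaConverters._

object PerFieldWindows {
  // `records` must already have timestamps and watermarks assigned.
  def perFieldAggregates(records: DataStream[GenericRecord]): DataStream[(String, Double)] =
    records
      .flatMap { record: GenericRecord =>
        // Emit one (fieldName, value) pair per field, so each field
        // becomes its own key instead of keying the whole record.
        record.getSchema.getFields.asScala.map { f =>
          (f.name(), record.get(f.name()).toString.toDouble)
        }
      }
      .keyBy(_._1)
      .window(TumblingEventTimeWindows.of(Time.minutes(30)))
      .reduce((a, b) => (a._1, a._2 + b._2)) // example aggregate: per-field sum
}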
Good morning Rion,
Are you in session-job mode or application mode? I’ve had some similar issues
(Logback) lately, and it turned out that I also needed to add the additional
dependencies (I guess JsonTemplateLayout is one of them) to the lib folder of
the deployment.
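If it helps: JsonTemplateLayout ships in the log4j-layout-template-json
artifact, so the extra dependency would look roughly like this (the version is
an assumption; align it with the log4j-core that ships in /opt/flink/lib):

// build.sbt -- version assumed
libraryDependencies += "org.apache.logging.log4j" % "log4j-layout-template-json" % "2.17.1"

The jar then has to end up in the deployment’s lib folder, e.g. copied in the
same way as the Logback jars in the sbt-docker snippet above.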
Kind regards
Dominik