Re: Kafka connector exception restarting Flink 1.19 pipeline

2024-09-03 Thread Dominik.Buenzli
Hi Gabor, There should never have been a dependency on the old connector (or any remaining state), as I removed everything before deploying the new version. That's where my confusion comes from. It crashes when deploying the same pipeline twice with the same 3.2.0 dependency when reloading fr…

Re: Kafka connector exception restarting Flink 1.19 pipeline

2024-09-03 Thread Dominik.Buenzli
Did you get that indication via the line number that matches the implementation in 3.0.2? I'll have to check; I cannot find it anywhere in the classpath, and we're not using fat jars in application mode. But I see where this is heading. Thanks for mentioning it! Best, Dominik Bünzli Data, Analytics & AI E…

Re: Kafka connector exception restarting Flink 1.19 pipeline

2024-09-03 Thread Dominik.Buenzli
Hi Matthias, Thank you for your reply! There should not be a dependency on 3.0.x in my Docker image; I only add 3.2.0 explicitly. When connecting to the running container I also can't find any reference to 3.0.x. I reverted the dependency to 3.0.0-1.17 and it works again. Could it be related…

Kafka connector exception restarting Flink 1.19 pipeline

2024-09-02 Thread Dominik.Buenzli
Dear Flink community, We recently migrated our pipelines from Flink 1.17 to 1.19.0 (and subsequently to 1.19.1). We source events from Kafka and write enriched events back to Kafka. I'm currently using flink-connector-kafka (3.2.0-1.19). When initially deploying (via the k8s operator), the…
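For readers following this migration: the Kafka connector is externalized and versioned independently of Flink, with the scheme <connector-version>-<flink-version>, so 3.2.0-1.19 is connector 3.2.0 built against Flink 1.19. A sketch of the Maven coordinates for the version named above (assuming a Maven build; sbt users would translate accordingly):

```xml
<!-- Externalized Kafka connector; version scheme is <connector>-<flink> -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-kafka</artifactId>
  <version>3.2.0-1.19</version>
</dependency>
```

Only one connector version should end up on the classpath; the thread above suggests checking the running container for stray older jars.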

Re: custom metric reporter

2024-07-29 Thread Dominik.Buenzli
Hi Sigalit, Did you add the factory class to the META-INF/services folder of the reporter? The content of my file is: org.apache.flink.metrics.custom.NestedGaugePrometheusReporterFactory Kind regards, Dominik From: Sigalit Eliazov Date: Monday, 29 July 2024 a…
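For context, Flink discovers metric reporter factories via Java's ServiceLoader, so the file must live under META-INF/services and be named after the factory interface, with the implementing class as its content. A sketch using the factory class from the message above:

```
# File: META-INF/services/org.apache.flink.metrics.reporter.MetricReporterFactory
org.apache.flink.metrics.custom.NestedGaugePrometheusReporterFactory
```

If the file name does not exactly match the interface's fully qualified name, the ServiceLoader will silently find nothing and the reporter will never be instantiated.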

Re: How to debug window step in flink

2024-04-07 Thread Dominik.Buenzli
Hi Sachin, What exactly does MyReducer do? Can you provide us with some code? Just a wild guess from my side: did you check the watermarking? If the watermarks aren't progressing, there's no way for Flink to know when to emit a window, and therefore you won't see any outgoing events. Kind Reg…
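To illustrate the watermark hint: with a bounded-out-of-orderness strategy, the watermark trails the largest event timestamp seen so far by the configured bound, and an event-time window can only fire once the watermark passes its end. A minimal plain-Java sketch of that arithmetic (not the Flink API itself; names are hypothetical):

```java
public class WatermarkSketch {
    // Bounded out-of-orderness watermark: trails the max seen event
    // timestamp by the bound (Flink subtracts an extra 1 ms).
    static long watermark(long maxSeenTimestamp, long outOfOrdernessMs) {
        return maxSeenTimestamp - outOfOrdernessMs - 1;
    }

    // An event-time window [start, end) may fire once the watermark
    // reaches end - 1; until then its results are held back.
    static boolean windowCanFire(long windowEnd, long watermark) {
        return watermark >= windowEnd - 1;
    }

    public static void main(String[] args) {
        long bound = 5_000;                            // 5 s out-of-orderness
        long wm = watermark(60_000, bound);            // max event time 60 s
        System.out.println(windowCanFire(60_000, wm)); // window held back
        wm = watermark(65_001, bound);                 // later event advances watermark
        System.out.println(windowCanFire(60_000, wm)); // now the window fires
    }
}
```

This is why a source that stops receiving events (or an idle partition) stalls the watermark and makes windows appear to "hang" with no output.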

Re: Using Custom JSON Formatting with Flink Operator

2024-02-22 Thread Dominik.Buenzli
Hi Rion, I guess you're building your own Docker image for the deployment, right? For switching to Logback I do the following (sbt-docker) when building the image: val eclasspath = (Compile / externalDependencyClasspath).value val logbackClassicJar = eclasspath.files.find(file => fi…
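The truncated sbt snippet above locates the logback-classic jar among the resolved dependencies so it can be copied into the image. Stripped of the build tooling, that step is a plain "find jar by name prefix and copy it into the lib directory" operation; a sketch in plain Java with hypothetical paths:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Optional;
import java.util.stream.Stream;

public class CopyLogbackJar {
    // Find the first jar in depsDir whose name starts with prefix and
    // copy it into libDir (e.g. the Flink image's /opt/flink/lib).
    static Optional<Path> copyJar(Path depsDir, Path libDir, String prefix) throws IOException {
        try (Stream<Path> files = Files.list(depsDir)) {
            Optional<Path> jar = files
                    .filter(p -> p.getFileName().toString().startsWith(prefix))
                    .findFirst();
            if (jar.isEmpty()) {
                return Optional.empty();
            }
            Files.createDirectories(libDir);
            return Optional.of(Files.copy(jar.get(),
                    libDir.resolve(jar.get().getFileName()),
                    StandardCopyOption.REPLACE_EXISTING));
        }
    }
}
```

In the actual build this would run against the sbt `externalDependencyClasspath`, typically for both logback-classic and logback-core, while the default log4j jars are left out of the image.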

Evenly distributing events with same key

2024-02-21 Thread Dominik.Buenzli
Hi all, I am currently facing the problem of having a pipeline (DataStream API) where I need to split a GenericRecord into its fields and then aggregate all the values of a particular field into 30-minute windows. Therefore, if I were to use only a keyBy on the field name, I would send all the values…
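One common approach to the skew described above (every value of one field landing on a single subtask) is two-stage aggregation with a salted key: key by (fieldName, salt) for a pre-aggregating window, then key by fieldName alone to merge the partial results. A sketch of the salting step in plain Java (names hypothetical; in Flink this would feed a KeySelector):

```java
import java.util.concurrent.ThreadLocalRandom;

public class SaltedKey {
    // Stage 1: spread records with the same logical key across
    // `fanout` subkeys so the pre-aggregation parallelizes.
    static String salt(String fieldName, int fanout) {
        int bucket = ThreadLocalRandom.current().nextInt(fanout);
        return fieldName + "#" + bucket;
    }

    // Stage 2: recover the original key so the partial aggregates
    // for all buckets can be merged in a second keyed window.
    static String unsalt(String saltedKey) {
        return saltedKey.substring(0, saltedKey.lastIndexOf('#'));
    }
}
```

The trade-off is a second window operator and slightly delayed results; the win is that the heavy per-field aggregation runs on up to `fanout` subtasks instead of one.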

Re: Using Custom JSON Formatting with Flink Operator

2024-02-21 Thread Dominik.Buenzli
Good morning Rion, Are you in session-job mode or application mode? I've had some similar issues (Logback) lately, and it turned out that I also needed to add the additional dependencies (I guess JsonTemplateLayout is one of them) to the lib folder of the deployment. Kind regards, Dominik From:…
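For context on the JsonTemplateLayout hint: besides placing the log4j-layout-template-json jar on the classpath (e.g. in the deployment's lib folder), the layout has to be selected in the Log4j configuration. A sketch of the relevant lines in a log4j properties file (the template URI here is just an example of a bundled template; adjust to your own):

```properties
appender.console.layout.type = JsonTemplateLayout
appender.console.layout.eventTemplateUri = classpath:EcsLayout.json
```

If the jar is missing from the lib folder, Log4j cannot resolve the JsonTemplateLayout plugin and typically falls back to plain output or logs a configuration error at startup.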