Schema Evolution & Json Schemas

2024-02-22 Thread Salva Alcántara
I'm facing some issues related to schema evolution in combination with JSON Schemas, and I was wondering whether there are any recommended best practices. In particular, I'm using the following code generator: - https://github.com/joelittlejohn/jsonschema2pojo Main gotchas so
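
A minimal sketch of the kind of class jsonschema2pojo typically generates and why Flink can evolve it: Flink's built-in POJO serializer supports adding and removing fields across savepoints, as long as the class stays a valid Flink POJO. The class and field names below are hypothetical, not taken from the thread.

    // Hypothetical POJO in the shape jsonschema2pojo tends to generate (illustrative names).
    // Flink's PojoSerializer supports state schema evolution for added/removed fields,
    // provided the class keeps a public no-arg constructor and public fields or getters/setters.
    public class SensorReading {
        public String sensorId;
        public double temperature;
        // Field added in a later schema version; state restored from an older
        // savepoint picks it up with a default (null) value.
        public Long eventTimeMillis;

        public SensorReading() {}
    }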

Re: Temporal join on rolling aggregate

2024-02-22 Thread mayaming1983
+1 for supporting defining time attributes on views. I once ran into the same problem: I did some regular joins and lost the time attribute, so I could no longer do window operations in subsequent logic. I had to output the joined view to Kafka, read from it again, and define
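
A rough sketch of that Kafka round-trip workaround, using the Table API from Java; the table names, topic, and connector options are placeholders. The point is that the table created over the re-read topic can declare a fresh WATERMARK, restoring the time attribute lost by the regular join.

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class KafkaRoundTripWorkaround {
        public static void main(String[] args) {
            TableEnvironment tEnv =
                    TableEnvironment.create(EnvironmentSettings.inStreamingMode());

            // Kafka-backed sink table: the regular join result is written here,
            // and its time attribute is lost in the process.
            tEnv.executeSql(
                    "CREATE TABLE joined_sink ("
                            + "  id STRING,"
                            + "  amount DOUBLE,"
                            + "  event_time TIMESTAMP(3)"
                            + ") WITH ("
                            + "  'connector' = 'kafka',"
                            + "  'topic' = 'joined-topic',"
                            + "  'properties.bootstrap.servers' = 'localhost:9092',"
                            + "  'format' = 'json')");

            // Re-reading the same topic as a new source table allows declaring a
            // watermark again, so window operations work downstream.
            tEnv.executeSql(
                    "CREATE TABLE joined_source ("
                            + "  id STRING,"
                            + "  amount DOUBLE,"
                            + "  event_time TIMESTAMP(3),"
                            + "  WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND"
                            + ") WITH ("
                            + "  'connector' = 'kafka',"
                            + "  'topic' = 'joined-topic',"
                            + "  'properties.bootstrap.servers' = 'localhost:9092',"
                            + "  'scan.startup.mode' = 'earliest-offset',"
                            + "  'format' = 'json')");
        }
    }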

Re: Using Custom JSON Formatting with Flink Operator

2024-02-22 Thread Rion Williams
Hi again Dominik, So I was able to verify that this particular layout was being copied into the image within the Dockerfile (specifically into /flink/lib/log4j-layout-template-json-2.17.1.jar). Typically we've copied over the actual jar that was built in the image to the appropriate volume for

[ANNOUNCE] Apache flink-connector-jdbc 3.1.2 released

2024-02-22 Thread Sergey Nuyanzin
The Apache Flink community is very happy to announce the release of Apache flink-connector-jdbc 3.1.2. This release is compatible with Apache Flink 1.16, 1.17 and 1.18. Apache Flink® is an open-source stream processing framework for distributed, high-performing, always-available, and accurate

RE: Flink - OpenSearch connector query

2024-02-22 Thread Praveen Chandna via user
Hello, could you please share how to proceed? I need to create an index with custom settings (number of shards, replicas, etc.). Thanks! From: Praveen Chandna via user Sent: Wednesday, February 21, 2024 3:57 PM To: Praveen Chandna via user Subject: Flink - OpenSearch connector query
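
For what it's worth, index settings such as shard and replica counts are usually applied by creating the index before the Flink job writes to it, rather than through the sink itself. Below is a hedged sketch assuming the OpenSearch high-level REST client is on the classpath; the host, index name, and values are placeholders.

    import org.apache.http.HttpHost;
    import org.opensearch.client.RequestOptions;
    import org.opensearch.client.RestClient;
    import org.opensearch.client.RestHighLevelClient;
    import org.opensearch.client.indices.CreateIndexRequest;
    import org.opensearch.common.settings.Settings;

    public class CreateIndexWithSettings {
        public static void main(String[] args) throws Exception {
            try (RestHighLevelClient client = new RestHighLevelClient(
                    RestClient.builder(new HttpHost("localhost", 9200, "http")))) {

                // Pre-create the index with the desired shard/replica counts;
                // the Flink OpenSearch sink then only indexes documents into it.
                CreateIndexRequest request = new CreateIndexRequest("my-index");
                request.settings(Settings.builder()
                        .put("index.number_of_shards", 3)
                        .put("index.number_of_replicas", 1));
                client.indices().create(request, RequestOptions.DEFAULT);
            }
        }
    }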

Re: Using Custom JSON Formatting with Flink Operator

2024-02-22 Thread Rion Williams
Correct! Building a custom image for the deployment and then copying over the jar to a specific directory for the FlinkDeployment to use (as the image contains the legacy Flink jobs/jars as well as those newer ones for the operator). > On Feb 22, 2024, at 6:18 AM, dominik.buen...@swisscom.com

Re: Using Custom JSON Formatting with Flink Operator

2024-02-22 Thread Dominik.Buenzli
Hi Rion I guess you're building your own Docker image for the deployment, right? For switching to Logback I'm doing the following (sbt-docker) when building the image. val eclasspath = (Compile / externalDependencyClasspath).value val logbackClassicJar = eclasspath.files.find(file =>

Re: Using Custom JSON Formatting with Flink Operator

2024-02-22 Thread Rion Williams
Hi Dominik, In this case the jobs are running in application mode. All of these were previously working as expected for the legacy jobs using the same configuration (however those were running via Ververica Platform and targeting Flink 1.15.2). I had somewhat expected similar behaviors but

Re: Temporal join on rolling aggregate

2024-02-22 Thread Gyula Fóra
Posting this to dev as well, as it potentially has some implications for development effort. What seems to be the problem here is that we cannot control/override Timestamps/Watermarks/Primary key on VIEWs. It's understandable that you cannot create a PRIMARY KEY on a view, but I think the temporal
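
For context, a sketch of the shape of the event-time temporal join under discussion, written from Java against hypothetical, already-registered tables: the versioned build side ("rates") needs a primary key and a time attribute/watermark, which is exactly what a VIEW over a rolling aggregate cannot declare today.

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class TemporalJoinShape {
        public static void main(String[] args) {
            TableEnvironment tEnv =
                    TableEnvironment.create(EnvironmentSettings.inStreamingMode());

            // Event-time temporal join: 'rates' must be a versioned table with a
            // primary key and a watermark, requirements a plain VIEW cannot express.
            tEnv.executeSql(
                    "SELECT o.order_id, o.price, r.rate "
                            + "FROM orders AS o "
                            + "JOIN rates FOR SYSTEM_TIME AS OF o.order_time AS r "
                            + "ON o.currency = r.currency");
        }
    }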

Re: Unsubscribe

2024-02-22 Thread Leonard Xu
To unsubscribe from the user-zh@flink.apache.org mailing list, send an email with any content to user-zh-unsubscr...@flink.apache.org. For managing mailing list subscriptions, see [1]. Best regards, [1] https://flink.apache.org/zh/what-is-flink/community/ > On Feb 20, 2024, at 4:36 PM, 任香帅 wrote: > > Unsubscribe

Re: Evenly distributing events with same key

2024-02-22 Thread Xuyang
Hi, Dominik. For data skew, I think you can refer to the tuning and optimization ideas for Flink SQL [1] and implement them manually through the DataStream API. If it is simple processing logic and aggregation operations, you can even use the Flink SQL API directly. Especially the way you manually
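
A rough DataStream API sketch of one common manual skew mitigation, a two-stage ("local-global" style) aggregation with a salted key; the class, key names, bucket count, and window size are made up for illustration.

    import java.util.concurrent.ThreadLocalRandom;
    import org.apache.flink.api.common.typeinfo.Types;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;

    public class SaltedAggregation {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Hypothetical skewed input: (key, count).
            DataStream<Tuple2<String, Long>> events = env
                    .fromElements(Tuple2.of("hotKey", 1L), Tuple2.of("hotKey", 1L), Tuple2.of("otherKey", 1L))
                    .returns(Types.TUPLE(Types.STRING, Types.LONG));

            int saltBuckets = 8;

            // Stage 1: salt the key so records for the same logical key are spread
            // over several subtasks, then pre-aggregate per salted key.
            DataStream<Tuple2<String, Long>> preAggregated = events
                    .map(e -> Tuple2.of(e.f0 + "#" + ThreadLocalRandom.current().nextInt(saltBuckets), e.f1))
                    .returns(Types.TUPLE(Types.STRING, Types.LONG))
                    .keyBy(e -> e.f0)
                    .window(TumblingProcessingTimeWindows.of(Time.seconds(10)))
                    .reduce((a, b) -> Tuple2.of(a.f0, a.f1 + b.f1));

            // Stage 2: strip the salt and merge the partial sums per original key.
            DataStream<Tuple2<String, Long>> result = preAggregated
                    .map(e -> Tuple2.of(e.f0.substring(0, e.f0.indexOf('#')), e.f1))
                    .returns(Types.TUPLE(Types.STRING, Types.LONG))
                    .keyBy(e -> e.f0)
                    .window(TumblingProcessingTimeWindows.of(Time.seconds(10)))
                    .reduce((a, b) -> Tuple2.of(a.f0, a.f1 + b.f1));

            result.print();
            env.execute("salted-aggregation-sketch");
        }
    }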