I'm facing some issues related to schema evolution in combination with the
use of JSON Schemas, and I was wondering whether there are any
recommended best practices.
In particular, I'm using the following code generator:
- https://github.com/joelittlejohn/jsonschema2pojo
Main gotchas so
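(Not from the original thread, but one commonly recommended practice for evolvable JSON Schemas, independent of jsonschema2pojo, is to keep schemas open to unknown fields and to never grow the `required` list, so new optional properties added by producers don't break older consumers. A minimal sketch, with illustrative field names:)

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Event",
  "type": "object",
  "properties": {
    "id":      { "type": "string" },
    "payload": { "type": "string" }
  },
  "required": ["id"],
  "additionalProperties": true
}
```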
+1 for supporting defining time attributes on views.
I once encountered the same problem as yours. I did some regular joins and lost
the time attribute, and hence I could no longer do window operations in
subsequent logic. I had to output the joined view to Kafka, read from it again,
and define
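(The Kafka round trip described above can be sketched in Flink SQL; the table and topic names, fields, and the watermark bound below are placeholders, not from the original thread:)

```sql
-- hypothetical sink: write the join result, which has lost its time attribute
CREATE TABLE joined_out (
  id STRING,
  ts TIMESTAMP(3)
) WITH (
  'connector' = 'kafka',
  'topic' = 'joined-events',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json'
);

-- hypothetical source: read the same topic back; the WATERMARK clause
-- re-establishes ts as an event-time attribute, so windows work again
CREATE TABLE joined_in (
  id STRING,
  ts TIMESTAMP(3),
  WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
) WITH (
  'connector' = 'kafka',
  'topic' = 'joined-events',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json'
);
```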
Hi again Dominik,
So I was able to verify that this particular layout was being copied into
the image within the Dockerfile (specifically into
/flink/lib/log4j-layout-template-json-2.17.1.jar). Typically we've copied
over the actual jar that was built in the image to the appropriate volume
for
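(A minimal Dockerfile sketch of the copy step described above; the base-image tag is an assumption, while the target path matches the one mentioned in the thread:)

```dockerfile
# assumed base image tag; adjust to your Flink version
FROM flink:1.18
# place the JSON template layout jar on Flink's classpath
COPY log4j-layout-template-json-2.17.1.jar /flink/lib/log4j-layout-template-json-2.17.1.jar
```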
The Apache Flink community is very happy to announce the release of Apache
flink-connector-jdbc 3.1.2. This release is compatible with
Apache Flink 1.16, 1.17 and 1.18.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
Hello
Could you please share how to proceed? I need to create an index with
custom settings (number of shards, replicas, etc.).
Thanks!
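(Not an answer from the thread, but as far as I know the Flink OpenSearch sink only writes documents and does not manage shard or replica settings, so a common approach is to create the index up front via the OpenSearch REST API, e.g. `PUT /my-index`, with a settings body like the following; the index name and values are placeholders:)

```json
{
  "settings": {
    "index": {
      "number_of_shards": 3,
      "number_of_replicas": 2
    }
  }
}
```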
From: Praveen Chandna via user
Sent: Wednesday, February 21, 2024 3:57 PM
To: Praveen Chandna via user
Subject: Flink - OpenSearch connector query
Correct! Building a custom image for the deployment and then copying over the
jar to a specific directory for the FlinkDeployment to use (as the image
contains the legacy Flink jobs/jars as well as those newer ones for the
operator).
> On Feb 22, 2024, at 6:18 AM, dominik.buen...@swisscom.com
Hi Rion
I guess you're building your own Docker image for the deployment, right?
For switching to Logback I'm running the following (sbt-docker) when
building the image.
val eclasspath = (Compile / externalDependencyClasspath).value
// locate the logback-classic jar on the external dependency classpath
// (predicate is a plausible completion; the original message is truncated here)
val logbackClassicJar = eclasspath.files.find(file =>
  file.getName.startsWith("logback-classic"))
Hi Dominik,
In this case the jobs are running using application-mode. All of these were
previously working as expected for the legacy jobs using the same configuration
(however those were running via Ververica Platform and targeting Flink 1.15.2).
I had somewhat expected similar behaviors but
Posting this to dev as well, as it potentially has implications for
development effort.
What seems to be the problem here is that we cannot control/override
timestamps, watermarks, or the primary key on VIEWs. It's understandable that
you cannot create a PRIMARY KEY on a view, but I think the temporal
You can unsubscribe from the user-zh@flink.apache.org mailing list by sending
an email with any content to user-zh-unsubscr...@flink.apache.org. For details
on managing your mailing-list subscription, see [1].
Best regards,
[1] https://flink.apache.org/zh/what-is-flink/community/
> On Feb 20, 2024, at 4:36 PM, 任香帅 wrote:
>
> Unsubscribe
Hi, Dominik.
For data skew, I think you can refer to the tuning and optimization ideas in
Flink SQL [1] and implement them manually through the DataStream API. If it is
simple processing logic and aggregation operations, you can even use the Flink
SQL API directly. Especially the way you manually
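(One manual DataStream-style technique for skew, not spelled out in the thread, is two-phase aggregation with key salting: append a salt to the hot key so a partial aggregate spreads over several partitions, then strip the salt and combine. A self-contained sketch of the idea in plain Java, with no Flink dependency; names and the salting scheme are illustrative:)

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SaltedAggregation {
    static final int SALT_BUCKETS = 4;

    // phase 1: count per (key, salt) so one hot key is spread
    // across up to SALT_BUCKETS partial-aggregation partitions
    static Map<String, Long> partialCounts(List<String> keys) {
        Map<String, Long> partial = new HashMap<>();
        for (String key : keys) {
            // deterministic stand-in for a random/round-robin salt
            int salt = Math.floorMod(key.hashCode() + partial.size(), SALT_BUCKETS);
            partial.merge(key + "#" + salt, 1L, Long::sum);
        }
        return partial;
    }

    // phase 2: strip the salt and combine the partial counts per original key
    static Map<String, Long> mergeCounts(Map<String, Long> partial) {
        Map<String, Long> total = new HashMap<>();
        for (Map.Entry<String, Long> e : partial.entrySet()) {
            String key = e.getKey().substring(0, e.getKey().lastIndexOf('#'));
            total.merge(key, e.getValue(), Long::sum);
        }
        return total;
    }

    public static void main(String[] args) {
        List<String> events = List.of("hot", "hot", "hot", "hot", "cold");
        Map<String, Long> totals = mergeCounts(partialCounts(events));
        System.out.println(totals.get("hot") + " " + totals.get("cold")); // prints "4 1"
    }
}
```

The two phases mirror a local/partial aggregate followed by a global aggregate keyed on the original key; the counts are correct regardless of how the salts distribute.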