Hi Aeden,
we updated the connector property design in 1.11 [1]. The old
translation layer exists for backwards compatibility and is indicated by
`connector.type=kafka`.
However, `connector = kafka` indicates the new property design, and
`key.fields` is only available there. Please check all properties.
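For readers following along, a minimal sketch of a table definition using the new property design, wrapped in the Table API. The topic, columns, and bootstrap servers are hypothetical placeholders; `key.fields`/`key.format` are only understood by the new KafkaDynamicTableFactory, not by the legacy `connector.type` translation layer:

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    public class NewKafkaOptionsSketch {
        public static void main(String[] args) {
            TableEnvironment tEnv = TableEnvironment.create(
                    EnvironmentSettings.newInstance().inStreamingMode().build());

            // New-style options: 'connector' = 'kafka' (not 'connector.type').
            // 'key.fields' / 'key.format' are only picked up by the new factory.
            tEnv.executeSql(
                "CREATE TABLE orders (" +
                "  order_id STRING," +
                "  amount DOUBLE" +
                ") WITH (" +
                "  'connector' = 'kafka'," +
                "  'topic' = 'orders'," +
                "  'properties.bootstrap.servers' = 'localhost:9092'," +
                "  'key.format' = 'raw'," +
                "  'key.fields' = 'order_id'," +
                "  'value.format' = 'json'" +
                ")");
        }
    }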
Hi Timo,
Thanks for responding. You're right. So I did update the properties.
From what I can tell the new design you're referring to uses the
KafkaDynamicTableFactory, which contains the KEY_FIELDS (key.fields)
options, instead of KafkaTableSourceSinkFactoryBase, which doesn't
support those options.
Hi Aeden,
`format.avro-schema` is not required anymore in the new design. The Avro
schema is derived entirely from the table's schema.
Regards,
Timo
On 07.01.21 09:41, Aeden Jameson wrote:
> Hi Timo,
> Thanks for responding. You're right. So I did update the properties.
> From what I can tell
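As a hedged illustration of Timo's point: with the `avro` format in the new design, the Avro schema comes from the column list, so no `format.avro-schema` option appears in the DDL. Continuing the `tEnv` set-up from the earlier sketch, with a hypothetical topic and columns:

    // Schema is derived from the columns; no 'format.avro-schema' option.
    tEnv.executeSql(
        "CREATE TABLE events (" +
        "  user_id STRING," +
        "  ts TIMESTAMP(3)" +
        ") WITH (" +
        "  'connector' = 'kafka'," +
        "  'topic' = 'events'," +
        "  'properties.bootstrap.servers' = 'localhost:9092'," +
        "  'format' = 'avro'" +
        ")");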
This is somewhat unrelated to the discussion about how to actually do
the triggering when sources shut down, I'll write on that separately. I
just wanted to get this quick thought out.
For letting operators decide whether they actually want to wait for a
final checkpoint, which is relevant at
Hi Team,
I have a requirement to push the Flink app logs to Elasticsearch for log
management. Can anyone offer guidance if you are already doing this?
I have tried this -
https://cristian.io/post/flink-log4j/
I’m not getting any error for a sample job I tried.
I am using EMR to run Flink 1.9 and Elastic
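One pattern that may help here (a sketch only, not a tested setup, with placeholder host and port): add a log4j SocketAppender to Flink's log4j.properties and point it at a Logstash log4j input that forwards to Elasticsearch. Flink 1.9 ships with log4j 1.x, whose SocketAppender sends serialized logging events:

    # appended to Flink's log4j.properties; 'file' is Flink's existing appender
    log4j.rootLogger=INFO, file, logstash

    # ship logging events to a Logstash 'log4j' input that indexes into Elasticsearch
    log4j.appender.logstash=org.apache.log4j.net.SocketAppender
    log4j.appender.logstash.RemoteHost=logstash.internal.example
    log4j.appender.logstash.Port=4560
    log4j.appender.logstash.ReconnectionDelay=10000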
Hi Amira,
The previous topic you are referring to doesn't seem to be related
to your current problem.
Regarding your problem, I'm afraid I don't know the FlinkKafkaConsumer code
too well. Maybe someone else from the community could help?
Best,
Piotrek
Wed, Jan 6, 2021 at 19:01 BELGHITH
Hi Billy,
I checked the provided example and found it to be a problem with
ContinuousFileReader, and I created an issue for it [1]. To work around
the issue temporarily, I think you may disable the chaining of
ContinuousFileReaderOperator with the following operators:
android.disableC
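The preview cuts off the exact call, so here is a hedged sketch of one way to break that chain: start a new chain at the first operator after the file reader. The path and the map are placeholders; `readTextFile` is backed by ContinuousFileReaderOperator internally:

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class ReaderChainWorkaround {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            env.readTextFile("/tmp/input")   // runs via ContinuousFileReaderOperator
               .map(String::toUpperCase)
               .startNewChain()              // do not chain this map to the reader
               .print();

            env.execute("reader-chain-workaround");
        }
    }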
>
> We could introduce an interface, sth like `RequiresFinalization` or
> `FinalizationListener` (all bad names). The operator itself knows when
> it is ready to completely shut down, Async I/O would wait for all
> requests, sink would potentially wait for a given number of checkpoints.
> The inter
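To make the quoted proposal concrete, a sketch of what such an interface could look like. Both names are the placeholder names floated in the mail itself ("all bad names"), and nothing here exists in Flink's API:

    // Hypothetical: an operator implements this to delay shutdown until it
    // has finished its own work (all async requests, n more checkpoints, ...).
    public interface RequiresFinalization {

        // Called once there is no more input; the operator invokes the
        // listener when it is truly done and may be shut down.
        void finish(FinalizationListener listener);

        interface FinalizationListener {
            void onFinalizationComplete();
        }
    }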
Hi,
When you say that the `JobManager` goes down, you're referring to the
fact that the Flink job will finish in a failed state after too many
exceptions have occurred in the `FlinkKafkaConsumer`. Is that correct?
I'm afraid right now there is no code path that would allow catching
those `Top
Thanks for starting this discussion (and sorry for probably duplicated
questions, I couldn't find them answered in FLIP or this thread).
1. Option 1 is said to be not preferable because it wastes resources and
adds complexity (new event).
However, the resources would be wasted for a relatively short time until the
job finishes completely.
Thanks for your feedback.
Please find below my answers:
-----Original Message-----
From: Aljoscha Krettek
Sent: Thursday, January 7, 2021 13:55
To: user@flink.apache.org
Subject: Re: Flink kafka exceptions handling
Hi Roman,
Many thanks for the feedback! I'll try to answer the issues inline:
> 1. Option 1 is said to be not preferable because it wastes resources and adds
> complexity (new event).
> However, the resources would be wasted for a relatively short time until the
> job finishes completely.
Hi Dawid,
The approach you sent indeed solved our problem.
You helped me and my colleague tremendously; many thanks.
Kind regards,
Jakub
From: Dawid Wysakowicz
Sent: Tuesday, January 5, 2021 16:57
To: Jakub N
Cc: user@flink.apache.org
Subject: Re: Flink UDF r
Hi everyone,
I just posted to the Apache Beam mailing list to find out if there is a good
syntax for my use case. I've been following Flink for the last couple of years,
so that's why I'm sending much the same email here to you as well.
I've been reading through both the Beam and the Flink websites about
Hi,
Context:
Built a fraud detection kind of app.
The business logic is all fine, but when put into production, the Kafka cluster
becomes unstable. The topic it writes to has approx. 80 events/sec; after
running for a few hours, the Kafka broker indexes are getting corrupted.
Topic config: single p
Brilliant, thank you. That will come in handy. I was looking through the docs
hoping there was a way to still specify the schema, with no luck. Does such
an option exist?
On Thu, Jan 7, 2021 at 2:33 AM Timo Walther wrote:
> Hi Aeden,
>
> `format.avro-schema` is not required anymore in the new design
I'm trying to set up a Flink 1.12 deployment on a Kubernetes cluster using
custom Docker images (since the official ones aren't out yet). Since the
documentation states that "We generally recommend new users to deploy Flink on
Kubernetes using native Kubernetes deployments", I'm trying out the na
If you are just getting started, running it on Kubernetes could simplify the
logistics and resources needed to get started.
It also allows you to possibly reuse infrastructure that you may already be
using for other projects and purposes.
If you are just getting started and just learning, t
Thanks for the reply!
Just to clarify, when I talked about standalone deployments I was referring to
standalone Kubernetes deployments. We currently have no interest in running
Flink outside of K8s. I was mostly just curious about the differences in the
native integration vs. standalone deploym
Hello,
I use Flink on Kubernetes, and the TaskManagers get assigned random UUIDs. Is
there a way to explicitly configure them to use hostnames instead?
Omkar
Hi,
I'm evaluating Flink for our company's IoT use case and read a blog post
by Fabian Hueske from 2015 [1]. We have a similar situation except the
sensor is sending the value of a cumulative counter instead of a count.
We would like to calculate the sum of deltas of consecutive cumulative
counter values.
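A minimal sketch of that delta computation, under stated assumptions: class and field names are made up, it keys by sensor, and it ignores counter resets. Keep the previous cumulative reading in keyed state, emit the difference, and sum the deltas downstream in whatever window you need:

    import org.apache.flink.api.common.functions.RichFlatMapFunction;
    import org.apache.flink.api.common.state.ValueState;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.util.Collector;

    public class CounterDelta extends RichFlatMapFunction<CounterDelta.Reading, Double> {

        // Last cumulative value seen for the current key (sensor).
        private transient ValueState<Double> last;

        @Override
        public void open(Configuration parameters) {
            last = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("last", Double.class));
        }

        @Override
        public void flatMap(Reading r, Collector<Double> out) throws Exception {
            Double prev = last.value();
            if (prev != null) {
                out.collect(r.value - prev);   // delta between consecutive readings
            }
            last.update(r.value);
        }

        public static class Reading {
            public String sensorId;
            public double value;
        }
    }

Applied as stream.keyBy(r -> r.sensorId).flatMap(new CounterDelta()) ahead of the windowed sum.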
Hi Flink Community Team,
This is a desperate request for your help on the below.
I am new to Flink and trying to use it with Kafka for event-based data stream
processing in my project. I am struggling to find solutions with Flink for my
project requirements below:
1. Get all Kafka topic