Dear Team,
I am getting a ClassCastException in Flink:
Caused by: org.apache.flink.client.program.ProgramInvocationException: The
main method caused an error: class java.lang.String cannot be cast to class
java.lang.Double (java.lang.String and java.lang.Double are in module
java.base of loader 'bootstrap')
Dear Team,
After changing the code to the below, the error got resolved:

    Map<String, Double> rules = alerts.entrySet().stream()
        .collect(Collectors.toMap(
            e -> (String) e.getKey(),
            e -> Double.parseDouble((String) e.getValue())));
Thanks,
Tauseef
Hi, Jean-Marc Paulin.
The flink-connector-base will not be packaged in the externalized
connectors [1].
flink-connector-base is included in flink-dist, so we should use the
provided scope for it in Maven.
Best,
Hang
[1] https://issues.apache.org/jira/browse/FLINK-30400
Jean-Marc Paulin
Hello Tauseef,
The issue you're encountering is due to the fact that the Properties class
in Java stores both keys and values as Strings. When you try to cast a
value directly to Double, it throws a ClassCastException because values
loaded from a properties file are Strings and cannot be cast; they must be
parsed instead.
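A minimal illustration of the parse-instead-of-cast point (the file name
and key are hypothetical):

    import java.io.FileInputStream;
    import java.util.Properties;

    public class PropertiesParsing {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.load(new FileInputStream("thresholds.properties"));
            // Properties values are always Strings, so (Double) props.get("cpu")
            // would throw the ClassCastException seen above.
            double cpu = Double.parseDouble(props.getProperty("cpu"));
            System.out.println(cpu);
        }
    }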
Hi team,
I'm a newcomer to Flink's window functions, specifically
TumblingProcessingTimeWindows with a configured window duration of 20
minutes. However, I've noticed an anomaly where the window output occurs
within 16 to 18 minutes. This has left me uncertain about whether I
overlooked a configuration detail.
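For context, a minimal sketch of such a pipeline (the input elements and
names are illustrative):

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
    import org.apache.flink.streaming.api.windowing.time.Time;

    public class TumblingWindowSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.fromElements("a", "b", "a")
                .keyBy(s -> s)
                // Windows are aligned to the epoch (e.g. 12:00-12:20, 12:20-12:40),
                // not to job start, so the first firing after startup usually comes
                // in less than the full 20 minutes.
                .window(TumblingProcessingTimeWindows.of(Time.minutes(20)))
                .reduce((x, y) -> x + y)
                .print();
            env.execute("tumbling-window-sketch");
        }
    }

That epoch alignment is the usual explanation for an early first firing.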
Hei,
We are tuning some of the Flink jobs we have in production and we would
like to know the best numbers/considerations for the checkpoint interval.
We have set a default checkpoint interval of 30 seconds, and the
checkpoint operation takes around 2 seconds.
We have also enabled incremental checkpoints.
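For reference, a hedged sketch of how such a setup is typically wired up in
code (the timeout and pause values are illustrative, not recommendations):

    import org.apache.flink.streaming.api.CheckpointingMode;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class CheckpointTuningSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // 30 s interval, as described above.
            env.enableCheckpointing(30_000, CheckpointingMode.EXACTLY_ONCE);
            // A minimum pause guards against back-to-back checkpoints when one
            // occasionally takes much longer than the usual ~2 s.
            env.getCheckpointConfig().setMinPauseBetweenCheckpoints(10_000);
            env.getCheckpointConfig().setCheckpointTimeout(120_000);
            env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);
            // Incremental checkpoints themselves are a state-backend setting
            // (e.g. state.backend.incremental: true for RocksDB).
        }
    }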
Sorry for the noise. Reverting to version 1.17.2 fixed the issue.
On 04.12.2023 15:08, Matwey V. Kornilov wrote:
Hello,
I am trying to run a very simple job locally via the Maven exec plugin:
public class DataStreamJob {
public static void main(String[] args) throws Exception {
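A minimal runnable shape of such a job (the body is illustrative, not the
original) would be:

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class DataStreamJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            env.fromElements(1, 2, 3).print();
            env.execute("DataStreamJob");
        }
    }

which can be launched locally with mvn exec:java -Dexec.mainClass=DataStreamJob.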
Hi JM,
The dependency is set here:
https://github.com/apache/flink-connector-kafka/blob/main/flink-connector-kafka/pom.xml#L50-L55
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-connector-base</artifactId>
        <version>${flink.version}</version>
        <scope>provided</scope>
    </dependency>
We expect that the non-provided dependencies are bundled into the user's job jar.
Hello,
I have an S3 bucket and I would like to process the objects' metainfo
(such as keys (filenames), metadata, tags, etc.).
I don't care about the objects' content since it is irrelevant for my
task. What I want is to construct a data stream where each element is the
metainfo attached to some S3 object.
Hi,
I have recently been working on syncing my Flink log to Kafka via the
log4j2 Kafka appender. I have a log4j2.properties file which works fine
locally, say, when I run my Flink fat jar from the terminal via the
following command:
PS D:\repo> java -cp .\reconciliation-1.0-SNAPSHOT.jar
The log can be synced to Kafka this way.
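For anyone reading along, a minimal log4j2.properties shape for the Kafka
appender (topic, broker, and pattern are placeholders; kafka-clients must
be on the classpath):

    appender.kafka.type = Kafka
    appender.kafka.name = KafkaAppender
    appender.kafka.topic = flink-logs
    appender.kafka.layout.type = PatternLayout
    appender.kafka.layout.pattern = %d{ISO8601} %-5p %c - %m%n
    appender.kafka.property.type = Property
    appender.kafka.property.name = bootstrap.servers
    appender.kafka.property.value = localhost:9092

    rootLogger.level = INFO
    rootLogger.appenderRef.kafka.ref = KafkaAppender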
Hi Matwey,
I think you can customize an InputFormat to meet your needs, and use the
FileSource::forBulkFileFormat interface to create a FileSource.
In the custom format, you can choose to read only the metadata of the
file without reading its content.
https://github.com/apache/flink/blob/1
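A rough, untested sketch of such a metadata-only format (class names are
illustrative; it emits one "path,length" record per file and never opens
file contents):

    import org.apache.flink.api.common.typeinfo.TypeInformation;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.connector.file.src.FileSourceSplit;
    import org.apache.flink.connector.file.src.reader.BulkFormat;
    import org.apache.flink.connector.file.src.util.RecordAndPosition;

    public class MetadataOnlyFormat implements BulkFormat<String, FileSourceSplit> {

        @Override
        public Reader<String> createReader(Configuration config, FileSourceSplit split) {
            return new MetadataReader(split);
        }

        @Override
        public Reader<String> restoreReader(Configuration config, FileSourceSplit split) {
            return new MetadataReader(split);
        }

        @Override
        public boolean isSplittable() {
            return false; // one split per file => exactly one record per file
        }

        @Override
        public TypeInformation<String> getProducedType() {
            return TypeInformation.of(String.class);
        }

        private static final class MetadataReader implements BulkFormat.Reader<String> {
            private FileSourceSplit split; // set to null once the single record is emitted

            MetadataReader(FileSourceSplit split) {
                this.split = split;
            }

            @Override
            public RecordIterator<String> readBatch() {
                if (split == null) {
                    return null; // null signals the end of this split
                }
                String record = split.path() + "," + split.length();
                split = null;
                return new SingleRecordIterator(record);
            }

            @Override
            public void close() {}
        }

        private static final class SingleRecordIterator implements BulkFormat.RecordIterator<String> {
            private RecordAndPosition<String> next;

            SingleRecordIterator(String value) {
                // offset/skip-count only matter for resuming mid-split; trivial here
                this.next = new RecordAndPosition<>(value, 0L, 1L);
            }

            @Override
            public RecordAndPosition<String> next() {
                RecordAndPosition<String> r = next;
                next = null;
                return r;
            }

            @Override
            public void releaseBatch() {}
        }
    }

Wired up via FileSource.forBulkFileFormat(new MetadataOnlyFormat(), new
Path("s3://my-bucket/")) (bucket name is a placeholder). Note this only
surfaces the path and size Flink already knows from listing; S3 tags and
user metadata would still need a separate S3 client call.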
Hello,
I recently upgraded Flink (from 1.13.1 to 1.18.0). I noticed that the jobs in
the Flink console keep changing their order on every refresh. I am wondering
if there is a setting to keep them in chronological order like in 1.13.1.
Thanks.
Ivan
Hi Flink users,
After upgrading Flink (from 1.13.1 to 1.18.0), I noticed an issue when HA
is enabled (see exception below). I am using a k8s deployment and I cleaned
the previous ConfigMaps (leader files, etc.). I know Pekko is a recent
addition. Can someone share a doc on how to use or set it up?
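For reference, Flink 1.18 replaced Akka with Apache Pekko internally; it
needs no separate setup. A hedged sketch of the Kubernetes HA settings in
flink-conf.yaml (the cluster id and storage path are placeholders):

    high-availability.type: kubernetes
    high-availability.storageDir: s3://my-bucket/flink-ha
    kubernetes.cluster-id: my-flink-cluster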
Hi Rui,
Sorry for the late reply. I was suggesting that perhaps we could do some
testing with Kubernetes wrt configuring values for the exponential restart
strategy. We've noticed that the default strategy in 1.17 caused a lot of
requests to the K8s API server for unstable deployments.
However, p
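For concreteness, the knobs under discussion live in flink-conf.yaml (all
values below are illustrative, not recommendations); a larger max-backoff is
what reduces pressure on the K8s API server for crash-looping jobs:

    restart-strategy: exponential-delay
    restart-strategy.exponential-delay.initial-backoff: 1 s
    restart-strategy.exponential-delay.max-backoff: 5 min
    restart-strategy.exponential-delay.backoff-multiplier: 2.0
    restart-strategy.exponential-delay.reset-backoff-threshold: 1 h
    restart-strategy.exponential-delay.jitter-factor: 0.1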
Hi, Ethan,
Do you mean the jobs on the Flink web UI? If I remember correctly, the
order does change with each refresh; however, there is an option to sort
them using the provided sort button. If you believe that having a
consistent order is a significant necessity, maybe you can start a
discussion.
Ah, I got it. The behavior is slightly different from the older version, but
the new one has a sort button on the job names. So I click it and the order
stays the same within the session, as opposed to moving on every refresh
when not sorted. So I guess it is just a learning curve. Thanks Yuxi
Hi, Oscar.
Just sharing my thoughts:
Benefits of a more aggressive checkpoint interval:
1. less recovery time, as you mentioned (which is also related to how much
data Flink has to roll back and reprocess)
2. less end-to-end latency for checkpoint-bounded sinks in exactly-once mode
Costs of a more aggressive checkpoint interval:
1. more
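As a rough worked example with the numbers from earlier in this thread (30 s
interval, ~2 s checkpoint duration): in the worst case a failure lands just
before a checkpoint completes, so the job rolls back roughly
interval + checkpoint duration = 30 s + 2 s = 32 s of processing, on top of
the recovery time itself.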