Hi,
I want to add the uid for my Kafka sink in such a way that I can still use
the existing savepoint. The problem I'm having is that I cannot set the uid
hash. If I try something like this:
```
output.sinkTo(outputSink).setUidHash("xyzb");
```
I get the following
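(The error itself is cut off above, but per the setUidHash Javadoc the hash
must be exactly 32 hex characters — the operator hash shown in the web UI and
logs — so a value like "xyzb" is rejected. A minimal sketch of that format
check; the class and method names are mine, not Flink's:)

```java
// Sketch only: illustrates the documented 32-hex-character requirement
// for setUidHash. Class and method names are hypothetical.
public class UidHashCheck {
    // Mirrors the documented format: exactly 32 hex characters.
    public static boolean isValidUidHash(String hash) {
        return hash != null && hash.matches("^[0-9A-Fa-f]{32}$");
    }

    public static void main(String[] args) {
        System.out.println(isValidUidHash("xyzb"));                             // false
        System.out.println(isValidUidHash("94a6d3895b2c4ef0a7b8c9d0e1f23a4b")); // true
    }
}
```

In other words, setUidHash("xyzb") fails up front; copy the full 32-character
operator hash from the savepoint/UI instead.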
Hi everyone, why does Flink consider DenseVector or Vectors as raw
types: 'features: RAW('org.apache.flink.ml.linalg.Vector', '...')'?
I'm trying to deploy my job on Flink and I have this error:
Server Response:
org.apache.flink.runtime.rest.handler.RestHandlerException: Could not execute
Currently, Application mode does not support multiple job submissions when HA
is enabled. You can check the official documentation for this
statement: https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/overview/#application-mode
Does Application mode support multiple submissions in HA mode?
Hi Adam,
Is your job a DataStream job or a SQL job? After looking through the
window-related code (I'm not particularly familiar with this part of
the code), I believe this problem should only exist in DataStream jobs.
Adam Domanski wrote on Mon, Jun 3, 2024 at 16:54:
>
> Dear Flink users,
>
> I spotted the ever growing
Hi,
Currently, Flink does not have a concept similar to an in-memory table that
lets you temporarily store data, because Flink is a compute engine and does
not itself store data.
May I ask what the purpose of using a temporary table is? Would it be possible
to use a
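(One common pattern — a hedged sketch, with made-up table and column names —
is a temporary view for intermediate results, or a temporary table backed by a
real external connector, since Flink itself holds no data:)

```sql
-- Sketch only: names are hypothetical. A TEMPORARY VIEW just names an
-- intermediate query; it stores nothing, Flink recomputes it on use.
CREATE TEMPORARY VIEW staging AS
SELECT user_id, amount
FROM kafka_source
WHERE amount > 0;

-- Actual staging storage needs an external system, e.g. a filesystem
-- table that later queries or jobs can read back.
CREATE TEMPORARY TABLE staging_fs (
  user_id STRING,
  amount  DOUBLE
) WITH (
  'connector' = 'filesystem',
  'path' = '/tmp/staging',
  'format' = 'json'
);
```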
Hi Community,
I want to understand the following logs from the flink-k8s-operator
autoscaler. My Flink pipeline running on 1.18.0 and using
flink-k8s-operator (1.8.0) is not scaling up even though the source vertex
is back-pressured.
2024-06-06 21:33:35,270 o.a.f.a.ScalingMetricCollector
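(For reference, the autoscaler only collects metrics and makes decisions once
it is enabled and its metrics window has filled, which can look like "no
scaling" for a while after a restart. A hedged sketch of the relevant
FlinkDeployment settings — key names as documented for the operator's
autoscaler, values purely illustrative:)

```yaml
# Sketch: example autoscaler settings in a FlinkDeployment spec.
# Values are illustrative, not recommendations.
spec:
  flinkConfiguration:
    job.autoscaler.enabled: "true"
    # Decisions are made only after the metrics window fills up.
    job.autoscaler.metrics.window: "10m"
    job.autoscaler.target.utilization: "0.7"
```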
I want to set up a stream processing environment on an Ubuntu server for
machine learning. Which version of Apache Flink do you recommend if I want to
use Maven?
Thierry FOKOU | IT M.A.Sc Student
Département de génie logiciel et TI
École de technologie supérieure | Université du Québec
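(Any recent stable Flink release works with Maven; note that the DataStream
API artifact is published without a Scala suffix since Flink 1.15. A hedged
sketch of a minimal pom fragment — the version shown is just an example, pick
the current stable release:)

```xml
<!-- Sketch: minimal Maven dependencies for a DataStream job.
     Version is an example only. -->
<dependencies>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java</artifactId>
    <version>1.18.1</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-clients</artifactId>
    <version>1.18.1</version>
    <scope>provided</scope>
  </dependency>
</dependencies>
```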
Hello,
I am trying to create an in-memory table in PyFlink to use as a staging table
after ingesting some data from Kafka but it doesn’t work as expected.
I have used the print connector which prints the results but I need to use a
similar connector that stores staging results. I have tried
Yes, the exact offset position will also be committed when doing the savepoint.
Best,
Zhanghao Chen
From: Lei Wang
Sent: Thursday, June 6, 2024 16:54
To: Zhanghao Chen ; ruanhang1...@gmail.com
Cc: user
Subject: Re: Force to commit kafka offset when stop a job.
The Apache Flink community is very happy to announce the release of Apache
flink-connector-mongodb 1.2.0 for Flink 1.18 and 1.19.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
The release
Thanks Zhanghao and Hang.
I am familiar with the flink savepoint feature. The exact offset position
is stored in savepoint and the job can be resumed from the savepoint using
the offset position that is stored in it.
But I am not sure whether the exact offset position is committed to kafka
when
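(For context: the KafkaSource commits offsets back to Kafka on successful
checkpoint/savepoint completion, but only if a consumer group is configured
and committing is left enabled. A hedged sketch of the relevant source
properties — names per the Kafka connector docs, values illustrative:)

```properties
# Sketch: KafkaSource properties (illustrative values).
# Offsets are committed to Kafka only on completed checkpoints/savepoints,
# and only when a group.id is set.
group.id=my-consumer-group
commit.offsets.on.checkpoint=true
```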
Hi all,
Flink version: 1.13.6
I am using the flink-connector-hbase connector to write data into HBase via
Flink SQL. The HBase table is defined as follows:
CREATE TABLE hbase_test_db_test_table_xxd (
rowkey STRING,
cf1 ROW,
PRIMARY KEY (rowkey) NOT ENFORCED
) WITH (
'connector' = 'hbase-2.2',
'table-name' =
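(The DDL above is cut off, and `cf1 ROW` needs its qualifier fields spelled
out. For comparison, a hedged sketch of a complete hbase-2.2 table definition
— table, column, and quorum values are hypothetical examples:)

```sql
-- Sketch only: names and types are hypothetical examples.
CREATE TABLE hbase_sink (
  rowkey STRING,
  -- each HBase column family maps to a ROW whose fields are the qualifiers
  cf1 ROW<q1 STRING, q2 BIGINT>,
  PRIMARY KEY (rowkey) NOT ENFORCED
) WITH (
  'connector' = 'hbase-2.2',
  'table-name' = 'test_db:test_table',
  'zookeeper.quorum' = 'localhost:2181'
);
```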