Setting uid hash for non-legacy sinks

2024-06-06 Thread Salva Alcántara
Hi, I want to add the uid for my Kafka sink in such a way that I can still use the existing savepoint. The problem I'm having is that I cannot set the uid hash. If I try something like this: ``` output.sinkTo(outputSink).setUidHash("xyzb"); ``` I get the following
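As far as I can tell, `setUidHash` expects a 32-character hexadecimal string — the same shape as the operator hashes Flink generates itself — so a short value like "xyzb" is rejected up front. A minimal Python sketch of that validation (the exact pattern is my assumption based on the generated-hash format, not Flink's source):

```python
import re

# setUidHash appears to require a 32-character hex string, matching
# the format of Flink's auto-generated operator hashes. This pattern
# is an assumption mirroring that format.
UID_HASH_PATTERN = re.compile(r"^[0-9a-fA-F]{32}$")

def is_valid_uid_hash(candidate: str) -> bool:
    """Return True if the candidate looks like a valid operator uid hash."""
    return UID_HASH_PATTERN.fullmatch(candidate) is not None
```

Under this assumption, "xyzb" fails the check, while a full 32-hex-digit hash copied from a savepoint's operator state would pass.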

raw error

2024-06-06 Thread Fokou Toukam, Thierry
Hi everyone, why does Flink consider DenseVector, or vectors in general, as a raw type: features: RAW('org.apache.flink.ml.linalg.Vector', '...')? I'm trying to deploy my job on Flink and I have this error: Server Response: org.apache.flink.runtime.rest.handler.RestHandlerException: Could not execute

RE: Does Application mode support multiple submissions in HA mode?

2024-06-06 Thread Junrui Lee
Currently, Application mode does not support multiple job submissions when HA is enabled. You can check the official documentation for this statement: https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/overview/#application-mode

Does Application mode support multiple submissions in HA mode?

2024-06-06 Thread Steven Chen
Does Application mode support multiple submissions in HA mode?

Re: State leak in tumbling windows

2024-06-06 Thread Yanfei Lei
Hi Adam, Is your job a DataStream job or a SQL job? After looking through the window-related code (I'm not particularly familiar with this part of the code), this problem should only exist in DataStream. Adam Domanski wrote on Mon, Jun 3, 2024, 16:54: > > Dear Flink users, > > I spotted the ever growing

Re:Memory table in pyflink

2024-06-06 Thread Xuyang
Hi, Currently, Flink does not have a concept similar to an in-memory table that allows you to temporarily store some data, because Flink itself does not store data and is a computing engine. May I ask what the purpose of using a temporary table is? Would it be possible to use a
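Since Flink has no materialized in-memory staging table, a temporary view is the closest equivalent: it registers an intermediate query result under a name that downstream queries can read, without storing any data. A minimal Flink SQL sketch (all table and column names here are hypothetical):

```sql
-- Register intermediate results as a temporary view instead of a
-- staging table; the view only exists for the session and holds no data.
CREATE TEMPORARY VIEW staged_events AS
SELECT user_id, COUNT(*) AS event_count
FROM kafka_source
GROUP BY user_id;

-- Downstream queries can read from the view like a table.
INSERT INTO sink_table
SELECT * FROM staged_events;
```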

Understanding flink-autoscaler behavior

2024-06-06 Thread Chetas Joshi
Hi Community, I want to understand the following logs from the flink-k8s-operator autoscaler. My Flink pipeline, running on 1.18.0 and using flink-k8s-operator (1.8.0), is not scaling up even though the source vertex is back-pressured. 2024-06-06 21:33:35,270 o.a.f.a.ScalingMetricCollector
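When the autoscaler observes back-pressure but does not scale, the stabilization interval and metrics window are common suspects: the autoscaler deliberately collects metrics for a full window before acting. A sketch of the relevant flink-kubernetes-operator options (key names and values should be verified against the operator docs for your version; the values shown are illustrative defaults, not recommendations):

```yaml
# Set under spec.flinkConfiguration in the FlinkDeployment resource.
job.autoscaler.enabled: "true"
# No scaling decisions are made during the stabilization interval
# after a restart.
job.autoscaler.stabilization.interval: "5m"
# Metrics are aggregated over this window before a decision is taken.
job.autoscaler.metrics.window: "10m"
# Target utilization of each vertex (0..1).
job.autoscaler.target.utilization: "0.7"
```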

Information Request

2024-06-06 Thread Fokou Toukam, Thierry
I want to set up a stream processing environment on an Ubuntu server for machine learning. Which version of Apache Flink do you recommend if I want to use Maven? Thierry FOKOU | IT M.A.Sc Student Département de génie logiciel et TI École de technologie supérieure | Université du Québec
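For a Maven-based setup, the usual starting point is a project depending on the DataStream API artifacts. A minimal sketch of the dependencies (the version shown is an assumption — pick the current stable release; `provided` scope assumes the job runs on a cluster that ships the Flink runtime):

```xml
<dependencies>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-java</artifactId>
    <version>1.18.1</version>
    <scope>provided</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-clients</artifactId>
    <version>1.18.1</version>
    <scope>provided</scope>
  </dependency>
</dependencies>
```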

Memory table in pyflink

2024-06-06 Thread Phil Stavridis
Hello, I am trying to create an in-memory table in PyFlink to use as a staging table after ingesting some data from Kafka but it doesn’t work as expected. I have used the print connector which prints the results but I need to use a similar connector that stores staging results. I have tried

Re: Force to commit kafka offset when stop a job.

2024-06-06 Thread Zhanghao Chen
Yes, the exact offset position will also be committed when doing the savepoint. Best, Zhanghao Chen From: Lei Wang Sent: Thursday, June 6, 2024 16:54 To: Zhanghao Chen ; ruanhang1...@gmail.com Cc: user Subject: Re: Force to commit kafka offset when stop a job.
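Stopping a job with a savepoint is done from the CLI; on this understanding, the savepoint records the exact offsets, and the source also commits them to Kafka when the savepoint's checkpoint completes (assuming offset committing is enabled on the source). A sketch with placeholder paths and IDs:

```shell
# Gracefully stop the job, taking a savepoint first.
# <job-id> and the savepoint path are placeholders.
flink stop --savepointPath s3://my-bucket/savepoints <job-id>
```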

[ANNOUNCE] Apache flink-connector-mongodb 1.2.0 released

2024-06-06 Thread Danny Cranmer
The Apache Flink community is very happy to announce the release of Apache flink-connector-mongodb 1.2.0 for Flink 1.18 and 1.19. Apache Flink® is an open-source stream processing framework for distributed, high-performing, always-available, and accurate data streaming applications. The release

Re: Force to commit kafka offset when stop a job.

2024-06-06 Thread Lei Wang
Thanks Zhanghao && Hang. I am familiar with the Flink savepoint feature. The exact offset position is stored in the savepoint and the job can be resumed from the savepoint using the offset position stored in it. But I am not sure whether the exact offset position is committed to Kafka when

Using the HBase connector to insert data: how to update only one column when a column family has multiple columns

2024-06-06 Thread 谢县东
Hello all: Flink version: 1.13.6. I am using the flink-connector-hbase connector to write data into HBase via Flink SQL. The HBase table is created as follows: CREATE TABLE hbase_test_db_test_table_xxd ( rowkey STRING, cf1 ROW, PRIMARY KEY (rowkey) NOT ENFORCED ) WITH ( 'connector' = 'hbase-2.2', 'table-name' =
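One approach worth trying (an assumption about the goal, not a confirmed fix): since HBase writes are per-cell, declaring a Flink table whose ROW type contains only the qualifier you want to update should leave the other qualifiers in the column family untouched. A sketch with hypothetical names:

```sql
-- Declare only the single qualifier to be updated; columns absent
-- from this schema are not written, so existing cells under cf1
-- with other qualifiers remain as they are.
CREATE TABLE hbase_single_col (
  rowkey STRING,
  cf1 ROW<col_a STRING>,
  PRIMARY KEY (rowkey) NOT ENFORCED
) WITH (
  'connector' = 'hbase-2.2',
  'table-name' = 'test_db:test_table',
  'zookeeper.quorum' = 'zk-host:2181'
);
```

Note that how the connector handles NULL fields (skip vs. delete marker) may affect this, so it is worth testing against your 1.13.6 setup.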