Hi Fuyao,
Logically, if a system wants to support end-to-end exactly-once, it should
support transactions:
1. Transactions hold the records; a transaction is pre-committed when the
state snapshot is taken, and committed when the checkpoint succeeds.
2. A transaction must still be abortable after it has been pre-committed.
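The lifecycle described above can be modeled in a few lines of plain Java. This is only a hedged sketch of the pre-commit/commit/abort state machine, not Flink's actual `TwoPhaseCommitSinkFunction` API; all class and method names here are illustrative.

```java
// Minimal model (NOT Flink's API) of the two-phase-commit lifecycle:
// records are buffered in a transaction, pre-committed when state is
// snapshotted, committed once the checkpoint is confirmed, and still
// abortable after pre-commit.
import java.util.ArrayList;
import java.util.List;

public class TwoPhaseCommitSketch {
    enum Phase { OPEN, PRE_COMMITTED, COMMITTED, ABORTED }

    static class Transaction {
        final List<String> records = new ArrayList<>();
        Phase phase = Phase.OPEN;

        void add(String record) {
            if (phase != Phase.OPEN) throw new IllegalStateException("transaction closed");
            records.add(record);
        }

        // Called when the operator state is snapshotted.
        void preCommit() {
            if (phase != Phase.OPEN) throw new IllegalStateException();
            phase = Phase.PRE_COMMITTED;
        }

        // Called when the checkpoint succeeds.
        void commit() {
            if (phase != Phase.PRE_COMMITTED) throw new IllegalStateException();
            phase = Phase.COMMITTED;
        }

        // Must remain possible even after pre-commit (point 2 above).
        void abort() {
            if (phase == Phase.COMMITTED) throw new IllegalStateException();
            phase = Phase.ABORTED;
        }
    }

    public static void main(String[] args) {
        Transaction t = new Transaction();
        t.add("record-1");
        t.preCommit();                 // snapshot taken
        t.commit();                    // checkpoint confirmed
        System.out.println(t.phase);   // COMMITTED

        Transaction t2 = new Transaction();
        t2.add("record-2");
        t2.preCommit();
        t2.abort();                    // failure after pre-commit
        System.out.println(t2.phase);  // ABORTED
    }
}
```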
Hi all,
I'm using Flink 1.13.5 (as I was originally using the Ververica Flink CDC
connector) and am trying to understand something.
I'm just using the Flink SQL CLI at this stage to verify that I can stream
a PostgreSQL table into Flink SQL to compute a continuous materialised
view. I was
Thanks, that's a very nice implementation. Were there any projects or materials you referred to while building it? I'm exploring and learning this area.
文末丶 <809097...@qq.com.invalid> wrote on Thu, Feb 10, 2022 at 11:25:
>
> https://github.com/DataLinkDC/dlink
> This should meet your needs.
>
>
I want to implement a Flink SQL client that can parse a piece of Flink
SQL and submit it to the corresponding cluster (K8s, YARN, etc.). The Flink
SQL doesn't need custom syntax extensions; being compatible with the
community Flink syntax is enough.
Deployment should support native K8s, standalone on K8s, YARN application,
per-job, and so on. I've looked at the current official Flink SQL client,
which only supports session mode.
Are there any related materials or open-source projects I could use as a
reference? Thanks.
Hello Community,
I have two questions regarding a Flink custom sink with EXACTLY_ONCE semantics.
1. I have an SDK that can publish messages over HTTP (backed by Oracle
Streaming Service, which is very similar to Kafka). This will be my Flink
application's sink. Is it possible to use this SDK
Hello,
When trying to reproduce a bug, we made a DeserialisationSchema that
throws an exception when a malformed message comes in.
Then, we sent a malformed message together with a number of well-formed
messages to see what happens.
val source = KafkaSource.builder[OurMessage]()
Hi Christopher,
Great to hear you've solved it, and thanks for sharing your findings with
the community!
Indeed RocksDB is a separate component that has to be added as a
dependency.
On Wed, Feb 9, 2022 at 3:55 PM Christopher Gustafson wrote:
> Solved it, and posting here in case anyone run
Is there any way to identify the last message inside RichFunction in BATCH
mode ?
On Wed, Feb 9, 2022 at 8:56 AM saravana...@gmail.com
wrote:
> I am trying to migrate from Flink 1.12.x DataSet api to Flink 1.14.x
> DataStream api. mapPartition is not available in Flink DataStream.
> *Current
I am trying to migrate from the Flink 1.12.x DataSet API to the Flink 1.14.x
DataStream API. mapPartition is not available on Flink DataStream.
*Current code using the Flink 1.12.x DataSet API:*

dataset
    .
    .mapPartition(new SomeMapPartitionFn())
    .

public static class SomeMapPartitionFn extends
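Since the DataStream API has no mapPartition, a common workaround in BATCH execution mode is to buffer the elements seen by one parallel subtask and apply the partition-wide function once the input ends. The plain-Java sketch below models only that buffering pattern (the class and method names are mine, not Flink APIs); in a real Flink 1.14 job the equivalent buffer/flush logic would live in a custom operator or rich function that flushes at end of input.

```java
// Buffering model of mapPartition for a single parallel subtask:
// collect every element, then hand the whole "partition" to the user
// function once, the way MapPartitionFunction did in the DataSet API.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class MapPartitionSketch<IN, OUT> {
    private final List<IN> buffer = new ArrayList<>();
    private final Function<List<IN>, List<OUT>> partitionFn;

    public MapPartitionSketch(Function<List<IN>, List<OUT>> partitionFn) {
        this.partitionFn = partitionFn;
    }

    // Analogue of processing one element (flatMap/processElement).
    public void onElement(IN value) {
        buffer.add(value);
    }

    // Analogue of end-of-input in BATCH mode: the whole partition is
    // handed to the user function at once.
    public List<OUT> onEndOfInput() {
        return partitionFn.apply(buffer);
    }
}
```

For example, a partition-wide sum would buffer all integers of the subtask and emit a single total when the input is exhausted.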
Hi community,
According to the docs of Flink and RocksDB, if we set
`state.backend.rocksdb.memory.managed` option to `true`, the memory budget of
memtable and block cache will be controlled by WriteBufferManager and Cache,
according to the given ratios.
Based on this premise, how will the
Hello,
Due to a malformed message in our input queue (Kafka), our
DeserialisationSchema threw an exception, making our Flink application
crash. Since our application was configured to restart, it restarted,
only to reprocess the same malformed message and crash again.
This happened for a
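The usual remedy for this crash loop is to not let the deserializer throw at all: wrap the parse in a try/catch and drop (or dead-letter) the malformed record, so one poisoned message cannot take the job down repeatedly. A minimal plain-Java sketch of that pattern follows; the `parse` method and its rule are illustrative stand-ins, not a Flink API.

```java
// Defensive deserialization: malformed input yields "empty" instead of
// an exception, so the pipeline keeps running past a poisoned record.
import java.nio.charset.StandardCharsets;
import java.util.Optional;

public class SkippingDeserializer {
    // Illustrative strict parser: accepts only messages starting with '{'.
    static String parse(byte[] message) {
        String s = new String(message, StandardCharsets.UTF_8);
        if (!s.startsWith("{")) throw new IllegalArgumentException("malformed: " + s);
        return s;
    }

    // Catch the parse failure here instead of letting it fail the job.
    public static Optional<String> deserialize(byte[] message) {
        try {
            return Optional.of(parse(message));
        } catch (RuntimeException e) {
            // In a real job: log the error and/or forward the raw bytes
            // to a dead-letter topic here.
            return Optional.empty();
        }
    }
}
```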
I have a RichAsyncFunction that does async queries to an external database.
I'm using a Guava cache within the Flink app. I'd like this Guava cache to
be serialized with the rest of Flink state in checkpoint/savepoints.
However, RichAsyncFunction doesn't support the state functionality at all.
Solved it, and posting here in case anyone runs into the same issue!
Since the Harness uses StreamExecutionEnvironment to set the Flink
configuration, you have to set the state backend explicitly, as described here:
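For reference, in Flink 1.13/1.14 this typically means adding the RocksDB state backend as a dependency and setting it on the environment explicitly. A sketch (artifact coordinates and variable names are from the Flink docs of that era; verify against your version):

```java
// Maven dependency: org.apache.flink:flink-statebackend-rocksdb_2.12
// Then, on the StreamExecutionEnvironment used by the Harness:
env.setStateBackend(new EmbeddedRocksDBStateBackend());
```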
Hi everyone,
I am looking into the code of StateFun, trying to understand how it works. I
was trying to run the Harness E2E in my IDE, and tried to change the
StateBackend to rocksdb, at which point I got an error saying it wasn't found.
My first question then becomes, why is this? Shouldn't
Hi Team,
I have a use case where, in my Kafka source, I need to wait for 2 hours
before handling an event. Currently, the following is the plan; kindly let
me know if this would work without issues, and of any gotchas I need to be
aware of.
a) In Kafka consumer deserializer schema, look at the published
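The hold-back idea in the plan above can be sketched as a small buffer keyed by release time: each event is held until two hours past its publish timestamp and released by a periodic timer. This is a plain-Java model under my own names, not a Flink API; in a real job the buffer would live in keyed state with a processing-time timer.

```java
// Buffer events until 2 hours after their publish timestamp.
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

public class DelayBuffer {
    static final long DELAY_MS = 2L * 60 * 60 * 1000; // the 2-hour hold-back

    static class Pending {
        final long releaseAtMs;
        final String event;
        Pending(long releaseAtMs, String event) {
            this.releaseAtMs = releaseAtMs;
            this.event = event;
        }
    }

    private final PriorityQueue<Pending> queue =
        new PriorityQueue<>((a, b) -> Long.compare(a.releaseAtMs, b.releaseAtMs));

    // Called per record, using the publish timestamp read from the record
    // (e.g. the Kafka record timestamp seen in the deserializer).
    public void onEvent(String event, long publishedAtMs) {
        queue.add(new Pending(publishedAtMs + DELAY_MS, event));
    }

    // Called from a periodic (processing-time) timer: release every event
    // whose 2-hour delay has elapsed, in publish order.
    public List<String> onTimer(long nowMs) {
        List<String> ready = new ArrayList<>();
        while (!queue.isEmpty() && queue.peek().releaseAtMs <= nowMs) {
            ready.add(queue.poll().event);
        }
        return ready;
    }
}
```

One gotcha this sketch surfaces: buffered events must survive restarts, so in Flink the queue belongs in checkpointed state, not on the heap.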
Hi Mohan,
It's not clear to me what you're trying to ask on the Flink User
mailing list. I don't recognize the table that you've included. Based on
previous emails, you're asking the Flink user mailing list questions about a
comparison between Flink and Kafka Connect. The Flink User mailing
I think this approach is feasible. How should I go about doing it? Are there any reference materials?
Caizhi Weng wrote on Wed, Feb 9, 2022 at 16:15:
> Hi!
>
> Flink currently has no HTTP server source / sink. Is this an OLAP
> requirement? From the description, a more reasonable approach would be a
> dedicated HTTP server that accepts the request, calls the Flink API to run a
> Flink job (Flink SQL can run SELECT statements), and then returns the result
> to the caller.
>
> 张锴 wrote on Wed, Feb 9, 2022 at 14:28:
>
> >
> >
>
Sorry to have bothered everyone.
This is the obvious solution:
.setDeserializer(KafkaRecordDeserializationSchema.of(new
JSONKeyValueDeserializationSchema(false)))
Regards Hans-Peter
On Tue, Feb 8, 2022 at 21:56, Roman Khachatryan wrote:
> Hi,
>
> setDeserializer() expects
Hi!
Flink currently has no HTTP server source / sink. Is this an OLAP
requirement? From the description, a more reasonable approach would be a
dedicated HTTP server that accepts the request, calls the Flink API to run a
Flink job (Flink SQL can run SELECT statements), and then returns the result
to the caller.
张锴 wrote on Wed, Feb 9, 2022 at 14:28:
>
> Business requirement: pass parameters to Flink via an HTTP request, feed
> them into the Flink program, and return the result as JSON. Does Flink
> support real-time computation done this way?
>
> Flink version: 1.12.1
>
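The architecture suggested in this thread (a dedicated HTTP server in front that triggers the computation and returns JSON to the caller) can be sketched with the JDK's built-in server. `runFlinkJob` below is a stub standing in for the actual "call the Flink API / run a Flink SQL SELECT" step, and the `/query` path and `p` parameter are my own illustrative choices.

```java
// A tiny HTTP front-end: accept a request, run the "job", return JSON.
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class HttpFrontend {
    // Stub for "call the Flink API / run a Flink SQL SELECT with the
    // given parameter and collect the result".
    static String runFlinkJob(String param) {
        return "{\"param\":\"" + param + "\",\"result\":42}";
    }

    public static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/query", exchange -> {
            // Read the parameter from the query string, e.g. /query?p=foo
            String query = exchange.getRequestURI().getQuery();
            String param = (query != null && query.startsWith("p=")) ? query.substring(2) : "";
            byte[] body = runFlinkJob(param).getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

Submitting the job synchronously per request only makes sense for short queries; long-running jobs would need an async submit-then-poll design instead.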
Hi
Convert?
How does that work?
Can you spare a couple of lines for that?
Regards Hans
On Tue, Feb 8, 2022 at 21:56, Roman Khachatryan wrote:
> Hi,
>
> setDeserializer() expects KafkaRecordDeserializationSchema;
> JSONKeyValueDeserializationSchema you provided is not compatible with
> it.
>