You can also check the Apache Paimon project (https://paimon.apache.org/), previously known as Flink Table Store. It might help in some scenarios.
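For context, a minimal sketch of registering a Paimon catalog from Flink SQL; the catalog name and warehouse path here are illustrative, not prescribed by the thread:

```sql
-- Register a Paimon catalog backed by a local warehouse directory
CREATE CATALOG paimon_catalog WITH (
    'type' = 'paimon',
    'warehouse' = 'file:/tmp/paimon'
);

-- Tables created under this catalog are stored as Paimon tables
USE CATALOG paimon_catalog;
```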
On Mon, Aug 28, 2023 at 5:05 AM liu ron wrote:
> Hi Nirmal,
>
> Flink SQL is standard ANSI SQL and extends upon it. Flink SQL provides
> rich Join and Agg…
> - Why does a Flink `CREATE TABLE` from Protobuf require the entire table
> column structure to be defined in SQL again? Shouldn't fields be inferred
> automatically from the provided protobuf class?
I agree that auto schema inference is a good feature. The reason why the Protobuf format does not h…
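To illustrate the friction being discussed: with the Protobuf format today, the DDL must spell out every column even though the compiled Protobuf class already defines them. A sketch (topic, class name, and columns are made up for illustration):

```sql
-- All columns must be declared by hand; they are not inferred
-- from the compiled Protobuf message class
CREATE TABLE user_events (
    user_id    BIGINT,
    event_type STRING,
    ts         TIMESTAMP(3)
) WITH (
    'connector' = 'kafka',
    'topic' = 'user-events',
    'properties.bootstrap.servers' = 'localhost:9092',
    'format' = 'protobuf',
    'protobuf.message-class-name' = 'com.example.UserEvents'
);
```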
Hi longfeng,
We could use `withIdleness` [1] to deal with idle sources.
Best,
Hang
[1]
https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/datastream/event-time/generating_watermarks/#dealing-with-idle-sources
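On the SQL/Table API side, the analogous knob to `withIdleness` is the `table.exec.source.idle-timeout` configuration option; a minimal sketch (the timeout value is illustrative):

```sql
-- Mark a source partition idle after 30 s without records so that
-- watermarks from the other, active partitions can still advance
SET 'table.exec.source.idle-timeout' = '30 s';
```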
longfeng Xu wrote on Sun, Aug 27, 2023 at 14:01:
> Hello,
>
> The issue I’m e…
There's friction in using Scala/Java Protobuf objects and trying to convert them into a Flink Table from a DataStream[ProtobufObject].
Scenario:
Input is a DataStream[ProtobufObject] from a Kafka topic that we read data from and deserialise into Protobuf objects (Scala case classes or, alternatively, Java…
Hi Ravi,
I took a deep dive into the source code [1]: the parallelism of the Sink operator is consistent with that of its inputs, so I suggest you check the parallelism of the upstream operators.
[1]
https://github.com/apache/flink/blob/a68dd419718b4304343c2b27dab94394c88c67b5/flink-table/flink-table-plan
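As a side note, some connectors (e.g. Kafka and filesystem) also expose a `sink.parallelism` table option that overrides the input-derived parallelism; a hedged sketch, with connector settings and the value chosen purely for illustration:

```sql
-- 'sink.parallelism' decouples the sink's parallelism
-- from that of its upstream operators
CREATE TABLE sink_table (
    id   BIGINT,
    name STRING
) WITH (
    'connector' = 'kafka',
    'topic' = 'output',
    'properties.bootstrap.servers' = 'localhost:9092',
    'format' = 'json',
    'sink.parallelism' = '4'
);
```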
Hi Nirmal,
Flink SQL is standard ANSI SQL and extends upon it. Flink SQL provides rich join and aggregate syntax, including regular streaming join, interval join, temporal join, lookup join [2], window join [3], unbounded group aggregate [4], window aggregate [5], and so on. Theoretically, it can su…
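To illustrate one of these join flavors, here is a sketch of an interval join in Flink SQL; the table and column names are made up:

```sql
-- Join each order with shipments that occur within
-- four hours of the order time
SELECT o.order_id, o.order_time, s.ship_time
FROM orders o
JOIN shipments s
  ON o.order_id = s.order_id
 AND s.ship_time BETWEEN o.order_time
                     AND o.order_time + INTERVAL '4' HOUR;
```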
Hello!
We have a use case that requires complex joins and aggregations across multiple SQL tables. Today this happens at the SQL Server level, which is degrading the performance of the SQL Server database.
Is it a good idea to implement this through the Apache Flink Table API for real-time data joins?
Hi, I am trying to use the Table API to convert Table data into a DataStream. The API is StreamTableEnvironment.toChangelogStream(Table table). I have noticed that its parallelism is always one (1). How can I set it to more than one?
If it is intended to execute with a single thread, then…
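On the configuration side, the Table API planner derives operator parallelism from the `table.exec.resource.default-parallelism` option (falling back to `parallelism.default` when unset); a sketch, with the value chosen for illustration:

```sql
-- Raise the default parallelism used by Table API / SQL operators
SET 'table.exec.resource.default-parallelism' = '4';
```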