Hi all,

Kurt and I propose to introduce built-in storage support for dynamic
tables: a truly unified changelog & table representation from Flink
SQL’s perspective [1]. We believe this kind of storage will greatly
improve usability.

We want to highlight some characteristics of this storage:

- It’s a built-in storage for Flink SQL
** Improves usability
** Flink DDL is no longer just a mapping, but actually creates these tables (see the DDL sketch below)
** Masks & abstracts the underlying technical details; no annoying options
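
For example, a DDL under this proposal could look roughly like the
following (the table and column names are made up for illustration; the
point is that the statement actually creates the table in the built-in
storage, without any connector/format options):

    CREATE TABLE user_behavior (
        user_id  BIGINT,
        item_id  BIGINT,
        behavior STRING,
        ts       TIMESTAMP(3)
    );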

- Supports subsecond streaming write & consumption (see the queries sketched below)
** It could be backed by a service-oriented message queue (like Kafka)
** High-throughput scan capability
** A filesystem with columnar formats would be an ideal choice, just as
Iceberg/Hudi do.
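
From the user's point of view, the same table could then be written to
continuously and consumed in either streaming or batch mode, roughly
like this (the source table kafka_source is assumed to exist; the
runtime-mode settings are just illustrative):

    -- continuous streaming write into the built-in table
    INSERT INTO user_behavior
    SELECT user_id, item_id, behavior, ts FROM kafka_source;

    -- subsecond streaming consumption
    SET 'execution.runtime-mode' = 'streaming';
    SELECT * FROM user_behavior;

    -- high-throughput batch scan over the same table
    SET 'execution.runtime-mode' = 'batch';
    SELECT behavior, COUNT(*) AS cnt FROM user_behavior GROUP BY behavior;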

- More importantly, in order to lower the cognitive bar, the storage
needs to automatically handle various INSERT/UPDATE/DELETE inputs and
table definitions (see the sketch below)
** It should accept any type of changelog
** Tables can be defined with or without a primary key
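
A small sketch of what that could mean in practice (again, names are
illustrative and the source table words is assumed to exist): a table
with a primary key would absorb the updating changelog produced by an
aggregation, keeping the latest row per key, while a table without a
primary key would simply retain all records:

    -- table with a primary key: receives an updating changelog
    CREATE TABLE word_count (
        word STRING,
        cnt  BIGINT,
        PRIMARY KEY (word) NOT ENFORCED
    );
    INSERT INTO word_count
    SELECT word, COUNT(*) FROM words GROUP BY word;

    -- table without a primary key: receives append-only records
    CREATE TABLE access_log (
        user_id BIGINT,
        url     STRING,
        ts      TIMESTAMP(3)
    );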

Looking forward to your feedback.

[1] 
https://cwiki.apache.org/confluence/display/FLINK/FLIP-188%3A+Introduce+Built-in+Dynamic+Table+Storage

Best,
Jingsong Lee
