Hi,

You want to use the Table API with dynamic tables [1] and the
upsert-kafka connector [2]. This writes an update message to your
log-compacted Kafka topic for each changed result, so the topic can be
used as a key-value store. In Kafka, the updated record is appended and
the old record is eventually removed through log compaction.
Flink's fault tolerance [3] guarantees that no records are lost.
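
As a minimal sketch in Flink SQL (the topic names, schemas, and
bootstrap server address below are made up for illustration), your
tables A and B could be declared as changelog sources and the join
result D as an upsert-kafka sink:

CREATE TABLE A (
  id STRING,
  payload STRING,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'upsert-kafka',
  'topic' = 'topic-a',
  'properties.bootstrap.servers' = 'kafka:9092',
  'key.format' = 'json',
  'value.format' = 'json'
);

-- table B is declared the same way, reading from 'topic-b'

CREATE TABLE D (
  id STRING,
  a_payload STRING,
  b_payload STRING,
  PRIMARY KEY (id) NOT ENFORCED  -- key used for Kafka log compaction
) WITH (
  'connector' = 'upsert-kafka',
  'topic' = 'topic-d',
  'properties.bootstrap.servers' = 'kafka:9092',
  'key.format' = 'json',
  'value.format' = 'json'
);

-- Whenever a record in A or B changes, Flink retracts the old join
-- result and emits the new one; the sink turns this into an updated
-- record (or a tombstone on delete) for the affected key in the
-- compacted topic.
INSERT INTO D
SELECT A.id, A.payload, B.payload
FROM A JOIN B ON A.id = B.id;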

A note from personal experience: I'd double-check whether you really
want an unbounded Kafka topic. Costs quickly spiral out of control if
you use Kafka as a key/value store; you usually get better results with
a proper key/value store. Elasticsearch would be a common alternative
that also lets your applications index and query the fields.
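
If you go that route, the same continuous join can target Elasticsearch
instead; here is a sketch assuming the elasticsearch-7 SQL connector and
a hypothetical index name and host:

CREATE TABLE D_es (
  id STRING,
  a_payload STRING,
  b_payload STRING,
  PRIMARY KEY (id) NOT ENFORCED  -- becomes the Elasticsearch document id
) WITH (
  'connector' = 'elasticsearch-7',
  'hosts' = 'http://localhost:9200',
  'index' = 'table-d'
);

-- Updates overwrite the document with the same id and deletes remove
-- it, so the index always reflects the current join result and its size
-- stays bounded by the number of live keys.
INSERT INTO D_es
SELECT A.id, A.payload, B.payload
FROM A JOIN B ON A.id = B.id;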

[1] https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/concepts/dynamic_tables/
[2] https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/connectors/table/upsert-kafka/
[3] https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/concepts/stateful-stream-processing/#state-persistence

On Tue, Jul 6, 2021 at 4:53 AM vtygoss <vtyg...@126.com> wrote:

> Hi, Flink community!
>
>
> I have the following scenario in the medical field:
>
>
> - records are frequently modified and must not be lost
>
> - when a record is modified, the results previously produced from that
> record should also be modified.
>
> e.g. tables A, B, C. A join B produces table D; A join C produces
> table E. When record a1 in table A changes to a1’, the corresponding
> results in tables D and E should also change.
>
>
> I am considering using Flink + Kafka dynamic tables to solve this,
> but there’s a potential problem:
>
>
> - records in the medical field must not be lost.
>
> - a Kafka topic is append-only, and every dynamic table event (INSERT /
> RETRACT / DELETE) generates a new Kafka record (even a tombstone), so
> the volume of the Kafka topic grows without bound and cannot be expired.
>
>
>
> Is there any solution for this problem? Please offer some advice.
> Thank you very much!
>
>
>
> Best Regards!
>
