Hi Mohan,

It's not clear to me what you're trying to ask on the Flink User
mailing list. I don't recognize the table that you've included. Based on
previous emails, you're asking the Flink user mailing list for a
comparison between Flink and Kafka Connect. The Flink User mailing list
focuses on answering user questions about Flink itself, not on
comparisons with other tools.

There is an overview of Flink connectors for DataStream use cases [1] and
Table API use cases [2].
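As a rough illustration of the DataStream connectors described in [1], a
minimal job reading from Kafka might look like the sketch below. The broker
address, topic, and group id are placeholders (not values from this thread),
and note that Flink tracks offsets in its own checkpoints rather than relying
only on Kafka's committed consumer offsets:

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaToFlinkSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Offsets are stored in Flink checkpoints; enable checkpointing
        // so the source can restore its position on failure.
        env.enableCheckpointing(60_000L);

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")  // placeholder
                .setTopics("input-topic")               // placeholder
                .setGroupId("flink-example")            // placeholder
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source")
           .print();
        env.execute("kafka-to-flink-sketch");
    }
}
```

This is only a sketch of the Kafka DataStream connector; the overview pages
list the other sources and sinks (JDBC, filesystems, etc.) that are available.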

Best regards,

Martijn

[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/datastream/overview/
[2]
https://nightlies.apache.org/flink/flink-docs-master/docs/connectors/table/overview/

On Wed, 9 Feb 2022 at 07:44, mohan radhakrishnan <
radhakrishnan.mo...@gmail.com> wrote:

> Hi,
>
> The source for Flink (or even Kafka) is a problem we find hard to solve.
> This data seems to indicate that the source could be MQ. Is there a need to
> pull from MQ to Hive and then write to Flink? What can the flow be?
>
> Kafka Connect workers can issue JDBC queries and pull data into Kafka. Is
> there an equivalent toolset for Flink? Should we pull into Kafka and pick
> it up using Flink (checkpointing using Kafka consumer offsets)?
>
>
> Source of continuous data:
>   - Kafka, file systems, other message queues
>   - Strictly Kafka, with Kafka Connect serving to address the data-into /
>     data-out-of Kafka problem
>
> Sink for results:
>   - Kafka, other MQs, file system, analytical database, key/value stores,
>     stream processor state, and other external systems
>   - Kafka, application state, operational database, or any external system
>
> Thanks,
> Mohan
>