[ https://issues.apache.org/jira/browse/FLINK-34440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17817076#comment-17817076 ]

Kevin Lam commented on FLINK-34440:
-----------------------------------

I just added this Jira issue, following the [Contribution 
Guide|https://flink.apache.org/how-to-contribute/contribute-code/].

If this ticket warrants a dev@ discussion, I'm happy to open one.

I'm also happy to contribute the code needed to complete this issue.

Looking forward to hearing others' thoughts!

> Support Debezium Protobuf Confluent Format
> ------------------------------------------
>
>                 Key: FLINK-34440
>                 URL: https://issues.apache.org/jira/browse/FLINK-34440
>             Project: Flink
>          Issue Type: New Feature
>          Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>    Affects Versions: 1.19.0, 1.18.1
>            Reporter: Kevin Lam
>            Priority: Minor
>
> *Motivation*
> Debezium and the Confluent Schema Registry can be used to emit Protobuf-encoded 
> messages to Kafka, but Flink does not easily support consuming these 
> messages through a connector.
> *Definition of Done*
> Add a format `debezium-protobuf-confluent`, provided by a 
> DebeziumProtobufFormatFactory, that supports Debezium messages encoded using 
> Protocol Buffers and the Confluent Schema Registry.
> To consider
>  * Mirror the implementation of the `debezium-avro-confluent` format: first 
> implement a `protobuf-confluent` format, similar to the existing [Confluent 
> Avro|https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/connectors/table/formats/avro-confluent/]
>  format provided today, which allows reading and writing Protobuf using 
> the Confluent Schema Registry (a usage sketch follows below).
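> A minimal usage sketch of the proposed format from the Table API, assuming the 
> format identifier and option names mirror those of `debezium-avro-confluent` 
> (the option names below are assumptions, not an existing API):
> {code:java}
> import org.apache.flink.table.api.EnvironmentSettings;
> import org.apache.flink.table.api.TableEnvironment;
>
> public class DebeziumProtobufConfluentExample {
>     public static void main(String[] args) {
>         TableEnvironment tEnv =
>                 TableEnvironment.create(EnvironmentSettings.inStreamingMode());
>
>         // Kafka topic populated by Debezium with Protobuf-encoded change events,
>         // whose schemas are registered in the Confluent Schema Registry.
>         tEnv.executeSql(
>                 "CREATE TABLE orders (\n"
>               + "  order_id BIGINT,\n"
>               + "  customer STRING,\n"
>               + "  amount DECIMAL(10, 2)\n"
>               + ") WITH (\n"
>               + "  'connector' = 'kafka',\n"
>               + "  'topic' = 'dbserver1.inventory.orders',\n"
>               + "  'properties.bootstrap.servers' = 'localhost:9092',\n"
>               + "  'properties.group.id' = 'orders-consumer',\n"
>               + "  'scan.startup.mode' = 'earliest-offset',\n"
>               // Hypothetical options, mirroring debezium-avro-confluent:
>               + "  'value.format' = 'debezium-protobuf-confluent',\n"
>               + "  'value.debezium-protobuf-confluent.url' = 'http://localhost:8081'\n"
>               + ")");
>
>         // Changelog rows (inserts/updates/deletes) decoded from the Debezium envelope.
>         tEnv.executeSql("SELECT * FROM orders").print();
>     }
> }
> {code}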



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
