[ https://issues.apache.org/jira/browse/FLINK-30721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17679191#comment-17679191 ]

Martijn Visser commented on FLINK-30721:
----------------------------------------

[~pedromazala] It depends on the level of compatibility Flink wants to offer 
with Apicurio. I have no strong preference in that regard. Perhaps it makes 
sense to start with the compatibility layer first and see if that would 
suffice. Would you be open to creating a PR for this?

> The confluent schema registry is not compatible with Apicurio schema registry
> -----------------------------------------------------------------------------
>
>                 Key: FLINK-30721
>                 URL: https://issues.apache.org/jira/browse/FLINK-30721
>             Project: Flink
>          Issue Type: Bug
>          Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>            Reporter: Pedro Mázala
>            Priority: Major
>
> With the format `avro-confluent`, it is impossible to use [Apicurio|https://www.apicur.io/].
> [The code|https://github.com/apache/flink/blob/master/flink-formats/flink-avro-confluent-registry/src/main/java/org/apache/flink/formats/avro/registry/confluent/ConfluentSchemaRegistryCoder.java#L71]
> reads a 4-byte integer (see the `readInt` documentation
> [here|https://docs.oracle.com/javase/7/docs/api/java/io/DataInput.html#readInt()]),
> whereas Apicurio stores its schema ids as a long
> ([8 bytes|https://docs.oracle.com/javase/7/docs/api/java/io/DataInput.html#readLong()]).
> A possible solution would be to make the size of the schema id read from the
> Kafka record configurable.
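
For reference, a minimal sketch of the header difference the report describes; the class and method names are illustrative only and are not Flink or Apicurio APIs:

{code:java}
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative only: contrasts the two schema id headers described above.
class SchemaIdHeaderSketch {

    // Confluent wire format: magic byte 0x0 followed by a 4-byte schema id,
    // which is what ConfluentSchemaRegistryCoder reads via readInt().
    static int readConfluentSchemaId(InputStream in) throws IOException {
        DataInputStream dataIn = new DataInputStream(in);
        if (dataIn.readByte() != 0) {
            throw new IOException("Unknown magic byte");
        }
        return dataIn.readInt(); // 4 bytes
    }

    // Apicurio stores the id as an 8-byte long, so readInt() consumes only
    // half of it and the remaining payload can no longer be decoded as Avro.
    // (Whether a magic byte precedes the id depends on the serde configuration.)
    static long readApicurioGlobalId(InputStream in) throws IOException {
        DataInputStream dataIn = new DataInputStream(in);
        return dataIn.readLong(); // 8 bytes
    }
}
{code}

A compatibility layer would essentially have to choose between these two reads (or make the id size configurable) before handing the remaining bytes to the Avro decoder.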


