[ https://issues.apache.org/jira/browse/FLINK-16048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Leonard Xu updated FLINK-16048:
-------------------------------
    Description: 
I found the SQL Kafka connector cannot consume Avro data that was serialized by `KafkaAvroSerializer`; it can only consume Row data written with a plain Avro schema, because we use `AvroRowDeserializationSchema`/`AvroRowSerializationSchema` to serialize/deserialize data in `AvroRowFormatFactory`.

I think we should support this because `KafkaAvroSerializer` is very common in Kafka, and someone ran into the same question on Stack Overflow [1].

[1] https://stackoverflow.com/questions/56452571/caused-by-org-apache-avro-avroruntimeexception-malformed-data-length-is-negat/56478259

  was:
KafkaAvroSerializer and AvroRowSerializationSchema

I found the SQL Kafka connector cannot consume Avro data that was serialized by `KafkaAvroSerializer`; it can only consume Row data written with a plain Avro schema, because we use `AvroRowDeserializationSchema`/`AvroRowSerializationSchema` to serialize/deserialize data in `AvroRowFormatFactory`.

I think we should support this because `KafkaAvroSerializer` is very common in Kafka, and someone ran into the same question on Stack Overflow [1].

[1] https://stackoverflow.com/questions/56452571/caused-by-org-apache-avro-avroruntimeexception-malformed-data-length-is-negat/56478259


> Support read/write confluent schema registry avro data from Kafka
> ------------------------------------------------------------------
>
>                 Key: FLINK-16048
>                 URL: https://issues.apache.org/jira/browse/FLINK-16048
>             Project: Flink
>          Issue Type: Improvement
>          Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>    Affects Versions: 1.11.0
>            Reporter: Leonard Xu
>            Priority: Major
>             Fix For: 1.11.0
>


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
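For context on why `AvroRowDeserializationSchema` fails here: `KafkaAvroSerializer` writes the Confluent wire format, i.e. one magic byte (0x0) and a 4-byte schema id before the Avro-encoded payload, so a plain Avro decoder misreads the record (hence the "Malformed data. Length is negative" error in the Stack Overflow question). Below is a minimal DataStream-side sketch of a workaround that strips that header before delegating to `AvroRowDeserializationSchema`; the wrapper class name is hypothetical, and it assumes the reader schema equals the writer schema, skipping any schema registry lookup:

```java
import java.io.IOException;
import java.util.Arrays;

import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.formats.avro.AvroRowDeserializationSchema;
import org.apache.flink.types.Row;

/**
 * Hypothetical wrapper: strips the Confluent wire-format header
 * (1 magic byte + 4-byte schema id) and hands the remaining plain Avro
 * payload to Flink's AvroRowDeserializationSchema. Assumes the reader
 * schema matches the writer schema; no schema registry lookup is done.
 */
public class ConfluentHeaderStrippingDeserializationSchema implements DeserializationSchema<Row> {

    private static final int CONFLUENT_HEADER_LENGTH = 5; // magic byte + schema id

    private final AvroRowDeserializationSchema inner;

    public ConfluentHeaderStrippingDeserializationSchema(String avroSchemaString) {
        this.inner = new AvroRowDeserializationSchema(avroSchemaString);
    }

    @Override
    public Row deserialize(byte[] message) throws IOException {
        if (message == null || message.length < CONFLUENT_HEADER_LENGTH || message[0] != 0x0) {
            throw new IOException("Record is not in Confluent wire format");
        }
        // Drop the 5-byte header and decode the plain Avro payload.
        byte[] avroPayload = Arrays.copyOfRange(message, CONFLUENT_HEADER_LENGTH, message.length);
        return inner.deserialize(avroPayload);
    }

    @Override
    public boolean isEndOfStream(Row nextElement) {
        return false;
    }

    @Override
    public TypeInformation<Row> getProducedType() {
        return inner.getProducedType();
    }
}
```

A built-in format that resolves the writer schema from the schema registry (what this ticket asks for) would make such a wrapper unnecessary and would also cover schema evolution, which the sketch above ignores.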