Github user eliaslevy commented on the issue:
https://github.com/apache/flink/pull/2369
    This may be the wrong place to raise this, but since you are discussing
changes to the Kafka connector API, I think it is worth bringing up here.
    As I've pointed out elsewhere, the current connector API makes it difficult
to use Kafka's native serializers and deserializers
(`org.apache.kafka.common.serialization.[Serializer, Deserializer]`), which can be
configured via the Kafka consumer and producer configs.
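    For context, this is roughly what that config-driven usage looks like with the plain Kafka client. This is a minimal sketch: the broker/registry addresses, topic name, and the `AvroConfigExample` class are illustrative, not part of any connector code.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AvroConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // The serializers are named in the config; user code never touches byte[].
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081");

        // The producer is typed by application objects; the configured serializers
        // handle the wire format and schema registration.
        try (KafkaProducer<String, Object> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", someAvroRecord()));
        }
    }

    private static Object someAvroRecord() {
        // Placeholder; a real application would pass an Avro GenericRecord or SpecificRecord.
        return null;
    }
}
```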
    The connector code assumes that `ConsumerRecord`s and `ProducerRecord`s
are both parameterized as `<byte[], byte[]>`, with the Flink serdes performing
the conversion to/from `byte[]`. This makes it difficult to use
Confluent's `KafkaAvroSerializer` and `KafkaAvroDecoder`, which make use of
their [schema
registry](http://docs.confluent.io/3.0.0/schema-registry/docs/serializer-formatter.html#serializer).
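    To illustrate the friction, the workaround today is to wrap a Kafka `Deserializer` inside a Flink `DeserializationSchema` so the `byte[]` conversion happens on the Flink side. A hedged sketch follows; the class name `WrappingDeserializationSchema` is mine, and package locations/signatures are the ones I recall from the Flink 1.x streaming connector, so treat them as illustrative.

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.streaming.util.serialization.DeserializationSchema;
import org.apache.kafka.common.serialization.Deserializer;

public class WrappingDeserializationSchema<T> implements DeserializationSchema<T> {

    private final Class<? extends Deserializer<T>> deserializerClass;
    private final HashMap<String, String> config;   // serializable config snapshot
    private final TypeInformation<T> typeInfo;

    // Kafka Deserializers are generally not Serializable, so the instance is
    // transient and created lazily on the task managers.
    private transient Deserializer<T> deserializer;

    public WrappingDeserializationSchema(Class<? extends Deserializer<T>> deserializerClass,
                                         Map<String, String> config,
                                         TypeInformation<T> typeInfo) {
        this.deserializerClass = deserializerClass;
        this.config = new HashMap<>(config);
        this.typeInfo = typeInfo;
    }

    @Override
    public T deserialize(byte[] message) throws IOException {
        if (deserializer == null) {
            try {
                deserializer = deserializerClass.newInstance();
            } catch (ReflectiveOperationException e) {
                throw new IOException("Could not instantiate Kafka deserializer", e);
            }
            deserializer.configure(config, false);  // false = value deserializer
        }
        // No topic name is available here, another mismatch with
        // Deserializer#deserialize(String topic, byte[] data).
        return deserializer.deserialize(null, message);
    }

    @Override
    public boolean isEndOfStream(T nextElement) {
        return false;
    }

    @Override
    public TypeInformation<T> getProducedType() {
        return typeInfo;
    }
}
```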
    If you are going to change the connector API, it would be good to tackle
this issue at the same time to avoid further changes later. The connector should
allow type parameterization of the Kafka consumer and producer, and should
use a pass-through Flink serde by default.
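    By "pass-through" I mean a serde that simply hands the raw bytes through, so the Kafka-native serializers configured on the client do the real work. A minimal sketch of what such a default could look like under the current interfaces (the `PassThroughSchema` name is mine; exact packages are as in the Flink 1.x streaming connectors):

```java
import org.apache.flink.api.common.typeinfo.PrimitiveArrayTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.streaming.util.serialization.DeserializationSchema;
import org.apache.flink.streaming.util.serialization.SerializationSchema;

public class PassThroughSchema implements DeserializationSchema<byte[]>, SerializationSchema<byte[]> {

    @Override
    public byte[] deserialize(byte[] message) {
        return message;     // hand the raw record value straight to the pipeline
    }

    @Override
    public byte[] serialize(byte[] element) {
        return element;     // and straight back out to the producer
    }

    @Override
    public boolean isEndOfStream(byte[] nextElement) {
        return false;
    }

    @Override
    public TypeInformation<byte[]> getProducedType() {
        return PrimitiveArrayTypeInfo.BYTE_PRIMITIVE_ARRAY_TYPE_INFO;
    }
}
```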