xbaran commented on a change in pull request #526: HIVE-21218: KafkaSerDe doesn't support topics created via Confluent
URL: https://github.com/apache/hive/pull/526#discussion_r254981251
##########
File path: kafka-handler/src/java/org/apache/hadoop/hive/kafka/KafkaSerDe.java
##########
@@ -369,6 +379,20 @@ private SubStructObjectInspector(StructObjectInspector baseOI, int toIndex) {
}
}
+ static class ConfluentAvroBytesConverter extends AvroBytesConverter {
+ ConfluentAvroBytesConverter(Schema schema) {
+ super(schema);
+ }
+
+ @Override
+ Decoder getDecoder(byte[] value) {
+ /**
+ * Confluent prepends a magic byte plus a 4-byte schema ID (5 bytes in total) before the Avro payload, so skip those bytes here.
+ */
+ return DecoderFactory.get().binaryDecoder(value, 5, value.length - 5, null);
Review comment:
The problem is that you have to provide the schema during `initialize`, and there is no record to read a schema ID from at that time. I was thinking about switching `bytesConverter` to a function/supplier or factory, giving it to the KafkaWritable, and resolving the bytes converter lazily. However, I'm quite new to Hive and need to dig deeper into how the low-level API works. I'd like to see Hive download schemas from the schema registry, but I need to think the concept through first.
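To make the idea concrete, here is a rough sketch of what a lazily resolved converter could look like; the class name `LazyConfluentAvroConverter` and the `schemaLookup` function are purely illustrative (not part of this patch), and a real version would likely back the lookup with a schema registry client:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.function.IntFunction;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericDatumReader;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.DatumReader;
import org.apache.avro.io.DecoderFactory;

class LazyConfluentAvroConverter {
  // Maps a Confluent schema ID to its Avro schema, e.g. via a schema registry client.
  private final IntFunction<Schema> schemaLookup;

  LazyConfluentAvroConverter(IntFunction<Schema> schemaLookup) {
    this.schemaLookup = schemaLookup;
  }

  GenericRecord read(byte[] value) throws IOException {
    ByteBuffer buffer = ByteBuffer.wrap(value);
    byte magic = buffer.get();      // byte 0: Confluent magic byte (0x0)
    int schemaId = buffer.getInt(); // bytes 1-4: schema ID as a big-endian int
    Schema writerSchema = schemaLookup.apply(schemaId);
    DatumReader<GenericRecord> reader = new GenericDatumReader<>(writerSchema);
    // The Avro payload starts right after the 5-byte Confluent header.
    return reader.read(null, DecoderFactory.get().binaryDecoder(value, 5, value.length - 5, null));
  }
}

This way the writer schema is resolved per record from the wire format rather than being fixed once in `initialize`.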
About compatibility: there are several compatibility modes in the schema registry. The mode can be set per topic, so you can set BACKWARD compatibility, for example.
And yes, you can have objects with entirely different schemas, not just different versions, in a single topic; I don't think that is a problem Hive can address.
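For reference, a minimal sketch of setting BACKWARD compatibility for one subject through the Schema Registry REST API; the registry URL and subject name below are just example values:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SetBackwardCompatibility {
  public static void main(String[] args) throws Exception {
    String registryUrl = "http://localhost:8081";  // example registry address
    String subject = "my_topic-value";             // example subject (usually <topic>-value)
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create(registryUrl + "/config/" + subject))
        .header("Content-Type", "application/vnd.schemaregistry.v1+json")
        .PUT(HttpRequest.BodyPublishers.ofString("{\"compatibility\": \"BACKWARD\"}"))
        .build();
    HttpResponse<String> response =
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.statusCode() + " " + response.body());  // expect 200 and the new config
  }
}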
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
With regards,
Apache Git Services