[GitHub] cricket007 commented on a change in pull request #526: HIVE-21218: KafkaSerDe doesn't support topics created via Confluent
cricket007 commented on a change in pull request #526: HIVE-21218: KafkaSerDe doesn't support topics created via Confluent
URL: https://github.com/apache/hive/pull/526#discussion_r255860546

## File path: kafka-handler/pom.xml

```
@@ -114,8 +114,21 @@
     1.7.25
     test
+
+      io.confluent
+      kafka-streams-avro-serde
```

Review comment:
That's just a serializer, not a Kafka **Streams** serde. You can relax the dependency to `kafka-avro-serializer` here instead; that's what I'm saying.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org

With regards,
Apache Git Services
[GitHub] cricket007 commented on a change in pull request #526: HIVE-21218: KafkaSerDe doesn't support topics created via Confluent
cricket007 commented on a change in pull request #526: HIVE-21218: KafkaSerDe doesn't support topics created via Confluent
URL: https://github.com/apache/hive/pull/526#discussion_r28896

## File path: kafka-handler/pom.xml

```
@@ -114,8 +114,21 @@
     1.7.25
     test
+
+      io.confluent
+      kafka-streams-avro-serde
```

Review comment:
You're not using Kafka Streams or the serde. Maybe you want `kafka-avro-serializer`? Or the schema registry client, like you had before?
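The narrower dependency the reviewer suggests could look like the sketch below in `kafka-handler/pom.xml`; the `${confluent.version}` property is illustrative, not taken from the patch:

```
<dependency>
  <groupId>io.confluent</groupId>
  <artifactId>kafka-avro-serializer</artifactId>
  <version>${confluent.version}</version>
</dependency>
```

`kafka-avro-serializer` pulls in the Schema Registry client transitively without dragging in Kafka Streams, which is the point of relaxing the dependency.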
[GitHub] cricket007 commented on a change in pull request #526: HIVE-21218: KafkaSerDe doesn't support topics created via Confluent
cricket007 commented on a change in pull request #526: HIVE-21218: KafkaSerDe doesn't support topics created via Confluent
URL: https://github.com/apache/hive/pull/526#discussion_r29248

## File path: kafka-handler/src/java/org/apache/hadoop/hive/kafka/KafkaSerDe.java

```
@@ -133,12 +134,24 @@
     Preconditions.checkArgument(!schemaFromProperty.isEmpty(), "Avro Schema is empty Can not go further");
     Schema schema = AvroSerdeUtils.getSchemaFor(schemaFromProperty);
     LOG.debug("Building Avro Reader with schema {}", schemaFromProperty);
-    bytesConverter = new AvroBytesConverter(schema);
+    bytesConverter = getByteConverterForAvroDelegate(schema, tbl);
   } else {
     bytesConverter = new BytesWritableConverter();
   }
 }

+  BytesConverter getByteConverterForAvroDelegate(Schema schema, Properties tbl) {
+    String avroByteConverterType = tbl.getProperty(AvroSerdeUtils.AvroTableProperties.AVRO_SERDE_TYPE
+        .getPropName(), "none");
+    int avroSkipBytes = Integer.getInteger(tbl.getProperty(AvroSerdeUtils.AvroTableProperties.AVRO_SERDE_SKIP_BYTES
+        .getPropName(), "5"));
+    switch ( avroByteConverterType ) {
+      case "confluent" : return new AvroSkipBytesConverter(schema, 5);
+      case "skip" : return new AvroSkipBytesConverter(schema, avroSkipBytes);
+      default : return new AvroBytesConverter(schema);
```

Review comment:
Would it be better if this were an enum rather than a string comparison?
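The enum the reviewer suggests could be sketched as below; the type and constant names are hypothetical, not taken from the patch under review.

```java
// Hypothetical sketch of an enum replacing the string-based switch.
public class ConverterTypeDemo {

  enum AvroByteConverterType {
    NONE, SKIP, CONFLUENT;

    // Maps the table-property string onto a constant, falling back to NONE
    // the same way the switch statement's default branch does.
    static AvroByteConverterType fromProperty(String value) {
      try {
        return valueOf(value.trim().toUpperCase());
      } catch (IllegalArgumentException e) {
        return NONE;
      }
    }
  }

  public static void main(String[] args) {
    System.out.println(AvroByteConverterType.fromProperty("confluent")); // CONFLUENT
  }
}
```

Besides avoiding typo-prone string literals, an enum makes the set of supported converter types discoverable in one place and lets the compiler flag unhandled cases.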
[GitHub] cricket007 commented on a change in pull request #526: HIVE-21218: KafkaSerDe doesn't support topics created via Confluent
cricket007 commented on a change in pull request #526: HIVE-21218: KafkaSerDe doesn't support topics created via Confluent
URL: https://github.com/apache/hive/pull/526#discussion_r28285

## File path: kafka-handler/src/java/org/apache/hadoop/hive/kafka/KafkaSerDe.java

```
@@ -131,20 +131,27 @@
     bytesConverter = new TextBytesConverter();
   } else if (delegateSerDe.getSerializedClass() == AvroGenericRecordWritable.class) {
     String schemaFromProperty = tbl.getProperty(AvroSerdeUtils.AvroTableProperties.SCHEMA_LITERAL.getPropName(), "");
-    String magicBitProperty = tbl.getProperty(AvroSerdeUtils.AvroTableProperties.AVRO_SERDE_MAGIC_BYTES
-        .getPropName(), Boolean.FALSE.toString());
     Preconditions.checkArgument(!schemaFromProperty.isEmpty(), "Avro Schema is empty Can not go further");
     Schema schema = AvroSerdeUtils.getSchemaFor(schemaFromProperty);
     LOG.debug("Building Avro Reader with schema {}", schemaFromProperty);
-    bytesConverter =
-        (Boolean.valueOf(magicBitProperty)) ?
-            new ConfluentAvroBytesConverter(schema) : new AvroBytesConverter(schema);
+    bytesConverter = getByteConverterForAvroDelegate(schema, tbl);
   } else {
     bytesConverter = new BytesWritableConverter();
   }
 }

+  BytesConverter getByteConverterForAvroDelegate(Schema schema, Properties tbl) {
+    String avroByteConverterType = tbl.getProperty(AvroSerdeUtils.AvroTableProperties.AVRO_SERDE_TYPE
+        .getPropName(), "none");
+    int avroSkipBytes = Integer.getInteger(tbl.getProperty(AvroSerdeUtils.AvroTableProperties.AVRO_SERDE_SKIP_BYTES
+        .getPropName(), "5"));
```

Review comment:
I'm not entirely sure this should have a default value.
[GitHub] cricket007 commented on a change in pull request #526: HIVE-21218: KafkaSerDe doesn't support topics created via Confluent
cricket007 commented on a change in pull request #526: HIVE-21218: KafkaSerDe doesn't support topics created via Confluent
URL: https://github.com/apache/hive/pull/526#discussion_r254993454

## File path: kafka-handler/src/java/org/apache/hadoop/hive/kafka/KafkaSerDe.java

```
@@ -369,6 +379,20 @@
     private SubStructObjectInspector(StructObjectInspector baseOI, int toIndex) {
     }
   }

+  static class ConfluentAvroBytesConverter extends AvroBytesConverter {
+    ConfluentAvroBytesConverter(Schema schema) {
+      super(schema);
+    }
+
+    @Override
+    Decoder getDecoder(byte[] value) {
+      /**
+       * Confluent prepends 5 bytes: a magic byte followed by 4 bytes representing the
+       * schema ID as an integer. These bytes are added before the value bytes.
+       */
+      return DecoderFactory.get().binaryDecoder(value, 5, value.length - 5, null);
```

Review comment:
> I'd like to see hive download schemas from schema registry

Linked to earlier, the `/schema` endpoint should be able to get the schema text; however, I suspect that it'll cause an HTTP request per Hive "stage"? For example, N requests if reading from N Kafka partitions, and more requests when trying to do fancy selects, unions, and joins?
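For context on the 5-byte offset in `binaryDecoder(value, 5, ...)`: Confluent's wire format is one zero magic byte, a 4-byte big-endian schema ID, then the Avro binary payload. A minimal stdlib-only sketch of reading that header (class and method names are mine, not from the patch):

```java
import java.nio.ByteBuffer;

public class ConfluentWireFormat {

  // Returns the Schema Registry ID embedded in a Confluent-framed record.
  // Layout: byte 0 = magic (0x0), bytes 1-4 = schema ID (big-endian int),
  // bytes 5+ = Avro binary payload -- hence the 5-byte offset when decoding.
  static int readSchemaId(byte[] value) {
    ByteBuffer buf = ByteBuffer.wrap(value); // big-endian by default
    byte magic = buf.get();
    if (magic != 0) {
      throw new IllegalArgumentException("Unexpected magic byte: " + magic);
    }
    return buf.getInt();
  }

  public static void main(String[] args) {
    byte[] framed = {0, 0, 0, 0, 42, 1, 2, 3}; // schema ID 42, then 3 payload bytes
    System.out.println(readSchemaId(framed)); // prints 42
  }
}
```

With the ID in hand, the `GET /schemas/ids/{id}` endpoint of the Schema Registry returns the writer schema text, which is what the discussion about per-stage HTTP requests is weighing.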
[GitHub] cricket007 commented on a change in pull request #526: HIVE-21218: KafkaSerDe doesn't support topics created via Confluent
cricket007 commented on a change in pull request #526: HIVE-21218: KafkaSerDe doesn't support topics created via Confluent
URL: https://github.com/apache/hive/pull/526#discussion_r254991986

## File path: kafka-handler/src/java/org/apache/hadoop/hive/kafka/KafkaSerDe.java

```
@@ -369,6 +379,20 @@
     private SubStructObjectInspector(StructObjectInspector baseOI, int toIndex) {
     }
   }

+  static class ConfluentAvroBytesConverter extends AvroBytesConverter {
+    ConfluentAvroBytesConverter(Schema schema) {
+      super(schema);
+    }
+
+    @Override
+    Decoder getDecoder(byte[] value) {
+      /**
+       * Confluent prepends 5 bytes: a magic byte followed by 4 bytes representing the
+       * schema ID as an integer. These bytes are added before the value bytes.
+       */
+      return DecoderFactory.get().binaryDecoder(value, 5, value.length - 5, null);
```

Review comment:
> About the compatibility, there are several compatibility modes in schema registry. It can be set for each topic and you can set BACKWARD compat

Right, but Hive will likely only work properly if the config really is set to backwards, or at least if the schema provided can read all the data for the offsets that are scanned.

> with literally different schemas

Maybe if more schema data were exposed, this would be easier to handle? For example, the Avro namespace plus the top-level record name?
[GitHub] cricket007 commented on a change in pull request #526: HIVE-21218: KafkaSerDe doesn't support topics created via Confluent
cricket007 commented on a change in pull request #526: HIVE-21218: KafkaSerDe doesn't support topics created via Confluent
URL: https://github.com/apache/hive/pull/526#discussion_r254990871

## File path: kafka-handler/src/resources/SimpleRecord.avsc

```
@@ -0,0 +1,13 @@
+{
+  "type" : "record",
+  "name" : "SimpleRecord",
+  "namespace" : "org.apache.hadoop.hive.kafka",
+  "fields" : [ {
```

Review comment:
It's not really relevant for this PR, but as part of the DDL we have `col_comment`:

```
CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name  -- (Note: TEMPORARY available in Hive 0.14.0 and later)
  [(col_name data_type [COMMENT col_comment], ... [constraint_specification])]
```

And if you look at the Avro spec, you'll see that `doc` can be added as a field, much like Javadoc comments for classes and fields.
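To illustrate the parallel, a `doc` attribute can sit at both the record and field level of an Avro schema. This variant of `SimpleRecord.avsc` is a sketch: the field name and doc strings are invented here, since the original schema is truncated above.

```
{
  "type" : "record",
  "name" : "SimpleRecord",
  "namespace" : "org.apache.hadoop.hive.kafka",
  "doc" : "Record-level documentation, analogous to a class Javadoc",
  "fields" : [ {
    "name" : "id",
    "type" : "string",
    "doc" : "Field-level documentation, analogous to COMMENT col_comment in the DDL"
  } ]
}
```

A schema-aware SerDe could in principle surface these `doc` strings as Hive column comments, which is the connection the reviewer is drawing.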