[ https://issues.apache.org/jira/browse/HUDI-2348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17405979#comment-17405979 ]

ASF GitHub Bot commented on HUDI-2348:
--------------------------------------

pratyakshsharma commented on a change in pull request #3485:
URL: https://github.com/apache/hudi/pull/3485#discussion_r697653489



##########
File path: website/blog/2021-08-16-kafka-custom-deserializer.md
##########
@@ -0,0 +1,47 @@
+---
+title: "Schema evolution with DeltaStreamer using KafkaSource"
+excerpt: "Evolve schema used in Kafkasource of DeltaStreamer to keep data up 
to date with business"
+author: sbernauer
+category: blog
+---
+
+The schema used for data exchange between services can change rapidly with new business requirements.
+Apache Hudi is often used in combination with Kafka as an event stream where all events are transmitted according to a record schema.
+In our case a Confluent schema registry is used to maintain the schema, and as the schema evolves, newer versions are updated in the schema registry.
+<!--truncate-->
+
+## What do we want to achieve?
+We have multiple instances of DeltaStreamer running, consuming many topics with different schemas and ingesting into multiple Hudi tables. DeltaStreamer is a utility in Hudi that assists in ingesting data from multiple sources like DFS, Kafka, etc. into Hudi. If interested, you can read more about the DeltaStreamer tool [here](https://hudi.apache.org/docs/writing_data#deltastreamer).
+Ideally every topic should be able to evolve its schema to match new business requirements. Producers start producing data with a new schema version, and the DeltaStreamer picks up the new schema and ingests the data with it. For this to work, we run our DeltaStreamer instances with the latest schema version available from the Schema Registry to ensure that we always use the freshest schema with all attributes.
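+As a rough sketch, a DeltaStreamer instance wired to the Schema Registry could be launched as below. The class names and property keys come from the Hudi utilities bundle and the Confluent clients, but the paths, topic name, and registry URL are placeholders, so treat this as illustrative rather than a drop-in command:
+```bash
+spark-submit \
+  --class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer \
+  hudi-utilities-bundle.jar \
+  --table-type COPY_ON_WRITE \
+  --source-class org.apache.hudi.utilities.sources.AvroKafkaSource \
+  --schemaprovider-class org.apache.hudi.utilities.schema.SchemaRegistryProvider \
+  --target-base-path /path/to/hudi/table \
+  --target-table my_table \
+  --props kafka-source.properties
+```
+The referenced `kafka-source.properties` would point the Kafka source at the topic and the schema provider at the latest schema version in the registry:
+```properties
+# Kafka source settings (illustrative values)
+hoodie.deltastreamer.source.kafka.topic=my_topic
+bootstrap.servers=kafka:9092
+# Always fetch the latest registered schema for the topic's value subject
+hoodie.deltastreamer.schemaprovider.registry.url=http://schema-registry:8081/subjects/my_topic-value/versions/latest
+```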
+A prerequisite is that all the mentioned schema evolutions must be `BACKWARD_TRANSITIVE` compatible (see [Schema Evolution and Compatibility of Avro Schema changes](https://docs.confluent.io/platform/current/schema-registry/avro.html)). This ensures that every record in the Kafka topic can always be read using the latest schema.
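+For illustration, a backward compatible evolution is typically just adding a new field with a default value. The `Event` record below is a made-up example, not our production schema; it only shows the shape of such a change, using the `myattribute` field discussed later in this post. Schema v1:
+```json
+{
+  "type": "record",
+  "name": "Event",
+  "fields": [
+    {"name": "id", "type": "string"}
+  ]
+}
+```
+Schema v2 adds `myattribute` as a nullable field with a default, so a reader on v2 can still decode every record written with v1:
+```json
+{
+  "type": "record",
+  "name": "Event",
+  "fields": [
+    {"name": "id", "type": "string"},
+    {"name": "myattribute", "type": ["null", "string"], "default": null}
+  ]
+}
+```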
+
+
+## What is the problem?
+The normal operation looks like this: multiple producers (or a single one) write records to the Kafka topic.
+In the regular flow of events, all records use the same schema v1 and are in sync with the schema registry.
+![Normal operation](/assets/images/blog/kafka-custom-deserializer/normal_operation.png)<br/>
+Things get complicated when a producer switches to a new writer schema v2 (in this case `Producer A`), while `Producer B` remains on schema v1. E.g. an attribute `myattribute` was added to the schema, resulting in schema version v2.
+DeltaStreamer is capable of handling such schema evolution if all incoming records were serialized with the evolved schema. But the complication is that some records are serialized with schema version v1 and some are serialized with schema version v2.
+
+![Schema evolution](/assets/images/blog/kafka-custom-deserializer/schema_evolution.png)<br/>
+The default deserializer used by Hudi, `io.confluent.kafka.serializers.KafkaAvroDeserializer`, uses the schema that the record was serialized with for deserialization. This causes Hudi to receive records with multiple different schemas from the Kafka client. E.g. Event #13 has the new attribute `myattribute`, while Event #14 does not have it. This makes things complicated and error-prone for Hudi.

Review comment:
       Event #14 dont has -> Event #14 does not have







> Publish a blog on schema evolution with KafkaAvroCustomDeserializer
> -------------------------------------------------------------------
>
>                 Key: HUDI-2348
>                 URL: https://issues.apache.org/jira/browse/HUDI-2348
>             Project: Apache Hudi
>          Issue Type: Improvement
>            Reporter: sivabalan narayanan
>            Assignee: sivabalan narayanan
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 0.10.0
>
>
> Publish a blog on schema evolution with KafkaAvroCustomDeserializer


