only have one such application / processing step.
> 4) Else go with the custom TypeInfo/Serializer. We can help you implement
> it. If you can do it yourself, it'd be awesome to post it as a response here
> for other users.
>
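A custom serializer along the lines of option 4 above typically writes only the registry schema ID instead of the full schema, and the deserializer resolves IDs through a small cache so the registry is contacted at most once per schema. A minimal, Flink-free sketch of that caching idea (all names here are hypothetical, not Flink or Confluent API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.IntFunction;

// Hypothetical schema cache: resolves a schema ID to its schema text once,
// then serves later lookups from memory instead of the registry.
public class SchemaCache {
    private final Map<Integer, String> byId = new ConcurrentHashMap<>();
    private final IntFunction<String> registryFetch; // an HTTP call in real code
    private int fetches = 0;

    public SchemaCache(IntFunction<String> registryFetch) {
        this.registryFetch = registryFetch;
    }

    // Fetch from the registry only on the first lookup of a given ID.
    public String schemaFor(int id) {
        return byId.computeIfAbsent(id, k -> {
            fetches++;
            return registryFetch.apply(k);
        });
    }

    public int fetchCount() { return fetches; }
}
```

In a real serializer the cached value would be a parsed Avro `Schema` used to build a `GenericDatumReader`; the string stands in for it to keep the sketch dependency-free.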
> On Mon, Mar 2, 2020 at 11:01 AM Nitish Pant <m
th a solution.
>
> On Mon, Mar 2, 2020 at 10:27 AM Nitish Pant <nitishpant...@gmail.com> wrote:
> Hi,
>
> Thanks for the replies. I get that it is not wise to use GenericRecord and
> that is what is causing the Kryo fallback, but then if not this, how should I
th to GenericRecord. That would be
>> the recommended way when you have multiple transformations between source and
>> sink.
>>
>> [1]
>> https://github.com/apache/flink/blob/master/flink-connectors/flink-hadoop-compatibility/src/main/java/org/apache/flin
Hi all,
I am trying to work with Flink to get Avro data from Kafka, for which the
schemas are stored in the Kafka schema registry. Since the producer for Kafka is
a totally different service (an MQTT consumer that sinks to Kafka), I can’t have
the schema with me at the consumer end. I read around and
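As background for the registry setup described above: consumers that don't ship with the schema usually rely on Confluent's wire format, where each Kafka message starts with a one-byte magic marker followed by a 4-byte big-endian schema ID, and the consumer fetches the writer schema from the registry by that ID. A minimal sketch of parsing that frame (class and method names are mine, not a real library API):

```java
import java.nio.ByteBuffer;

// Sketch of the Confluent wire format: byte 0x0, then a 4-byte
// big-endian schema ID, then the Avro-encoded payload.
public class WireFormat {
    private static final byte MAGIC = 0x0;

    // Returns the registry schema ID so the consumer can fetch the writer schema.
    public static int schemaId(byte[] message) {
        ByteBuffer buf = ByteBuffer.wrap(message);
        if (buf.get() != MAGIC) {
            throw new IllegalArgumentException("not Confluent-framed");
        }
        return buf.getInt();
    }

    // The Avro payload starts after the 5-byte header.
    public static int payloadOffset() {
        return 5;
    }
}
```

Confluent's `KafkaAvroDeserializer` does this parsing (plus the registry lookup and caching) for you; the sketch only shows what the framing looks like on the wire.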