Hi Davide,

I suppose it would be fine. The only difference I can see that may matter
is serialization. If no TypeInformation is provided for the records, Flink
falls back to its KryoSerializer, which handles abstract classes correctly
because it serializes the concrete runtime type of each record. This works
well in most cases.

For better compatibility you may want to use a custom serializer. In that
case you can call SingleOutputStreamOperator#returns(TypeInformation) with
your TypeInformation, for example:
    input.map(new MapToAnimal()).returns(new AnimalTypeInformation())
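As for surprises on the Java side: a single function typed against the base
class works the same way across all subclasses. Here is a minimal
self-contained sketch of that polymorphic-filter idea in plain Java (no
Flink dependencies; Animal, Dog, Cat, and the predicate are hypothetical
stand-ins for your classes, with java.util.function.Predicate playing the
role of FilterFunction<Animal>):

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Hypothetical stand-ins for your event classes.
abstract class Animal {
    abstract String name();
}

class Dog extends Animal {
    String name() { return "dog"; }
}

class Cat extends Animal {
    String name() { return "cat"; }
}

public class PolymorphicFilterSketch {
    public static void main(String[] args) {
        List<Animal> events = List.of(new Dog(), new Cat(), new Dog());

        // One Predicate<Animal> covers every subclass, just as one
        // FilterFunction<Animal> would in the Flink pipeline.
        Predicate<Animal> keepDogs = a -> a instanceof Dog;

        List<Animal> dogs = events.stream()
                .filter(keepDogs)
                .collect(Collectors.toList());

        System.out.println(dogs.size()); // prints 2
    }
}
```

The same typing argument applies to async functions and broadcast process
functions: as long as they are declared against Animal, adding Cat next to
Dog needs no changes to the operators themselves, only to serialization as
discussed above.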


On Thu, Nov 10, 2022 at 9:02 AM Davide Bolcioni via user <
user@flink.apache.org> wrote:

> Greetings,
> I am looking at Flink pipeline processing events consumed from a Kafka
> topic, which now needs to also consume events which have a different, but
> related, schema. Traditional Java OOP would suggest transitioning from
>
> class Dog { ... }
> new FilterFunction<Dog> { ... }
>
> to
>
> abstract class Animal { ... }
> class Dog extends Animal { ... }
> class Cat extends Animal { ... }
> new FilterFunction<Animal> { ... }
>
> but I am wondering if there is anything that might surprise the unwary
> down that road, considering that the code base also uses asynchronous
> functions and the broadcast pattern.
>
> Thank you in advance,
> Davide Bolcioni
> --
> There is no place like /home
>
