I think this originally comes from the fact that we need to match the input
TypeInfo against the generic signature, for example to figure out what "T"
means in a MapFunction<IN, T>.
That is the reason why Flink can support generic functions even though
there is type erasure at runtime.
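The mechanism can be sketched in plain Java: even though type arguments are erased at runtime, the generic signature of a concrete subclass survives in the class file and can be read back via reflection. The names below (MyMapFunction, StringToLength, ErasureDemo) are made up for illustration; Flink's actual TypeExtractor is considerably more involved.

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;

// Hypothetical stand-in for a generic user function interface.
interface MyMapFunction<I, O> {
    O map(I value);
}

// A concrete implementation pins the type variables in its class file.
class StringToLength implements MyMapFunction<String, Integer> {
    public Integer map(String value) { return value.length(); }
}

public class ErasureDemo {
    // Reads the actual type arguments from the implementing class's
    // generic signature, which erasure does NOT remove.
    static Type[] resolveTypeArguments(Class<?> impl) {
        for (Type t : impl.getGenericInterfaces()) {
            if (t instanceof ParameterizedType) {
                return ((ParameterizedType) t).getActualTypeArguments();
            }
        }
        throw new IllegalArgumentException("no generic interface found");
    }

    public static void main(String[] args) {
        Type[] resolved = resolveTypeArguments(StringToLength.class);
        System.out.println(resolved[0] + " -> " + resolved[1]);
        // prints: class java.lang.String -> class java.lang.Integer
    }
}
```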
On 09.11.2015 08:49, Aljoscha Krettek wrote:
In the case of the TupleTypeInfo subclass it only works because the equals
method of TupleTypeInfo is used, IMHO.
I've overridden the equals method to check specifically for my
implementation and not TupleTypeInfo, implemented a different serializer
I see Gyula’s point. In the case of the TupleTypeInfo subclass it only works
because the equals method of TupleTypeInfo is used, IMHO.
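The pitfall can be shown with a minimal sketch. The classes below (BaseInfo, NullableInfo, StrictNullableInfo) are hypothetical stand-ins, not Flink's actual TypeInformation hierarchy: an instanceof-based equals in the base class makes any subclass with the same arity compare equal "by accident", while a stricter override in the subclass breaks that comparison.

```java
import java.util.Objects;

// Stand-in for a base type info whose equals uses instanceof.
class BaseInfo {
    final int arity;
    BaseInfo(int arity) { this.arity = arity; }

    @Override
    public boolean equals(Object o) {
        // instanceof-based: any subclass with the same arity compares equal
        return o instanceof BaseInfo && ((BaseInfo) o).arity == this.arity;
    }
    @Override
    public int hashCode() { return Objects.hash(arity); }
}

// Subclass that does NOT override equals: validation comparing it
// against a plain BaseInfo succeeds, inheriting the base semantics.
class NullableInfo extends BaseInfo {
    NullableInfo(int arity) { super(arity); }
}

// Subclass with a stricter equals: now it is no longer equal to the
// base type, so an equality-based validation step would reject it.
class StrictNullableInfo extends BaseInfo {
    StrictNullableInfo(int arity) { super(arity); }

    @Override
    public boolean equals(Object o) {
        return o instanceof StrictNullableInfo
                && ((StrictNullableInfo) o).arity == this.arity;
    }
}
```

Note that the base equals is also asymmetric with respect to the strict subclass: `new BaseInfo(2).equals(new StrictNullableInfo(2))` is true while the reverse is false, which is exactly the kind of subtlety that makes equality-based validation of subclassed type infos fragile.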
Stupid implementation mistakes should be caught by the Java type checker. I
don’t think it would allow passing a Map to the map method if
the type of the DataS
The reason for input validation is to check if the Function is fully
compatible. Actually only the return types are necessary, but it
prohibits stupid implementation mistakes and undesired behavior.
E.g. if you implement a "class MyMapper extends MapFunction<String, String> {}" and use it for "env.fromEl
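What such an input check buys can be sketched with plain reflection. The MapFn interface and the inputCompatible helper below are hypothetical stand-ins, not Flink's API: the check compares the function's declared input type against the element type of the data it is applied to, and rejects a mismatch at job-construction time instead of failing later.

```java
import java.lang.reflect.ParameterizedType;
import java.lang.reflect.Type;

// Hypothetical stand-in for a user function interface.
interface MapFn<I, O> {
    O map(I in);
}

// A user function declared on String input.
class MyMapper implements MapFn<String, String> {
    public String map(String s) { return s.toUpperCase(); }
}

public class ValidationDemo {
    // Sketch of an input-type validation step: extract the declared
    // input type argument of the function class and compare it to the
    // element type of the (stand-in) data set.
    static boolean inputCompatible(Class<?> fnClass, Class<?> elementType) {
        for (Type t : fnClass.getGenericInterfaces()) {
            if (t instanceof ParameterizedType) {
                Type declaredInput =
                        ((ParameterizedType) t).getActualTypeArguments()[0];
                return declaredInput.equals(elementType);
            }
        }
        return false;
    }
}
```

With this check, applying MyMapper to String elements passes, while applying it to Integer elements is flagged immediately, which is the "stupid implementation mistake" the validation is meant to catch.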
On 08.11.2015 21:28, Gyula Fóra wrote:
Let's say I want to implement my own TupleTypeInfo that handles null
values, and I pass this TypeInfo in the returns call of an operation. This
will most likely fail when the next operation validates the input, although
I think it shouldn't.
So I just tried t
Hey All,
I am wondering why Function input types are validated?
This might become an issue if the user wants to write his own TypeInfo for
a type that Flink also handles natively.
Let's say I want to implement my own TupleTypeInfo that handles null
values, and I pass this type