Hey,
Thanks for the answer. That's what I've been observing but wanted to know
for sure.
Best Regards,
Dom.
Hi Dom,
AFAIK, the Table API will apply a key partitioner based on the join key for
the join operator ([id, data] and [number, metadata] in your case), so the
partitioner established by the KeyedStream is not respected.
Best,
Jark
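To illustrate the point above: a join operator has to co-locate rows from both inputs by the join key, so it applies its own hash partitioner on that key, and any partitioning the upstream KeyedStream established on a different key is discarded. The sketch below is plain Scala (not the Flink API); `partitionFor` and the tuple data are illustrative assumptions standing in for Flink's internal key-group hashing.

```scala
// Illustrative sketch (plain Scala, not Flink API): a join must route
// both inputs by the JOIN key, so any upstream keyBy on a different
// field cannot survive -- the join repartitions regardless.
object PartitionSketch {
  val parallelism = 4

  // Hypothetical hash partitioner, analogous in spirit to what the
  // join operator applies to both of its inputs.
  def partitionFor(key: Any): Int =
    math.floorMod(key.hashCode, parallelism)

  def main(args: Array[String]): Unit = {
    // Left rows are (id, data), right rows are (number, metadata).
    // Even if the left stream was keyed by `data` upstream, the join
    // condition id == number forces both sides to be partitioned by
    // that key instead.
    val left  = Seq((1, "a"), (2, "b"), (3, "c"))
    val right = Seq((1, "x"), (2, "y"), (3, "z"))

    // Rows with matching join keys always land in the same partition:
    for (((id, _), (number, _)) <- left.zip(right)) {
      assert(partitionFor(id) == partitionFor(number))
    }
    println("matching join keys are co-partitioned")
  }
}
```

This is why a downstream *KeyedProcessFunction* would need its own keyBy after the join, rather than relying on the pre-join partitioning.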
On Thu, 21 Jan 2021 at 21:39, Dominik Wosiński wrote:
Hey,
I was wondering whether it's currently possible to use a KeyedStream to
create a properly partitioned Table in Flink 1.11. I have a use case where I
want to first join two streams using Flink SQL and then process them via a
*KeyedProcessFunction*. So I do something like:
implicit val env =