Re: Previously working job fails on Flink 1.2.0

2017-02-21 Thread Steffen Hausmann
Thanks Stefan and Stephan for your comments. I changed the type of the field and now the job seems to be running again. And thanks Robert for filing the JIRA! Cheers, Steffen On 21 February 2017 at 18:36:41 CET, Robert Metzger wrote: >I've filed a JIRA for the problem:
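The thread doesn't show the exact change, but a minimal sketch of the kind of fix being described, replacing an array-typed key field with a small value class that has content-based equals/hashCode, might look like this (the Coordinates name and fields are hypothetical, not from the thread):

    // Before (problematic): an array key uses identity hashCode, so
    // keyBy on this field routes records inconsistently across key groups.
    // public final double[] position;

    // After: a hypothetical value class with content-based equals/hashCode.
    public class Coordinates {
        public final double x;
        public final double y;

        public Coordinates(double x, double y) {
            this.x = x;
            this.y = y;
        }

        @Override
        public boolean equals(Object o) {
            if (!(o instanceof Coordinates)) {
                return false;
            }
            Coordinates c = (Coordinates) o;
            return x == c.x && y == c.y;
        }

        @Override
        public int hashCode() {
            return 31 * Double.hashCode(x) + Double.hashCode(y);
        }
    }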

Re: Previously working job fails on Flink 1.2.0

2017-02-21 Thread Robert Metzger
I've filed a JIRA for the problem: https://issues.apache.org/jira/browse/FLINK-5874 On Tue, Feb 21, 2017 at 4:09 PM, Stephan Ewen wrote: > @Steffen: Yes, you currently cannot use arrays as keys. There is a check missing that gives you a proper error message for that.
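The root cause is plain Java semantics: arrays inherit Object's identity-based hashCode, so two arrays with identical contents almost always hash differently and would land in different key groups. A small standalone illustration (not from the thread):

    import java.util.Arrays;

    public class ArrayKeyHash {
        public static void main(String[] args) {
            double[] a = {1.0, 2.0};
            double[] b = {1.0, 2.0};

            // Identity-based hash: almost certainly differs for a and b,
            // which is why arrays make unusable Flink keys.
            System.out.println(a.hashCode() == b.hashCode()); // false

            // Content-based hash: what a usable key type must provide.
            System.out.println(Arrays.hashCode(a) == Arrays.hashCode(b)); // true
        }
    }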

Re: Previously working job fails on Flink 1.2.0

2017-02-21 Thread Stefan Richter
Hi, if your key is a double[], then even if the field is declared final, the key is still mutable because the array entries can be reassigned, and maybe that is what happened? You can check whether the following two points are in sync, hash-wise: KeyGroupStreamPartitioner::selectChannels and
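To make the point about mutability concrete, here is a small sketch (not from the thread) showing how a final array field can still change its content hash after a record has been partitioned:

    import java.util.Arrays;

    public class MutableKeyDemo {
        // final only pins the reference; the array contents stay writable.
        static final double[] key = {1.0, 2.0};

        public static void main(String[] args) {
            int before = Arrays.hashCode(key);
            key[0] = 42.0; // legal despite final: mutates the key in place
            int after = Arrays.hashCode(key);
            // The content hash changed, so the record would now route to a
            // different key group than the one holding its existing state.
            System.out.println(before == after); // false
        }
    }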

Re: Previously working job fails on Flink 1.2.0

2017-02-21 Thread Steffen Hausmann
Thanks for these pointers, Stefan. I've started a fresh job and didn't migrate any state from a previous execution. Moreover, all the fields of all the events I'm using are declared final. I've set a breakpoint to figure out which event is causing the problem, and it turns out that Flink

Re: Previously working job fails on Flink 1.2.0

2017-02-20 Thread Stefan Richter
Hi, Flink 1.2 partitions all keys into key groups, the atomic units for rescaling. This partitioning is done by hash partitioning and is also kept in sync with the routing of tuples to operator instances (each parallel instance of a keyed operator is responsible for some range of key groups).
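Roughly, the routing Stefan describes can be sketched as follows. This is a simplified illustration modeled on Flink's KeyGroupRangeAssignment, not the exact source; murmurHash below is a stand-in for Flink's MathUtils.murmurHash:

    // Simplified sketch of key-group routing. KeyGroupStreamPartitioner::
    // selectChannels relies on the same assignment, which is why sender
    // and receiver must agree hash-wise on every key.
    public class KeyGroupSketch {

        static int assignToKeyGroup(Object key, int maxParallelism) {
            // A deterministic key.hashCode() is essential: a key whose
            // hash changes (e.g. a mutated array) lands in the wrong group.
            return murmurHash(key.hashCode()) % maxParallelism;
        }

        static int operatorIndexForKeyGroup(int maxParallelism, int parallelism, int keyGroup) {
            // Each parallel instance owns a contiguous range of key groups.
            return keyGroup * parallelism / maxParallelism;
        }

        static int murmurHash(int code) {
            // Stand-in for org.apache.flink.util.MathUtils.murmurHash;
            // any well-mixed non-negative hash serves the illustration.
            int h = code * 0x9E3779B9;
            return h & Integer.MAX_VALUE;
        }

        public static void main(String[] args) {
            int maxParallelism = 128, parallelism = 4;
            int group = assignToKeyGroup("some-key", maxParallelism);
            System.out.println("key group " + group + " -> operator "
                    + operatorIndexForKeyGroup(maxParallelism, parallelism, group));
        }
    }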