Hi Flavio!
The cause is usually what the exception message says: too many duplicate keys.
The side that builds the hash table has one key occurring so often that not
all records with that key fit into memory together, even after multiple
out-of-core recursions.
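One quick way to confirm that kind of skew is to count how often each build-side join key occurs before the join runs. A minimal sketch in plain Java (the sample `keys` list and the key extraction are hypothetical placeholders for your actual build-side data):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class KeySkewCheck {
    public static void main(String[] args) {
        // Hypothetical sample of build-side join keys; replace with your data.
        List<String> keys = List.of("a", "b", "a", "a", "c", "a");

        // Count how often each key occurs.
        Map<String, Long> counts = keys.stream()
            .collect(Collectors.groupingBy(k -> k, Collectors.counting()));

        // Report the most frequent key. If one key dominates, every
        // recursive repartitioning step keeps all its records in the
        // same partition, so that partition never shrinks enough to
        // become memory resident.
        counts.entrySet().stream()
            .max(Map.Entry.comparingByValue())
            .ifPresent(e -> System.out.println(
                "most frequent key: " + e.getKey() + " x" + e.getValue()));
    }
}
```

If one key accounts for a large fraction of the build side, that is almost certainly the key triggering the exception.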
Here is a list of things to check:
-
Hi to all,
I'm getting this strange exception in my program; do you know what could be
causing it?
java.lang.RuntimeException: Hash join exceeded maximum number of
recursions, without reducing partitions enough to be memory resident.
Probably cause: Too many duplicate keys.
at
org.apache.flink.run