Dear all,

In the general case, iterative processing jobs usually contain one reduce task and multiple parallel processing tasks. In some cases, the state size in the reduce task may exceed the available memory, and it seems that Flink then switches directly to out-of-core mode. I am wondering whether it would be meaningful to support distributed shared memory access, so that large states could be maintained across multiple nodes instead of being spilled to disk. Thanks.
Regards, Yingjun
