Hi, I am using a 16-node Spark cluster with the following config: 1. Executor memory: 8 GB 2. Cores per executor: 5 3. Driver memory: 12 GB.
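For reference, the settings are passed roughly like this (a sketch only; the master, class name, and jar are placeholders, not the actual job):

```shell
# Sketch of the submit command; app class/jar and master are placeholders
spark-submit \
  --master yarn \
  --executor-memory 8G \
  --executor-cores 5 \
  --driver-memory 12G \
  --class com.example.StreamingApp \
  streaming-app.jar
```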
We have a streaming job. Most of the time there is no problem, but sometimes we get a heap memory (OutOfMemoryError) exception on executor-1. I do not understand this: the data size is the same, and the job receives a request and processes it normally, but then it suddenly starts giving out-of-memory errors. It throws the exception on one executor first, then on the other executors as well, and then it stops processing requests. Thanks, Amit