I am running a Spark cluster on Mesos. The module reads data from Kafka as a
DirectStream and pushes it into Elasticsearch after looking up names for the
IDs in Redis.
I have been getting this message in my worker logs:
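For context, the job is roughly shaped like the following sketch. This is not the poster's actual code; the broker address, topic, Redis host, and index names are all placeholders, and it assumes the Spark 1.6 `spark-streaming-kafka`, `elasticsearch-spark`, and Jedis libraries:

```scala
import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils
import org.elasticsearch.spark.rdd.EsSpark
import redis.clients.jedis.Jedis

object KafkaToEs {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("kafka-to-es")
    val ssc = new StreamingContext(conf, Seconds(10))

    // Direct (receiver-less) Kafka stream, as described in the post.
    val kafkaParams = Map("metadata.broker.list" -> "broker:9092")
    val stream = KafkaUtils.createDirectStream[
      String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("events"))

    stream.foreachRDD { rdd =>
      val enriched = rdd.mapPartitions { records =>
        // One Redis connection per partition; whether such per-batch
        // resources get closed is the first thing to check for a leak.
        val jedis = new Jedis("redis-host")
        val out = records.map { case (_, id) =>
          Map("id" -> id, "name" -> jedis.get(id)) // name lookup by ID
        }.toList.iterator // materialize before closing the connection
        jedis.close()
        out
      }
      EsSpark.saveToEs(enriched, "myindex/mytype")
    }

    ssc.start()
    ssc.awaitTermination()
  }
}
```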
*16/07/19 11:17:44 ERROR ResourceLeakDetector: LEAK: You are crea
Queue increases.
There is hardly any way back from there, other than killing the job and
starting it again.
Any idea what causes the resource leak? From what I found when I googled it,
this message seems to be related to Akka. I am using Spark 1.6.2.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/