Hi,

I am running Spark v1.6.1 on a single machine in standalone mode, with
64 GB RAM and 16 cores.

I have created five worker instances in order to get five executors, since in
standalone mode there cannot be more than one executor per worker node.

*Configuration*:

SPARK_WORKER_INSTANCES=5
SPARK_WORKER_CORES=1
SPARK_MASTER_OPTS="-Dspark.deploy.defaultCores=5"

All other settings in spark-env.sh are left at their defaults.

I am running a Spark Streaming direct Kafka job with a 1-minute batch
interval; it reads data from Kafka and, after some aggregation, writes the
results to MongoDB.
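
For context, the job is structured roughly like this (a simplified sketch of
what I am doing; the broker address, topic name, and the Mongo write are
placeholders, not my real values):

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

val conf = new SparkConf().setAppName("kafka-to-mongo")
val ssc = new StreamingContext(conf, Seconds(60))   // 1-minute batch interval

// placeholder broker and topic
val kafkaParams = Map("metadata.broker.list" -> "localhost:9092")
val topics = Set("events")

val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, topics)

stream.foreachRDD { rdd =>
  // per-batch aggregation
  val aggregated = rdd.map { case (_, value) => (value, 1L) }.reduceByKey(_ + _)
  aggregated.foreachPartition { part =>
    // write each partition to MongoDB here (connector-specific code omitted)
    part.foreach { case (key, count) => () }
  }
}

ssc.start()
ssc.awaitTermination()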

*Problems:*

> When I start the master and the slaves, one master process and five worker
> processes start, each consuming only about 212 MB of RAM. When I submit the
> job, five executor processes and one driver process are created, memory
> usage grows to about 8 GB in total, and it keeps growing slowly over time,
> even when there is no data to process.

I am also unpersisting the cached RDDs at the end of each batch and have set
spark.cleaner.ttl to 600, but memory still keeps growing.
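
To be concrete, the unpersist happens at the end of each batch, roughly like
this (variable names are illustrative):

// inside foreachRDD, after the per-batch aggregation
val aggregated = rdd.map { case (_, v) => (v, 1L) }.reduceByKey(_ + _).cache()
// ... write 'aggregated' to Mongo ...
aggregated.unpersist()   // release the cached blocks once the batch is written

// and on the driver, before creating the StreamingContext
conf.set("spark.cleaner.ttl", "600")   // periodic cleanup of old metadata and RDDs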

> One more thing: I have seen that SPARK-1706 has been merged, so why am I
> still unable to create multiple executors within a single worker? Also, in
> the spark-env.sh template, every executor-related setting is documented as
> applying to YARN mode only.
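
For what it is worth, this is the kind of setting I expected to start more
than one executor on a single multi-core worker after SPARK-1706; I am not
sure these are the right knobs for standalone mode, so treat the values as
guesses:

// set in the application's SparkConf before creating the context
val conf = new SparkConf()
  .set("spark.executor.cores", "1")    // cores per executor
  .set("spark.cores.max", "5")         // total cores for the application
  .set("spark.executor.memory", "2g")  // example value only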

I have also tried running one of the example programs, but I see the same problem.

Any help would be greatly appreciated,

Thanks



