Two questions regarding worker cleanup:
1) Is the best place to enable worker cleanup to set
export SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true -Dspark.worker.cleanup.interval=30"
in conf/spark-env.sh on each worker? Or is there a better place?
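For reference, here's a sketch of what that entry might look like with the TTL made explicit as well; the interval and TTL values below are just illustrative placeholders, not recommendations:

```shell
# conf/spark-env.sh on each worker node.
# Enables periodic cleanup of finished applications' work directories.
export SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true \
  -Dspark.worker.cleanup.interval=1800 \
  -Dspark.worker.cleanup.appDataTtl=604800"
```

(spark.worker.cleanup.interval is in seconds between cleanup sweeps; spark.worker.cleanup.appDataTtl is the age in seconds after which application data may be removed, 604800 being the 7-day default.)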
2) I see this has a default TTL of 7 days. I've got some Spark Streaming jobs
that run for more than 7 days. Would the cleanup honour currently running
applications and leave their data alone, or will it obliterate everything
older than 7 days? If the latter, what's a good approach to cleaning up,
given that my streaming app will be running for longer than the TTL period?
Thanks,
Ashic.