Hi, wondering if anyone could help with this. We use an EC2 cluster to run
Spark apps in standalone mode. By default the executor logs go to
/$spark_folder/work/. This folder is on the 10 GB root filesystem, so it
doesn't take long to fill up the whole filesystem.
My goal is to:
1. move the logging location to /mnt, where we have 37 GB of space.
2. rotate the log files, so that new ones replace the old ones after some
threshold.
Could you show me how to do this? I was trying to change the spark-env.sh
file, but I don't know if that's the best way to do it.
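For reference, this is the kind of change I was experimenting with in
conf/spark-env.sh on each worker. The paths and the TTL/size values are just
placeholders, not a confirmed working setup:

```shell
# conf/spark-env.sh on each worker (paths below are placeholders)

# Move the standalone worker's work directory (executor stdout/stderr logs)
# off the root filesystem onto the larger /mnt volume:
export SPARK_WORKER_DIR=/mnt/spark/work

# Ask the worker to periodically clean up old application directories.
# Note: this only removes directories of *stopped* applications.
export SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true \
  -Dspark.worker.cleanup.interval=1800 \
  -Dspark.worker.cleanup.appDataTtl=86400"
```

For rotating the logs of a running app, I also saw the executor log rolling
properties (conf/spark-defaults.conf), e.g. roll by size and keep the last
five files, but I haven't verified whether this fully covers my case:

```
spark.executor.logs.rolling.strategy         size
spark.executor.logs.rolling.maxSize          134217728
spark.executor.logs.rolling.maxRetainedFiles 5
```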
Thanks!
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/executor-logging-management-from-python-tp20198.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.