Hi. I'm running Spark on YARN without CGroups turned on, and I have two questions:
1. Does either Spark or YARN guarantee that my Spark tasks won't use more CPU cores than I've assigned? (I assume there are no guarantees; is that correct?)

2. What is the effect of setting --executor-cores when submitting a job? Apart from controlling parallelism, does it constrain the executor process in any way?
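To make the second question concrete, here is a minimal sketch of the kind of job I'm submitting (the object name, app name, core counts, and data sizes are just illustrative, not my real job). My understanding is that spark.executor.cores (the same setting as --executor-cores) mainly controls how many tasks can run concurrently per executor (executor cores divided by spark.task.cpus), and I'd like to confirm whether it constrains the executor process beyond that:

import org.apache.spark.sql.SparkSession

object CoresSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("cores-sketch")              // illustrative name
      .config("spark.executor.cores", "4")  // same effect as --executor-cores 4
      .config("spark.task.cpus", "1")       // default; 4 / 1 = up to 4 concurrent tasks per executor
      .getOrCreate()

    // Each task below could in principle spawn extra threads of its own;
    // my question is whether anything (Spark or YARN, without CGroups)
    // keeps such tasks from using more physical cores than the 4 requested.
    val count = spark.sparkContext
      .parallelize(1 to 1000000, numSlices = 8)
      .map(_ * 2)
      .count()
    println(count)

    spark.stop()
  }
}

I would submit it with something like spark-submit --executor-cores 4 and would expect the same behaviour as setting spark.executor.cores in the code above.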