I just realized that each --conf flag takes a single key=value pair, so every pair needs its own --conf on its own line. And somehow I needed
--conf "spark.cores.max=2" \
However, when it was
--conf "spark.deploy.defaultCores=2" \
then one job would take up all 16 cores on the box.
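
For completeness, here is a sketch of the submit invocation that behaves (the master URL, class, and jar below are placeholders, not my real ones):

# Caps this application at 2 of the box's 16 cores; URL/class/jar are placeholders:
spark-submit \
  --master spark://localhost:7077 \
  --class com.example.MyApp \
  --conf "spark.cores.max=2" \
  my-app.jar
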
What's the actual model here?
We've got 10 apps.
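
If I'm reading the standalone docs right, spark.deploy.defaultCores is a cluster-wide setting read by the master rather than a per-application one, which would explain why passing it through --conf had no effect. A sketch of where it would go (standard layout assumed; the value is illustrative):

# conf/spark-env.sh on the master; restart the master after editing:
SPARK_MASTER_OPTS="-Dspark.deploy.defaultCores=2"
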
Hi,
I'm running Spark Standalone on a single node with 16 cores. Master and 4
workers are running.
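
Roughly how the cluster was brought up, in case it matters (values are illustrative, and on older releases the worker script is start-slave.sh):

# Usually set in conf/spark-env.sh; tells the start script to launch
# 4 worker daemons on this one machine:
export SPARK_WORKER_INSTANCES=4
sbin/start-master.sh
sbin/start-worker.sh spark://localhost:7077
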
I'm trying to submit two applications via spark-submit and am getting the
following error when submitting the second one: "Initial job has not
accepted any resources; check your cluster UI to ensure that workers are
registered and have sufficient resources".
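
For reference, a sketch of the two submissions (class and jar names are placeholders); without a per-application cap such as spark.cores.max, the first app claims all 16 cores and the second waits indefinitely:

# First app takes every free core by default in standalone mode:
spark-submit --master spark://localhost:7077 --class com.example.AppOne app-one.jar
# Second app finds no free cores and keeps logging the warning above:
spark-submit --master spark://localhost:7077 --class com.example.AppTwo app-two.jar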