On the master node, I see this printed over and over in the
mesos-master.WARNING log file:
W0615 06:06:51.211262 8672 hierarchical_allocator_process.hpp:589] Using
the default value of 'refuse_seconds' to create the refused resources
filter because the input value is negative
That's what I see.
Did you look inside all logs? Mesos logs and executor logs?
Thanks
Best Regards
On Mon, Jun 15, 2015 at 7:09 PM, Gary Ogden gog...@gmail.com wrote:
My Mesos cluster has 1.5 CPU and 17GB free. If I set:
conf.set("spark.mesos.coarse", "true");
conf.set("spark.cores.max", "1");
in the SparkConf object, the job will run in the mesos cluster fine.
But if I comment out those settings so that it defaults to fine-grained
mode, the task never finishes.
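For context, here is a minimal sketch of the two setups being compared. The class name, app name, and Mesos master URL are placeholders, not from the thread; only the two `conf.set` properties come from the original message:

```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class MesosModeExample {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("mesos-mode-test")
                // Placeholder: replace with your actual Mesos master URL.
                .setMaster("mesos://master.example.com:5050");

        // Coarse-grained mode: Spark holds one long-running Mesos task per
        // node and caps total cores. This is the configuration that works.
        conf.set("spark.mesos.coarse", "true");
        conf.set("spark.cores.max", "1");

        // Removing the two lines above falls back to fine-grained mode
        // (the default in Spark 1.x), where each Spark task is launched as
        // its own Mesos task -- the case that never finishes here.

        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... run the job ...
        sc.stop();
    }
}
```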