Hi all,

I have two nodes, one as the master (*host1*) and the other as a worker (*host2*). I am using standalone mode.
After starting the master on host1, I run the following on host2:

$ export MASTER=spark://host1:7077
$ bin/run-example SparkPi 10

but I get this:
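For completeness, here is roughly how I launched the cluster beforehand (a sketch of my setup; the explicit Worker launch command on host2 is the manual method from the standalone docs, and I am assuming default ports):

```shell
# On host1: start the standalone master (listens on port 7077 by default)
./sbin/start-master.sh

# On host2: start a worker and register it with the master on host1
./bin/spark-class org.apache.spark.deploy.worker.Worker spark://host1:7077

# On host2: point the example runner at the master and run SparkPi
export MASTER=spark://host1:7077
bin/run-example SparkPi 10
```

The worker shows up as ALIVE in the master web UI at http://host1:8080 before I submit the job.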

14/10/14 21:54:23 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory

This warning repeats indefinitely and the job never makes progress.

How can I fix this?

Best Regards
Theo
