Any inputs?
When I see the following message, is there a way to check, through some
logs, which resource is insufficient?
[Timer-0] WARN org.apache.spark.scheduler.TaskSchedulerImpl -
Initial job has not accepted any resources; check your cluster UI to
ensure that workers are registered and have sufficient resources
Figured out the root cause. The master was randomly assigning ports to
the worker for communication. Because of the firewall on the master, the
worker couldn't send messages back to the master (presumably including
its resource details). Weirdly, the worker didn't even throw any error.
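For anyone hitting the same firewall issue: instead of opening all ports, you can pin the randomly-assigned ports to fixed values and allow just those through the firewall. Below is a hypothetical spark-env.sh fragment (the port numbers are arbitrary examples, not values from this thread); the driver-side ports can likewise be fixed via spark.driver.port and spark.blockManager.port in spark-defaults.conf.

```shell
#!/usr/bin/env bash
# Hypothetical spark-env.sh fragment: pin normally-random ports so a
# firewall rule can allow them explicitly.

# Port the worker listens on for master <-> worker communication
# (random by default).
export SPARK_WORKER_PORT=9919

# Worker web UI port (default 8081, shown here for completeness).
export SPARK_WORKER_WEBUI_PORT=9920

# Driver-side equivalents go in spark-defaults.conf, e.g.:
#   spark.driver.port        9930
#   spark.blockManager.port  9940
```

With the ports fixed, the firewall only needs to permit the master port (7077 by default) plus the pinned worker and driver ports, rather than the whole ephemeral range.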
On 8/6/2015 3:24 PM, Kushal
Hi
I have a Spark/Cassandra setup where I am using the Spark Cassandra Java
connector to query a table. So far, I have 1 Spark master node (2
cores) and 1 worker node (4 cores). Both of them have the following
spark-env.sh under conf/:
#!/usr/bin/env bash
export SPARK_LOCAL_IP=127.0.0.1