How do you start the Spark daemon, directly?
https://issues.apache.org/jira/browse/SPARK-11570
If that's the case, the solution is to start it via the script, but I didn't read the
whole thing. In my little world (currently a 2-machine cluster, soon moving to
300) I have the same issue with 1.4.1, and I thought it
Just interpreting: if you follow your link through to
https://github.com/apache/spark/pull/5173, they only say they are testing
with 3.4, so I'd say it's a safe bet that only 3.4 is supported from 1.5
forward.
I hope they retire this saying at year's end; I will use it for the official
last time.
Perhaps put the master IP address in this line and try again:
setMaster("spark://:7077").
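For reference, a minimal sketch of what the suggestion above looks like in a driver program (the IP 192.168.1.10 and the app name are placeholders; substitute your own master address):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Placeholder address: replace with the IP (or resolvable hostname)
// of your standalone master, as shown in its web UI on port 8080.
val conf = new SparkConf()
  .setAppName("MyApp")
  .setMaster("spark://192.168.1.10:7077")

val sc = new SparkContext(conf)
```

The host part in the master URL must match exactly what the master advertises, which is why a bare "spark://:7077" fails to associate.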
Replaced it with the hostname, but the way our host files are set up I
have to put the IP address there.
Gotcha, then you are also replacing the cluster IP. Missed that.
I would ask you to post the actual log files; not sure I'll be able to help,
but hopefully it gives more info that someone can work with :)
I was going off this, not sure if it gives you a clue:
http://doc.akka.io/api/akka/2.4.0/index.html#akka.remote.transport.Transport$$InvalidAssociationException
"Indicates that the association setup request is invalid, and it is
impossible to recover (malformed IP address, hostname, etc.)."
I
I am wondering about the same concept as the OP; did anyone have an answer
to this question? I can't see that Spark has loops built in, except looping
over a dataset of existing/known size. Thus I often create a "dummy"
ArrayList and pass it to parallelize to control how many times Spark will
run
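The dummy-collection trick described above looks roughly like this (a sketch, assuming an existing SparkContext `sc`; `numIterations` and `doWork` are made-up names for whatever you want repeated):

```scala
// Assumes a live SparkContext `sc` from an initialized Spark application.
val numIterations = 10

// The dummy collection's only purpose is its size: parallelizing it
// yields an RDD with numIterations elements, so the mapped function
// executes that many times, distributed across the cluster.
val dummy = (1 to numIterations).toList

val results = sc.parallelize(dummy)
  .map(i => doWork(i)) // doWork: the per-iteration task, defined elsewhere
  .collect()
```

It is a workaround rather than a real loop construct: the iteration count is fixed when the RDD is created, and there is no way to break out early based on a result.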