I am setting up a YARN cluster to run Spark applications on it, but I'm a
bit confused!

Suppose I have a 4-node YARN cluster with one ResourceManager and 3
NodeManagers, and Spark is installed on all 4 nodes.

Now my question is: when I want to submit a Spark application to the YARN
cluster, do the Spark daemons (both master and workers) need to be running,
or is it enough that just the ResourceManager and NodeManagers are running?
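
For reference, I'm planning to submit with something like the following
(the class name and jar are just placeholders for my actual application):

    # Submit to YARN; no standalone Spark master URL is given,
    # only --master yarn, so I assume YARN handles the scheduling.
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --class org.example.MyApp \
      my-app.jar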

Thanks
