1. I had never even heard of conf/slaves until this email, and I only see it referenced in the docs in the context of Spark Standalone, so I doubt it works on Mesos.
2. Yes. See the --kill option in spark-submit. Also, we're considering dropping the Spark dispatcher in DC/OS in favor of Metronome, which will be our consolidated method of running any one-off jobs. The dispatcher is really just a less maintained and more feature-sparse Metronome. If I were you, I would look into running Metronome rather than the dispatcher (or just run DC/OS).

On Mon, Nov 14, 2016 at 3:10 AM, Yu Wei <yu20...@hotmail.com> wrote:
> Hi Guys,
>
> Two questions about running Spark on Mesos.
>
> 1. Does the Spark configuration file conf/slaves still work when running
> Spark on Mesos?
> According to my observations, conf/slaves still took effect when running
> spark-shell, but it doesn't take effect when deploying in cluster mode.
> Is this expected behavior, or did I miss something?
>
> 2. Can I kill submitted jobs when running Spark on Mesos in cluster mode?
> I launched Spark on Mesos in cluster mode, then successfully submitted a
> long-running job. Now I want to kill that job. How can I do that? Is
> there a command similar to the one used when launching Spark on YARN?
>
> Thanks,
>
> Jared (韦煜)
> Software developer
> Interested in open source software, big data, Linux

--
Michael Gummelt
Software Engineer
Mesosphere
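
To make the --kill option concrete, an invocation in Mesos cluster mode could look something like the sketch below. The dispatcher host, port, and driver ID are placeholders; the actual driver ID is the one printed by spark-submit (or shown in the dispatcher UI) when the job was submitted.

```shell
# Submit a job in cluster mode via the Mesos dispatcher
# (dispatcher-host:7077 is a placeholder for your dispatcher endpoint).
spark-submit \
  --master mesos://dispatcher-host:7077 \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  http://path/to/examples.jar

# The submission returns a driver ID, e.g. driver-20161114031000-0001.
# Kill that driver with the same master URL and the --kill flag:
spark-submit \
  --master mesos://dispatcher-host:7077 \
  --kill driver-20161114031000-0001
```

Note that --kill goes through the dispatcher, so it only applies to cluster-mode submissions, which matches the question above.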