Re: Executors assigned to STS and number of workers in Stand Alone Mode

2016-08-03 Thread Sun Rui
--num-executors does not work in Standalone mode. Try --total-executor-cores instead. > On Jul 26, 2016, at 00:17, Mich Talebzadeh wrote:
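A minimal sketch of passing that flag when launching the Thrift server against a standalone master (the master URL, core count, and memory value here are illustrative, not taken from the thread):

    # Standalone mode: cap the total cores the application may claim.
    # --num-executors is YARN-only; standalone uses --total-executor-cores.
    $SPARK_HOME/sbin/start-thriftserver.sh \
      --master spark://localhost:7077 \
      --total-executor-cores 8 \
      --executor-memory 2g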

Re: Executors assigned to STS and number of workers in Stand Alone Mode

2016-08-03 Thread Michael Gummelt
> but Spark on Mesos is certainly lagging behind Spark on YARN regarding the features Spark uses off the scheduler backends -- security, data locality, queues, etc. If by security you mean Kerberos, we'll be upstreaming that to Apache Spark soon. It's been in DC/OS Spark for a while: https://gith

Re: Executors assigned to STS and number of workers in Stand Alone Mode

2016-07-25 Thread ayan guha
STS works on YARN, as a yarn-client application. One issue: STS is not HA-supported, though there was some discussion about making it HA, similar to Hive Server. So what we did was run STS on multiple nodes and tie them to a load balancer. On Tue, Jul 26, 2016 at 8:06 AM, Mich Talebzadeh wrote: >
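A sketch of launching STS as a yarn-client application, as described above (executor count and memory are illustrative assumptions):

    # Start the Spark Thrift Server on YARN in client mode
    $SPARK_HOME/sbin/start-thriftserver.sh \
      --master yarn \
      --deploy-mode client \
      --num-executors 4 \
      --executor-memory 2g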

Re: Executors assigned to STS and number of workers in Stand Alone Mode

2016-07-25 Thread Mich Talebzadeh
Correction. STS uses the same UI to display details about all processes running against it, which is helpful but gets crowded :) Dr Mich Talebzadeh LinkedIn * https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

Re: Executors assigned to STS and number of workers in Stand Alone Mode

2016-07-25 Thread Mich Talebzadeh
We also should remember that STS is a pretty useful tool. With JDBC you can use beeline, Zeppelin, SQuirreL and other tools against it. One thing I like to change is the UI port that the thrift server listens on; you can change it at startup using spark.ui.port. This is fixed at thrift startup and
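A sketch of overriding the UI port at startup, as mentioned above (port 4050 is an arbitrary example; the default is 4040):

    # Pin the Thrift Server's Spark UI to a known port at startup
    $SPARK_HOME/sbin/start-thriftserver.sh \
      --conf spark.ui.port=4050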

Re: Executors assigned to STS and number of workers in Stand Alone Mode

2016-07-25 Thread Jacek Laskowski
On Mon, Jul 25, 2016 at 10:57 PM, Mich Talebzadeh wrote: > Yarn promises the best resource management I believe. Having said that I have > not used Mesos myself. I'm glad you've mentioned it. I think Cloudera (and Hortonworks?) guys are doing a great job with bringing all the features of YARN

Re: Executors assigned to STS and number of workers in Stand Alone Mode

2016-07-25 Thread Mich Talebzadeh
Hi, Actually I started STS in local mode and that works. I have not tested YARN modes for STS, but certainly it appears that one can run these in any mode one wishes. Local mode has its limitations (all in one JVM and not taking advantage of scaling out), but one can run STS in local mode on the s
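A sketch of running STS in local mode and connecting with beeline (the default Thrift port 10000 is assumed):

    # Start STS locally, all executors inside a single JVM
    $SPARK_HOME/sbin/start-thriftserver.sh --master local[*]

    # Connect to it over JDBC with beeline
    beeline -u jdbc:hive2://localhost:10000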

Re: Executors assigned to STS and number of workers in Stand Alone Mode

2016-07-25 Thread Jacek Laskowski
Hi, That's interesting... What holds STS back from working on the other scheduler backends, e.g. YARN or Mesos? I haven't spent much time with it, but thought it's a mere Spark application. The property is spark.deploy.spreadOut = Whether the standalone cluster manager should spread applications out across nodes or try to consolidate them onto as few nodes as possible
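A sketch of flipping that property in spark-defaults.conf so the standalone master consolidates executors instead of spreading them (spark.deploy.spreadOut defaults to true):

    # conf/spark-defaults.conf
    # false = consolidate executors onto as few workers as possible
    spark.deploy.spreadOut  false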

Re: Executors assigned to STS and number of workers in Stand Alone Mode

2016-07-25 Thread Mich Talebzadeh
Thanks. As I understand it, STS only works in Standalone mode :( Dr Mich Talebzadeh LinkedIn * https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw * http://talebzadehmich.wordpress.com

Re: Executors assigned to STS and number of workers in Stand Alone Mode

2016-07-25 Thread Jacek Laskowski
Hi, My vague understanding of Spark Standalone is that it will take up all available workers for a Spark application (regardless of the command-line options). There was a property to disable it; can't remember it now though. PS. Yet another reason for YARN ;-) Jacek On 25 Jul 2016 6:17 p.m., "Mich Talebzadeh"

Executors assigned to STS and number of workers in Stand Alone Mode

2016-07-25 Thread Mich Talebzadeh
Hi, I am doing some tests. I have started Spark in Standalone mode. For simplicity I am using one node only with 8 workers and I have 12 cores. In spark-env.sh I set this:

    # Options for the daemons used in the standalone deploy mode
    export SPARK_WORKER_CORES=1 ## total number of cores to be used
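A sketch of what the full worker configuration for this setup might look like (8 one-core workers on a single node; the memory value is an illustrative assumption, and SPARK_WORKER_INSTANCES is the standard variable for running multiple workers per node):

    # conf/spark-env.sh -- standalone daemon options (illustrative values)
    export SPARK_WORKER_CORES=1       # cores each worker offers
    export SPARK_WORKER_INSTANCES=8   # run 8 worker processes on this node
    export SPARK_WORKER_MEMORY=2g     # memory each worker offers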