Re: Cores on Master

2014-11-21 Thread Prannoy
Hi,

You can also set the number of cores in the Spark application itself.

http://spark.apache.org/docs/1.0.1/spark-standalone.html
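For example, a minimal sketch of the application-level cap: the `spark.cores.max` property limits the total cores an application claims cluster-wide in standalone mode. The class name, jar, master URL, and core count below are illustrative placeholders, and the exact spark-submit flags depend on your Spark 1.x version.

```shell
# Submit an application that will claim at most 4 cores in total
# across the standalone cluster. All names and values here are
# placeholders for illustration.
./bin/spark-submit \
  --class com.example.MyApp \
  --master spark://master-host:7077 \
  --conf spark.cores.max=4 \
  my-app.jar
```

Note that `spark.cores.max` caps the application's total across the whole cluster, not the cores used on any one machine.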

On Wed, Nov 19, 2014 at 6:11 AM, Pat Ferrel [via Apache Spark User List] wrote:

 OK hacking the start-slave.sh did it


--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Cores-on-Master-tp19230p19475.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

Cores on Master

2014-11-18 Thread Pat Ferrel
I see the default and max cores settings, but these seem to control total cores 
per cluster.

My cobbled-together home cluster needs the Master machine to not use all its cores, or 
it may lock up (it does other things). Is there a way to cap the cores used 
on a particular cluster machine in standalone mode?
-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: Cores on Master

2014-11-18 Thread Pat Ferrel
Looks like I can do this by not using start-all.sh and instead starting each worker 
separately, passing '--cores n' when connecting it to the master? Is there no config/env way?
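A sketch of that per-worker route: the standalone Worker class accepts `--cores` and `--memory` flags in Spark 1.x. The host name and the values below are placeholders.

```shell
# Launch a standalone worker by hand with an explicit core cap,
# rather than letting start-all.sh give it every core on the box.
# master-host:7077, 4, and 8g are illustrative values.
./bin/spark-class org.apache.spark.deploy.worker.Worker \
  --cores 4 \
  --memory 8g \
  spark://master-host:7077
```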

On Nov 18, 2014, at 3:14 PM, Pat Ferrel p...@occamsmachete.com wrote:

I see the default and max cores settings but these seem to control total cores 
per cluster.

My cobbled together home cluster needs the Master to not use all its cores or 
it may lock up (it does other things). Is there a way to control max cores used 
for a particular cluster machine in standalone mode?



Re: Cores on Master

2014-11-18 Thread Pat Ferrel
This seems to work only on a 'worker', not the master. So I'm back to having no 
way to control cores on the master?
 
On Nov 18, 2014, at 3:24 PM, Pat Ferrel p...@occamsmachete.com wrote:

Looks like I can do this by not using start-all.sh but starting each worker 
separately passing in a '--cores n' to the master? No config/env way?




Re: Cores on Master

2014-11-18 Thread Pat Ferrel
OK, hacking start-slave.sh did it.

On Nov 18, 2014, at 4:12 PM, Pat Ferrel p...@occamsmachete.com wrote:

This seems to work only on a ‘worker’ not the master? So I’m back to having no 
way to control cores on the master?

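Per the standalone docs there is also a config/env route for the question above: `conf/spark-env.sh` is read independently on each machine, so setting `SPARK_WORKER_CORES` only in the master machine's copy caps the worker that start-all.sh launches on that machine. The value below is illustrative.

```shell
# conf/spark-env.sh on the master machine only:
# limit the colocated worker to 4 of the machine's cores,
# leaving the rest for the master daemon and other processes.
SPARK_WORKER_CORES=4
```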