Re: Spark-on-YARN architecture

2015-03-10 Thread Harika Matha
Thanks for the quick reply.

I am running the application in YARN client mode.
And I want to run the AM on the same node as the RM, in order to free up,
for computation, the node that would otherwise run the AM.

How can I get the AM to run on the same node as the RM?
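
One possible route, assuming a YARN version with node-label support (2.6+)
and a Spark version that exposes spark.yarn.am.nodeLabelExpression (added
after the release current when this thread was written): have the cluster
admin assign a label to the RM's host and pin the AM to that label. A
minimal sketch; "rm-node" is a hypothetical label name:

    import org.apache.spark.SparkConf

    // Sketch only: requires YARN node labels configured by the admin and
    // a Spark version supporting spark.yarn.am.nodeLabelExpression.
    // "rm-node" is a hypothetical label assigned to the RM's host.
    val conf = new SparkConf()
      .setAppName("PinAMNearRM")
      .set("spark.yarn.am.nodeLabelExpression", "rm-node")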


On Tue, Mar 10, 2015 at 3:49 PM, Sean Owen so...@cloudera.com wrote:

 In YARN cluster mode, there is no Spark master, since YARN is your
 resource manager. Yes, you could force your AM somehow to run on the
 same node as the RM, but why -- what do you think is faster about that?

 On Tue, Mar 10, 2015 at 10:06 AM, Harika matha.har...@gmail.com wrote:
  Hi all,
 
  I have a Spark cluster set up on YARN with 4 nodes (1 master and 3 slaves).
  When I run an application, YARN chooses, at random, one of the slaves to
  host the Application Master. This means that my final computation is being
  carried out on only two slaves, which decreases the performance of the
  cluster.
 
  1. Is this the correct configuration? What is the architecture of Spark
  on YARN?
  2. Is there a way in which I can run the Spark master, the YARN application
  master, and the resource manager on a single node? (So that I can use the
  three other nodes for the computation.)
 
  Thanks
  Harika
 
 
 
 
 



Re: Running multiple threads with same Spark Context

2015-02-25 Thread Harika Matha
Hi Yana,

I tried running the program after setting the property
spark.scheduler.mode to FAIR, but the result is the same as before. Are
there any other properties that have to be set?
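
For what it's worth, FAIR mode mainly changes how simultaneously running
jobs share resources once they are submitted; each thread can also be
pointed at its own pool via a thread-local property. A minimal sketch,
assuming a pool definition file at conf/fairscheduler.xml (the path and
the pool name "pool1" are illustrative, not from this thread):

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("FairPools")
      .set("spark.scheduler.mode", "FAIR")
      // Optional pool definitions; this path is illustrative.
      .set("spark.scheduler.allocation.file", "conf/fairscheduler.xml")
    val sc = new SparkContext(conf)

    // Called inside each worker thread: jobs submitted from that thread
    // then run in the named pool.
    sc.setLocalProperty("spark.scheduler.pool", "pool1")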


On Tue, Feb 24, 2015 at 10:26 PM, Yana Kadiyska yana.kadiy...@gmail.com
wrote:

 It's hard to tell. I have not run this on EC2, but the following worked
 for me. The only thing that I can think of is that the scheduling mode is
 set to:

 - *Scheduling Mode:* FAIR


 import java.util.concurrent.{ExecutorService, Executors}
 import scala.compat.Platform
 import org.apache.spark.Logging
 import org.apache.spark.sql.hive.HiveContext

 val pool: ExecutorService = Executors.newFixedThreadPool(poolSize)
 // The original sketch had a pseudo while-loop here; "jobs" stands in
 // for whatever collection of query strings is being drained.
 for ((currJob, i) <- jobs.zipWithIndex)
   pool.execute(new ReportJob(sqlContext, currJob, i))

 class ReportJob(sqlContext: HiveContext, query: String, id: Int)
     extends Runnable with Logging {

   def threadId = Thread.currentThread.getName + "\t"

   def run() {
     logInfo(s"* Running ${threadId} ${id}")
     val startTime = Platform.currentTime
     val resultSet = sqlContext.sql(query)
     // repartition returns a new DataFrame, so save the repartitioned copy.
     resultSet.repartition(1).saveAsParquetFile(s"hdfs:///tmp/${id}")
     logInfo(s"* DONE ${threadId} ${id} time: " + (Platform.currentTime - startTime))
   }
 }
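
 One small addition, not in the original: drain the pool before the driver
 exits, otherwise the JVM can shut down with jobs still queued. Assuming
 the same pool as above:

     // Stop accepting new work, then wait for submitted jobs to finish.
     pool.shutdown()
     pool.awaitTermination(1, java.util.concurrent.TimeUnit.HOURS)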


 On Tue, Feb 24, 2015 at 4:04 AM, Harika matha.har...@gmail.com wrote:

 Hi all,

 I have been running a simple SQL program on Spark. To test concurrency,
 I created 10 threads inside the program, all of them sharing the same
 SQLContext object. When I ran the program on my EC2 cluster using
 spark-submit, only 3 threads were running in parallel. I have repeated the
 test on different EC2 clusters (with different numbers of cores) and found
 that only 3 threads run in parallel on every cluster.

 Why is this behaviour seen? What does the number 3 signify?
 Is there any configuration parameter that I have to set if I want to run
 more threads concurrently?
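
 A hedged aside, not from this thread: the number of jobs that can make
 progress at once is bounded by free executor cores and the scheduler mode
 rather than by any fixed Spark constant, so the submit-time settings are
 worth checking. The keys below are standard Spark configuration
 properties; the values are examples only:

     import org.apache.spark.SparkConf

     // Illustrative values, not recommendations.
     val conf = new SparkConf()
       .setAppName("ConcurrentSQL")
       .set("spark.scheduler.mode", "FAIR") // let concurrent jobs share executors
       .set("spark.executor.cores", "4")    // cores per executor
       .set("spark.cores.max", "12")        // app-wide core cap (standalone mode)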

 Thanks
 Harika


