Re: Launching Spark Cluster Application through IDE

2015-03-20 Thread Akhil Das
From IntelliJ, you can use the remote debugging feature.
http://stackoverflow.com/questions/19128264/how-to-remote-debug-in-intellij-12-1-4

For remote debugging, you need to pass the following JVM options to the
process you want to debug:

-Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=4000,suspend=n

then configure your IDE to attach to that port (4000) as a remote debug
target.
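As a sketch of how those options reach the executor JVMs on a standalone cluster: spark-submit can forward them through the `spark.executor.extraJavaOptions` configuration key. The class name, master URL, and jar path below are placeholders for your own values.

```shell
# Sketch: attach a remote debugger to Spark executors.
# spark.executor.extraJavaOptions is a standard Spark config key; the
# application class, master URL, and jar path are placeholders.
./bin/spark-submit \
  --class Application \
  --master spark://master-host:7077 \
  --conf "spark.executor.extraJavaOptions=-Xdebug -Xrunjdwp:server=y,transport=dt_socket,address=4000,suspend=n" \
  ~/app.jar
```

With suspend=n the JVM starts without waiting; set suspend=y if you want the executor to block until the debugger attaches, so early code paths are not missed. To debug the driver instead, spark-submit's --driver-java-options flag takes the same JVM flags. Note that a fixed port will collide if more than one executor runs on the same host.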

Thanks
Best Regards

On Fri, Mar 20, 2015 at 9:46 AM, raggy raghav0110...@gmail.com wrote:

 I am trying to debug a Spark application on a cluster with a master and
 several worker nodes. I have successfully set up the master and worker nodes
 using the Spark standalone cluster manager. I downloaded the Spark binary
 distribution and used the following commands, executed from the Spark
 directory, to set up the master and worker nodes.

 command for launching master

 ./sbin/start-master.sh
 command for launching worker node

 ./bin/spark-class org.apache.spark.deploy.worker.Worker <master-URL>
 command for submitting application

 ./bin/spark-submit --class Application --master <master-URL> ~/app.jar
 Now, I would like to understand the flow of control through the Spark source
 code on the worker nodes when I submit my application (I just want to use one
 of the given examples that use reduce()). I am assuming I should set up Spark
 in Eclipse. The Eclipse setup link on the Apache Spark website seems to be
 broken. I would appreciate some guidance on setting up Spark and Eclipse to
 enable stepping through Spark source code on the worker nodes.

 If not Eclipse, I would be open to using some other IDE or approach that
 will enable me to step through Spark source code after launching my
 application.

 Thanks!







Launching Spark Cluster Application through IDE

2015-03-19 Thread raggy
I am trying to debug a Spark application on a cluster with a master and
several worker nodes. I have successfully set up the master and worker nodes
using the Spark standalone cluster manager. I downloaded the Spark binary
distribution and used the following commands, executed from the Spark
directory, to set up the master and worker nodes.

command for launching master

./sbin/start-master.sh
command for launching worker node

./bin/spark-class org.apache.spark.deploy.worker.Worker <master-URL>
command for submitting application

./bin/spark-submit --class Application --master <master-URL> ~/app.jar
Now, I would like to understand the flow of control through the Spark source
code on the worker nodes when I submit my application (I just want to use one
of the given examples that use reduce()). I am assuming I should set up Spark
in Eclipse. The Eclipse setup link on the Apache Spark website seems to be
broken. I would appreciate some guidance on setting up Spark and Eclipse to
enable stepping through Spark source code on the worker nodes.
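For reference, what those reduce() examples compute on the workers: each partition is folded locally with the given binary function, and the driver then folds the partial results. A plain-shell sketch of that two-step semantics (not the Spark API, just local arithmetic):

```shell
# Local sketch of RDD.reduce with addition: each "partition" is reduced on
# its own worker, then the driver reduces the partial results.
part1="1 2 3"
part2="4 5 6"

sum() {  # fold a whitespace-separated list of integers with +
  local total=0
  for n in $1; do total=$(( total + n )); done
  echo "$total"
}

p1=$(sum "$part1")      # worker 1 partial result -> 6
p2=$(sum "$part2")      # worker 2 partial result -> 15
total=$(( p1 + p2 ))    # driver combines partials -> 21
echo "$total"
```

Stepping through a job like this on a worker should land you in the partition-local fold, which is where breakpoints in the executor JVM are useful.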

If not Eclipse, I would be open to using some other IDE or approach that
will enable me to step through Spark source code after launching my
application.

Thanks!



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Launching-Spark-Cluster-Application-through-IDE-tp22155.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org