Re: [ Standalone Spark Cluster ] - Track node status

2016-06-09 Thread Mich Talebzadeh
Hi Rutuja, I am not certain whether such a tool exists, but opening a JIRA may be beneficial and would not do any harm. In the meantime you may look for a workaround. My understanding is that what you need is to monitor the health of the cluster? HTH Dr Mich Talebzadeh

Re: [ Standalone Spark Cluster ] - Track node status

2016-06-09 Thread Rutuja Kulkarni
Thanks again Mich! If there is no interface like a REST API or CLI for this, I would like to open a JIRA on exposing such a REST interface in Spark that would list all the worker nodes. Please let me know if this seems to be the right thing to do for the community. Regards, Rutuja

Re: [ Standalone Spark Cluster ] - Track node status

2016-06-08 Thread Mich Talebzadeh
The other way is to log in to the individual nodes and run jps, which will list the processes identified as Worker, e.g. 24819 Worker. You can also use jmonitor to see what they are doing resource-wise. You can of course write a small shell script to check that the Worker(s) are up and running on every node and alert if not.
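
A minimal sketch of such a check in Scala (using sys.process rather than shell, for consistency with the other examples here); the hostnames are placeholders, and it assumes passwordless ssh to each node with jps on the PATH:

    import scala.sys.process._
    import scala.util.Try

    val nodes = Seq("node1", "node2", "node3")   // placeholder hostnames
    for (node <- nodes) {
      // Run jps remotely; its output contains a line like "24819 Worker"
      // whenever a Worker JVM is alive on that node.
      val jps = Try(Seq("ssh", node, "jps").!!).getOrElse("")
      val status = if (jps.contains("Worker")) "UP" else "DOWN"
      println(s"$node: Worker $status")          // alert on DOWN as you see fit
    }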

Re: [ Standalone Spark Cluster ] - Track node status

2016-06-08 Thread Rutuja Kulkarni
Thank you for the quick response. So the workers section would list all the running worker nodes in the standalone Spark cluster? I was also wondering whether this is the only way to retrieve worker nodes, or is there something like a Web API or CLI I could use? Thanks. Regards, Rutuja

Re: [ Standalone Spark Cluster ] - Track node status

2016-06-08 Thread Mich Talebzadeh
Check port 8080 on the node where you started start-master.sh. HTH Dr Mich Talebzadeh
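
As a side note, the status the master UI serves on port 8080 is also available in machine-readable form at the /json path of the same server; this is an informal endpoint of the standalone master UI rather than a documented REST API, which may be why a formal interface was being discussed above. A short Scala sketch, with host and port as assumptions:

    import scala.io.Source

    // The payload includes a "workers" array with each worker's host, port,
    // state, cores and memory.
    val src = Source.fromURL("http://master-host:8080/json")
    try println(src.mkString) finally src.close()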

Re: Standalone spark

2015-02-25 Thread Sean Owen
Spark and Hadoop should be listed as 'provided' dependencies in your Maven or SBT build. That still makes them available at compile time.
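
A build.sbt sketch of what this describes; the version numbers are illustrative, not prescriptive:

    // Marked "provided": on the compile classpath, but excluded from the
    // assembly jar, since the cluster supplies these at runtime.
    libraryDependencies ++= Seq(
      "org.apache.spark"  %% "spark-core"    % "1.2.1" % "provided",
      "org.apache.hadoop" %  "hadoop-client" % "2.4.0" % "provided"
    )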

Re: Standalone spark

2015-02-25 Thread boci
Thanks dude... I think I will pull up a Docker container for integration tests.

Re: Standalone spark

2015-02-25 Thread Sean Owen
Yes, it has been on the books for a while: https://issues.apache.org/jira/browse/SPARK-2356. That one may just always be a known 'gotcha' on Windows; it's kind of a Hadoop gotcha. I don't know that Spark works 100% on Windows, and it isn't tested on Windows.

Re: Standalone Spark program

2014-12-18 Thread Akhil Das
You can build a jar of your project and add it to the SparkContext (sc.addJar("/path/to/your/project.jar")); then it will get shipped to the workers, and hence no ClassNotFoundException! Thanks Best Regards
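
A minimal sketch of this suggestion in Scala; the app name, master URL, and jar path are all placeholders:

    import org.apache.spark.{SparkConf, SparkContext}

    val conf = new SparkConf()
      .setAppName("StandaloneApp")             // hypothetical app name
      .setMaster("spark://master-host:7077")   // placeholder master URL
    val sc = new SparkContext(conf)

    // Ship the application jar to the executors so that user-defined classes
    // resolve there, avoiding ClassNotFoundException on the workers.
    sc.addJar("/path/to/your/project.jar")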

Re: Standalone Spark program

2014-12-18 Thread Andrew Or
Hey Akshat, What is the class that is not found? Is it a Spark class, or a class that you define in your own application? If the latter, then Akhil's solution should work (alternatively, you can also pass the jar through the --jars command line option of spark-submit). If it's a Spark class,
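
A sketch of the --jars route, here driving spark-submit from Scala via sys.process to stay in one language; the master URL, class name, and paths are placeholders:

    import scala.sys.process._

    // Equivalent to running spark-submit by hand; --jars distributes extra
    // application jars to the executors alongside the main application jar.
    val exitCode = Seq(
      "spark-submit",
      "--master", "spark://master-host:7077",   // placeholder master URL
      "--class",  "com.example.Main",           // hypothetical main class
      "--jars",   "/path/to/dep1.jar,/path/to/dep2.jar",
      "/path/to/your-app.jar"
    ).!                                         // runs and waits for exit
    println(s"spark-submit exited with code $exitCode")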

Re: Standalone spark cluster. Can't submit job programmatically - java.io.InvalidClassException

2014-09-08 Thread DrKhu
After wasting a lot of time, I've found the problem. Although I don't use Hadoop/HDFS in my application, the Hadoop client still matters. The problem was the hadoop-client version: it was different from the Hadoop version Spark was built for. Spark's Hadoop version was 1.2.1, but in my application that was
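
A build.sbt sketch of the fix: pin hadoop-client to the Hadoop version your Spark build targets (1.2.1, per this message); the Spark version shown is purely illustrative:

    libraryDependencies ++= Seq(
      "org.apache.spark"  %% "spark-core"    % "1.0.2" % "provided",  // illustrative
      // Must match the Hadoop version Spark was built against; a mismatch can
      // surface on the wire as java.io.InvalidClassException.
      "org.apache.hadoop" %  "hadoop-client" % "1.2.1"
    )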