Re: Unable to run a Standalone job

2014-06-05 Thread prabeesh k
Try the `sbt clean` command before building the app, or delete the .ivy2 and .sbt folders (not a good method). Then try to rebuild the project. On Thu, Jun 5, 2014 at 11:45 AM, Sean Owen so...@cloudera.com wrote: I think this is SPARK-1949 again: https://github.com/apache/spark/pull/906 I think this
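The advice above, sketched as shell commands; this is a sketch, assuming the default sbt/Ivy cache locations on a Unix-like system (note that ~/.sbt also holds global sbt configuration, which is why deleting it is called "not a good method"):

```
# Clean the project's build output first
sbt clean

# If resolution is still broken, remove the local dependency caches
# (heavy-handed: everything will be re-downloaded on the next build)
rm -rf ~/.ivy2
rm -rf ~/.sbt

# Then rebuild the project
sbt package
```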

Re: Unable to run a Standalone job([NOT FOUND ] org.eclipse.jetty.orbit#javax.mail.glassfish;1.4.1.v201005082020)

2014-06-05 Thread Shrikar archak
Hi Prabeesh/Sean, I tried both the steps you mentioned, but it looks like it's still not able to resolve it. [warn] [NOT FOUND ] org.eclipse.jetty.orbit#javax.transaction;1.1.1.v201105210645!javax.transaction.orbit (131ms) [warn] public: tried [warn]

Re: Unable to run a Standalone job

2014-06-04 Thread Shrikar archak
Hi All, Now that Spark version 1.0.0 is released, there should not be any problem with the local jars. Shrikars-MacBook-Pro:SimpleJob shrikar$ cat simple.sbt name := "Simple Project" version := "1.0" scalaVersion := "2.10.4" libraryDependencies ++= Seq("org.apache.spark" %% "spark-core" % "1.0.0",
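A complete simple.sbt along these lines might look as follows. This is a reconstruction: the quoted file is cut off after spark-core, so the spark-streaming line is an assumption based on the network-count example being discussed in this thread:

```scala
name := "Simple Project"

version := "1.0"

scalaVersion := "2.10.4"

// %% appends the Scala binary version, resolving e.g. spark-core_2.10
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.0.0",
  "org.apache.spark" %% "spark-streaming" % "1.0.0"
)
```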

Re: Unable to run a Standalone job

2014-05-23 Thread Jacek Laskowski
Hi Shrikar, How did you build Spark 1.0.0-SNAPSHOT on your machine? My understanding is that `sbt publishLocal` is not enough and you really need `sbt assembly` instead. Give it a try and report back. As to your build.sbt, upgrade Scala to 2.10.4 and "org.apache.spark" %% "spark-streaming" %
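The distinction drawn above can be sketched as two commands run from a Spark source checkout (the exact launcher script varied between Spark versions, so treat the `sbt/sbt` path as an assumption):

```
# Publishes Spark's individual module jars to the local Ivy repository,
# so a project can depend on e.g. spark-core 1.0.0-SNAPSHOT
sbt/sbt publish-local

# Builds the single assembly (fat) jar that a running Spark actually needs
sbt/sbt assembly
```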

Re: Unable to run a Standalone job

2014-05-23 Thread Shrikar archak
Still the same error, no change. Thanks, Shrikar On Fri, May 23, 2014 at 2:38 PM, Jacek Laskowski ja...@japila.pl wrote: Hi Shrikar, How did you build Spark 1.0.0-SNAPSHOT on your machine? My understanding is that `sbt publishLocal` is not enough and you really need `sbt assembly` instead.

Unable to run a Standalone job

2014-05-22 Thread Shrikar archak
Hi All, I am trying to run the network count example as a separate standalone job and am running into some issues. Environment: 1) Mac Mavericks 2) Latest Spark repo from GitHub. I have a structure like this: Shrikars-MacBook-Pro:SimpleJob shrikar$ find . . ./simple.sbt ./src ./src/main

Re: Unable to run a Standalone job

2014-05-22 Thread Tathagata Das
How are you launching the application? sbt run? spark-submit? Local mode or a Spark standalone cluster? Are you packaging all your code into a jar? It looks to me like you have Spark classes in your execution environment but are missing some of Spark's dependencies. TD On Thu, May 22, 2014 at

Re: Unable to run a Standalone job

2014-05-22 Thread Shrikar archak
I am running it with sbt run, locally. Thanks, Shrikar On Thu, May 22, 2014 at 3:53 PM, Tathagata Das tathagata.das1...@gmail.com wrote: How are you launching the application? sbt run? spark-submit? local mode or Spark standalone cluster? Are you packaging all your code into a

Re: Unable to run a Standalone job

2014-05-22 Thread Tathagata Das
How are you getting Spark with 1.0.0-SNAPSHOT through Maven? Did you publish Spark locally, which allowed you to use it as a dependency? This is weird indeed. SBT should take care of all the dependencies of Spark. In any case, you can try the last released Spark 0.9.1 and see if the problem
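Pinning the build to the released 0.9.1 instead of the locally published snapshot would be a one-line change in the build definition; this sketch assumes the Scala 2.10 artifacts published to Maven Central:

```scala
// Released artifact from Maven Central instead of a locally published SNAPSHOT
libraryDependencies += "org.apache.spark" %% "spark-core" % "0.9.1"
```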

Re: Unable to run a Standalone job

2014-05-22 Thread Shrikar archak
Yes, I did an sbt publish-local. OK, I will try with Spark 0.9.1. Thanks, Shrikar On Thu, May 22, 2014 at 8:53 PM, Tathagata Das tathagata.das1...@gmail.com wrote: How are you getting Spark with 1.0.0-SNAPSHOT through maven? Did you publish Spark locally which allowed you to use it as a

Re: Unable to run a Standalone job

2014-05-22 Thread Soumya Simanta
Try cleaning your Maven (.m2) and Ivy caches. On May 23, 2014, at 12:03 AM, Shrikar archak shrika...@gmail.com wrote: Yes I did a sbt publish-local. Ok I will try with Spark 0.9.1. Thanks, Shrikar On Thu, May 22, 2014 at 8:53 PM, Tathagata Das tathagata.das1...@gmail.com wrote:
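Rather than wiping both caches wholesale, one can target just the Spark artifacts so only they are re-resolved on the next build; a minimal sketch, assuming the default cache locations:

```
# Remove only the cached Spark artifacts from the Maven and Ivy caches;
# sbt/maven will re-download or re-resolve them on the next build
rm -rf ~/.m2/repository/org/apache/spark
rm -rf ~/.ivy2/cache/org.apache.spark
```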

Re: Unable to run a Standalone job

2014-05-22 Thread Shrikar archak
Hi, I tried clearing the Maven and Ivy caches, and I am a bit confused at this point. 1) Running the example from the Spark directory using bin/run-example: it works fine and prints the word counts. 2) Trying to run the same code as a separate job. *) Using the latest