Hi,

   A very noob question: here is my code in Eclipse:
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._

object HelloWorld {
  def main(args: Array[String]) {
    println("Hello, world!")
    // args(0) = Spark home, args(1) = path to this application jar,
    // args(2) = input path, args(3) = output path
    // "local" runs in-process; use spark://host:7077 for a standalone cluster
    val sc = new SparkContext("local", "wordcount", args(0), Seq(args(1)))
    val file = sc.textFile(args(2))
    file.flatMap(_.split(" "))      // split each line into words
        .map(word => (word, 1))     // pair each word with a count of 1
        .reduceByKey(_ + _)         // sum the counts per word
        .saveAsTextFile(args(3))
  }
}

I set up the build path and added the Spark jars to the project, and the
errors went away (though Eclipse wasn't offering the RDD methods in
autocomplete: typing file. gave generic options like arr, asof, etc. instead
of map, etc.).
But now that I have built a jar, how do I run it against Spark?
In Hadoop, all I need to do is create the jar and run: hadoop jar jarname
classname args...
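
Is the Spark equivalent something like the command below? This is just my
guess, assuming a Spark build that ships bin/spark-submit; HelloWorld,
wordcount.jar, and the paths are placeholders for my class, jar, and
arguments.

# rough guess at the Spark-side equivalent of "hadoop jar jarname classname args";
# the four trailing values are the args(0)..args(3) my main() reads
./bin/spark-submit \
  --class HelloWorld \
  --master local[2] \
  wordcount.jar \
  /opt/spark target/wordcount.jar hdfs:///input.txt hdfs:///wordcount-out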

Also, what if I have to include third-party libraries? In Hadoop I used to
implement ToolRunner and use the -libjars option.
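Is the equivalent here the --jars flag (again just a guess, with placeholder
paths), or should I build a single fat jar with everything in it?

# guess: hand the extra jars to spark-submit the way -libjars does in Hadoop;
# --jars takes a comma-separated list of jars to ship with the application
./bin/spark-submit \
  --class HelloWorld \
  --master local[2] \
  --jars /path/to/thirdparty-a.jar,/path/to/thirdparty-b.jar \
  wordcount.jar \
  /opt/spark target/wordcount.jar hdfs:///input.txt hdfs:///wordcount-out
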
Any suggestions?
Thanks
