I am sorry, Sean.

I am developing the code in IntelliJ IDEA. With the above dependencies I am
not able to find *groupByKey* when I search with Ctrl+<space> completion.
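A likely cause, offered here as an assumption rather than a confirmed diagnosis: in Spark 1.x, pair-RDD methods such as `groupByKey` live on `PairRDDFunctions` and only become visible through an implicit conversion brought into scope by `import org.apache.spark.SparkContext._`. Without that import, neither the compiler nor IDE completion can see the method. The toy classes below (`MyRDD`, `PairOps`) are hypothetical stand-ins that sketch the same mechanism without depending on Spark:

```scala
// Sketch of the implicit-conversion pattern Spark 1.x uses for pair RDDs.
// MyRDD plays the role of RDD[(K, V)]; PairOps plays the role of
// PairRDDFunctions, reachable only once the implicit is imported.
class MyRDD[T](val data: Seq[T])

object Implicits {
  // Analogous to SparkContext.rddToPairRDDFunctions in Spark 1.x
  implicit class PairOps[K, V](rdd: MyRDD[(K, V)]) {
    def groupByKey: Map[K, Seq[V]] =
      rdd.data.groupBy(_._1).map { case (k, kvs) => (k, kvs.map(_._2)) }
  }
}

object Demo {
  def main(args: Array[String]): Unit = {
    val rdd = new MyRDD(Seq(("a", 1), ("a", 2), ("b", 3)))
    // groupByKey compiles (and shows up in completion) only because
    // Implicits.PairOps is in scope here:
    import Implicits._
    val grouped = rdd.groupByKey
    println(grouped("a")) // → List(1, 2)
  }
}
```

The spark-shell pre-imports `SparkContext._` automatically, which would explain why the same code works there but not in a standalone project.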


On Sat, Jan 31, 2015 at 2:04 AM, Sean Owen <so...@cloudera.com> wrote:

> When you post a question anywhere, and say "it's not working", you
> *really* need to say what that means.
>
> On Fri, Jan 30, 2015 at 8:20 PM, Amit Behera <amit.bd...@gmail.com> wrote:
> > hi all,
> >
> > my sbt file is like this:
> >
> > name := "Spark"
> >
> > version := "1.0"
> >
> > scalaVersion := "2.10.4"
> >
> > libraryDependencies += "org.apache.spark" %% "spark-core" % "1.1.0"
> >
> > libraryDependencies += "net.sf.opencsv" % "opencsv" % "2.3"
> >
> >
> > code:
> >
> > import au.com.bytecode.opencsv.CSVParser
> > import org.apache.spark.{SparkConf, SparkContext}
> > import org.apache.spark.SparkContext._
> >
> > object SparkJob
> > {
> >
> >   def pLines(lines: Iterator[String]) = {
> >     val parser = new CSVParser()
> >     lines.map { l =>
> >       val vs = parser.parseLine(l)
> >       (vs(0), vs(1).toInt)
> >     }
> >   }
> >
> >   def main(args: Array[String]) {
> >     val conf = new SparkConf().setAppName("Spark Job").setMaster("local")
> >     val sc = new SparkContext(conf)
> >     val data = sc.textFile("/home/amit/testData.csv").cache()
> >     val result = data.mapPartitions(pLines).groupByKey
> >     //val list = result.filter(x=> {(x._1).contains("24050881")})
> >
> >   }
> >
> > }
> >
> >
> > Here groupByKey is not working, but the same code works from the
> > spark-shell.
> >
> > Please help me.
> >
> >
> > Thanks
> >
> > Amit
>
