Hi Charles,

I forgot to mention that I imported the following:

import au.com.bytecode.opencsv.CSVParser

import org.apache.spark._
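
For reference: import org.apache.spark._ pulls in the classes of that
package but not the implicits defined on the SparkContext companion object,
and it is one of those implicits (rddToPairRDDFunctions) that adds
groupByKey to an RDD of pairs. The spark-shell does that import
automatically, which is why the same code works there. A minimal sketch of
the job with the missing import added (assuming Spark 1.1.0 and opencsv 2.3
as in the sbt file below; the final collect line is a hypothetical action
added only to materialize the result):

import au.com.bytecode.opencsv.CSVParser
import org.apache.spark._
// Brings the RDD implicits (e.g. rddToPairRDDFunctions) into scope,
// making groupByKey available on RDD[(String, Int)].
import org.apache.spark.SparkContext._

object SparkJob {

  // Parse each partition with a single CSVParser instance and emit
  // (first column, second column as Int) pairs.
  def pLines(lines: Iterator[String]) = {
    val parser = new CSVParser()
    lines.map { l =>
      val vs = parser.parseLine(l)
      (vs(0), vs(1).toInt)
    }
  }

  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("Spark Job").setMaster("local")
    val sc = new SparkContext(conf)
    val data = sc.textFile("/home/amit/testData.csv").cache()
    val result = data.mapPartitions(pLines).groupByKey
    result.collect().foreach(println) // hypothetical: force evaluation and print
  }
}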

On Sat, Jan 31, 2015 at 2:09 AM, Charles Feduke <charles.fed...@gmail.com>
wrote:

> Define "not working". Not compiling? If so you need:
>
> import org.apache.spark.SparkContext._
>
>
> On Fri Jan 30 2015 at 3:21:45 PM Amit Behera <amit.bd...@gmail.com> wrote:
>
>> hi all,
>>
>> my sbt file is like this:
>>
>> name := "Spark"
>>
>> version := "1.0"
>>
>> scalaVersion := "2.10.4"
>>
>> libraryDependencies += "org.apache.spark" %% "spark-core" % "1.1.0"
>>
>> libraryDependencies += "net.sf.opencsv" % "opencsv" % "2.3"
>>
>>
>> *code:*
>>
>> object SparkJob {
>>
>>   def pLines(lines: Iterator[String]) = {
>>     val parser = new CSVParser()
>>     lines.map { l =>
>>       val vs = parser.parseLine(l)
>>       (vs(0), vs(1).toInt)
>>     }
>>   }
>>
>>   def main(args: Array[String]) {
>>     val conf = new SparkConf().setAppName("Spark Job").setMaster("local")
>>     val sc = new SparkContext(conf)
>>     val data = sc.textFile("/home/amit/testData.csv").cache()
>>     val result = data.mapPartitions(pLines).groupByKey
>>     //val list = result.filter(x=> {(x._1).contains("24050881")})
>>
>>   }
>>
>> }
>>
>>
>> Here groupByKey is not working, but the same code works from the
>> *spark-shell*.
>>
>> Please help me
>>
>>
>> Thanks
>>
>> Amit
>>
>>
