Thank you very much Charles, I got it  :)


On Sat, Jan 31, 2015 at 2:20 AM, Charles Feduke <charles.fed...@gmail.com>
wrote:

> You'll still need to:
>
> import org.apache.spark.SparkContext._
>
> Importing org.apache.spark._ does _not_ recurse into sub-objects or
> sub-packages; it only brings in whatever is at the level of the package or
> object imported.
>
> SparkContext._ has some implicits, one of them for adding groupByKey to an
> RDD of pairs (RDD[(K, V)]) IIRC.
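>
> A minimal sketch of what I mean (untested, reusing the pLines helper from
> your code below):
>
> import org.apache.spark.{SparkConf, SparkContext}
> import org.apache.spark.SparkContext._   // the implicits, e.g. RDD[(K, V)] -> pair-RDD operations
>
> val sc = new SparkContext(new SparkConf().setAppName("Spark Job").setMaster("local"))
> val data = sc.textFile("/home/amit/testData.csv")
> val result = data.mapPartitions(pLines).groupByKey()   // now resolves at compile time and in the IDE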
>
>
> On Fri Jan 30 2015 at 3:48:22 PM Stephen Boesch <java...@gmail.com> wrote:
>
>> Amit - IJ will not find it until you add the import, as Sean mentioned.
>> The import includes implicits that IntelliJ will not otherwise know about.
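>>
>> Concretely, here is roughly what that implicit does (a sketch; in 1.x the
>> conversion is rddToPairRDDFunctions on the SparkContext companion object):
>>
>> import org.apache.spark.SparkContext
>> import org.apache.spark.rdd.{RDD, PairRDDFunctions}
>>
>> val pairs: RDD[(String, Int)] = sc.parallelize(Seq(("a", 1), ("a", 2), ("b", 3)))
>> // With "import org.apache.spark.SparkContext._" in scope the compiler
>> // inserts this wrapping for you, which is what gives you groupByKey:
>> val withPairOps: PairRDDFunctions[String, Int] = SparkContext.rddToPairRDDFunctions(pairs)
>> val grouped = withPairOps.groupByKey()   // RDD[(String, Iterable[Int])]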
>>
>> 2015-01-30 12:44 GMT-08:00 Amit Behera <amit.bd...@gmail.com>:
>>
>>> I am sorry, Sean.
>>>
>>> I am developing the code in IntelliJ IDEA, so with the above dependencies I
>>> am not able to find *groupByKey* when I search with Ctrl+<Space>.
>>>
>>>
>>> On Sat, Jan 31, 2015 at 2:04 AM, Sean Owen <so...@cloudera.com> wrote:
>>>
>>>> When you post a question anywhere, and say "it's not working", you
>>>> *really* need to say what that means.
>>>>
>>>>
>>>> On Fri, Jan 30, 2015 at 8:20 PM, Amit Behera <amit.bd...@gmail.com>
>>>> wrote:
>>>> > hi all,
>>>> >
>>>> > my sbt file is like this:
>>>> >
>>>> > name := "Spark"
>>>> >
>>>> > version := "1.0"
>>>> >
>>>> > scalaVersion := "2.10.4"
>>>> >
>>>> > libraryDependencies += "org.apache.spark" %% "spark-core" % "1.1.0"
>>>> >
>>>> > libraryDependencies += "net.sf.opencsv" % "opencsv" % "2.3"
>>>> >
>>>> >
>>>> > code:
>>>> >
>>>> > import org.apache.spark._
>>>> > import au.com.bytecode.opencsv.CSVParser
>>>> >
>>>> > object SparkJob
>>>> > {
>>>> >
>>>> >   def pLines(lines: Iterator[String]) = {
>>>> >     val parser = new CSVParser()
>>>> >     lines.map { l =>
>>>> >       val vs = parser.parseLine(l)
>>>> >       (vs(0), vs(1).toInt)
>>>> >     }
>>>> >   }
>>>> >
>>>> >   def main(args: Array[String]) {
>>>> >     val conf = new SparkConf().setAppName("Spark
>>>> Job").setMaster("local")
>>>> >     val sc = new SparkContext(conf)
>>>> >     val data = sc.textFile("/home/amit/testData.csv").cache()
>>>> >     val result = data.mapPartitions(pLines).groupByKey
>>>> >     //val list = result.filter(x=> {(x._1).contains("24050881")})
>>>> >
>>>> >   }
>>>> >
>>>> > }
>>>> >
>>>> >
>>>> > Here groupByKey is not working, but the same thing works from spark-shell.
>>>> >
>>>> > Please help me
>>>> >
>>>> >
>>>> > Thanks
>>>> >
>>>> > Amit
>>>>
>>>
>>>
