I'm not sure how to change your code, because you'd need to generate the keys at the point where you get the data. Sorry about that.
I can show you where to put the code to remap and sort, though.

import org.apache.spark.rdd.OrderedRDDFunctions
val res2 = reduced_hccg.map(_._2)    // keep the values
  .map(x => (newkey, x))             // newkey = whatever key you generate where you get the data
  .sortByKey(true)                   // sort ascending by that key
// and if you want to remove the key that you used for sorting, append: .map(_._2)

res2.foreach(println)
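
If you have nothing natural to sort on, one way to generate the keys is zipWithIndex. This is just a sketch; it assumes your Spark version has RDD.zipWithIndex and that the current order of the data is the order you want to key on:

    val keyed = reduced_hccg.map(_._2)    // keep the values
      .zipWithIndex()                     // attach a Long index to each element
      .map { case (v, i) => (i, v) }      // swap to (key, value) so sortByKey applies
      .sortByKey(true)                    // sort ascending by the index
      .map(_._2)                          // strip the key again once sorted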
    import scala.collection.mutable.ListBuffer

    val result = res2.mapPartitions { p =>
      val l = p.toList
      // assuming Double elements here, to match the detail buffer;
      // the original declared approx as ListBuffer[Int]
      val approx = new ListBuffer[Double]
      val detail = new ListBuffer[Double]
      // walk the partition two elements at a time
      for (i <- 0 until l.length - 1 by 2) {
        println((l(i), l(i + 1)))
        approx += l(i)
        approx += l(i + 1)
      }
      // mapPartitions returns only its last expression; stacking
      // approx.toList.iterator above detail.toList.iterator silently
      // drops approx, so return the buffer you actually filled
      approx.toList.iterator
    }
result.foreach(println)
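
As noted in the comment above, mapPartitions hands back only one iterator. If you need both the approx and detail series out of a single pass, a sketch that tags each value works (the sum/difference here is just a placeholder; substitute your real pairwise computation, and I'm assuming Double elements):

    val tagged = res2.mapPartitions { p =>
      val l = p.toList
      val out = new ListBuffer[(String, Double)]
      for (i <- 0 until l.length - 1 by 2) {
        out += (("approx", l(i) + l(i + 1)))   // placeholder pairwise computation
        out += (("detail", l(i) - l(i + 1)))   // placeholder pairwise computation
      }
      out.iterator
    }
    val approxRDD = tagged.filter(_._1 == "approx").map(_._2)   // split the series back out
    val detailRDD = tagged.filter(_._1 == "detail").map(_._2)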

-----Original Message-----
From: yh18190 [mailto:yh18...@gmail.com] 
Sent: March-28-14 5:17 PM
To: u...@spark.incubator.apache.org
Subject: RE: Splitting RDD and Grouping together to perform computation

Hi Andriana,

Thanks for the suggestion. Could you please modify the part of my code where I
need to do so? I apologise for the inconvenience; because I am new to Spark, I
couldn't apply it appropriately. I would be thankful to you.



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Splitting-RDD-and-Grouping-together-to-perform-computation-tp3153p3452.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
