Thank you. That worked.

2015-02-12 20:03 GMT+04:00 Imran Rashid <iras...@cloudera.com>:
> You need to import the implicit conversions to PairRDDFunctions with
>
>     import org.apache.spark.SparkContext._
>
> (note that this requirement will go away in 1.3:
> https://issues.apache.org/jira/browse/SPARK-4397)
>
> On Thu, Feb 12, 2015 at 9:36 AM, Vladimir Protsenko <protsenk...@gmail.com>
> wrote:
>
>> Hi. I am stuck on how to save a file to HDFS from Spark.
>>
>> I have written MyOutputFormat extends FileOutputFormat<String, MyObject>,
>> and then call this in Spark:
>>
>>     rddres.saveAsHadoopFile[MyOutputFormat]("hdfs://localhost/output")
>> or
>>     rddres.saveAsHadoopFile("hdfs://localhost/output", classOf[String],
>>       classOf[MyObject], classOf[MyOutputFormat])
>>
>> where rddres is an RDD[(String, MyObject)] produced by the transformation
>> pipeline.
>>
>> The compilation error is: value saveAsHadoopFile is not a member of
>> org.apache.spark.rdd.RDD[(String, vlpr.MyObject)].
>>
>> Could someone give me some insight into what could be done here to make it
>> work? Why is it not a member? Because of wrong types?
>>
>> --
>> View this message in context:
>> http://apache-spark-user-list.1001560.n3.nabble.com/saveAsHadoopFile-is-not-a-member-of-RDD-String-MyObject-tp21627.html
>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>> For additional commands, e-mail: user-h...@spark.apache.org
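For anyone hitting the same error: in Spark 1.2 and earlier, saveAsHadoopFile is defined on PairRDDFunctions, not on RDD itself, so it only resolves on an RDD of key/value pairs once the implicit conversion from SparkContext is in scope. A minimal sketch of the fix, assuming Spark 1.2 (the app name, HDFS path, and the use of the built-in TextOutputFormat instead of the thread's custom MyOutputFormat are illustrative; this is not runnable without a Spark installation):

```scala
import org.apache.spark.{SparkConf, SparkContext}
// Brings the implicit RDD[(K, V)] => PairRDDFunctions conversion into scope.
// This line is what fixes the "saveAsHadoopFile is not a member" error;
// it becomes unnecessary from Spark 1.3 onward (SPARK-4397).
import org.apache.spark.SparkContext._

import org.apache.hadoop.mapred.TextOutputFormat

object SaveExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("save-example"))

    // rddres must be an RDD of (key, value) pairs for the conversion to apply;
    // a plain RDD[String] would still fail to compile.
    val rddres = sc.parallelize(Seq(("k1", "v1"), ("k2", "v2")))

    // With the import above, saveAsHadoopFile resolves via PairRDDFunctions.
    rddres.saveAsHadoopFile("hdfs://localhost/output",
      classOf[String], classOf[String],
      classOf[TextOutputFormat[String, String]])

    sc.stop()
  }
}
```

A custom subclass of org.apache.hadoop.mapred.FileOutputFormat, as in the original question, can be passed in the same final classOf[...] position.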