I think the docs are correct. If you follow the example from the docs and add
the import shown below, I believe you will get what you're looking for:

// This is used to implicitly convert an RDD to a DataFrame.
import sqlContext.implicits._

You could also simply take your RDD and do the following:

logs.toDF.saveAsParquetFile("s3n://xxx/xxx")
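Putting the pieces together, a minimal sketch of the whole flow might look like
this (Spark 1.3.x API; the Log case class, the field names, and the existing
SparkContext sc are illustrative assumptions, not taken from your gist):

```scala
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext

// Hypothetical case class standing in for your core.Log.
case class Log(level: String, message: String)

val sc = new SparkContext("local[*]", "parquet-example")
val sqlContext = new SQLContext(sc)

// This import is what puts toDF() on RDDs of case classes;
// without it, RDD[Log] has no saveAsParquetFile-related methods.
import sqlContext.implicits._

val logs = sc.parallelize(Seq(Log("INFO", "started"), Log("WARN", "slow")))

// toDF() converts the RDD to a DataFrame, which does have saveAsParquetFile.
logs.toDF().saveAsParquetFile("s3n://xxx/xxx")
```

Note that saveAsParquetFile lives on DataFrame (SchemaRDD in 1.2 and earlier),
not on RDD, which is exactly why the compiler rejects calling it on RDD[core.Log].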


-Todd

On Tue, Apr 14, 2015 at 3:50 AM, pishen tsai <pishe...@gmail.com> wrote:

> OK, it does work.
> Maybe it would be better to show this usage in the official Spark SQL
> tutorial:
> http://spark.apache.org/docs/latest/sql-programming-guide.html
>
> Thanks,
> pishen
>
>
> 2015-04-14 15:30 GMT+08:00 fightf...@163.com <fightf...@163.com>:
>
>> Hi there,
>>
>> If you want to use saveAsParquetFile, you may want to use
>>     val log_df = sqlContext.createDataFrame(logs)
>>
>> And then you can issue log_df.saveAsParquetFile(path)
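>> A minimal sketch of this approach (Spark 1.3.x; sc, logs, and the output
>> path are assumed from the original message, not verified against the gist):
>>
>>     import org.apache.spark.sql.SQLContext
>>     val sqlContext = new SQLContext(sc)
>>     // createDataFrame accepts an RDD of case classes directly,
>>     // so no implicits import is needed for this route.
>>     val log_df = sqlContext.createDataFrame(logs)
>>     log_df.saveAsParquetFile("s3n://xxx/xxx")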
>>
>> Best,
>> Sun.
>>
>> ------------------------------
>> fightf...@163.com
>>
>>
>> *From:* pishen <pishe...@gmail.com>
>> *Date:* 2015-04-14 15:18
>> *To:* user <user@spark.apache.org>
>> *Subject:* Cannot saveAsParquetFile from a RDD of case class
>> Hello,
>>
>> I tried to follow the tutorial of Spark SQL, but am not able to
>> saveAsParquetFile from an RDD of a case class.
>> Here is my Main.scala and build.sbt
>> https://gist.github.com/pishen/939cad3da612ec03249f
>>
>> At line 34, compiler said that "value saveAsParquetFile is not a member
>> of org.apache.spark.rdd.RDD[core.Log]"
>>
>> Any suggestion on how to solve this?
>>
>> Thanks,
>> pishen
>>
>>
>>
>
