Hi,
Have you checked out SchemaRDD?
There should be an example of writing to Parquet files there.
BTW, FYI: I was discussing this with the Spark SQL developers last week, and
we talked about possibly using Apache Gora [0] to achieve this.
HTH
Lewis
[0] http://gora.apache.org
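
For what it's worth, a minimal sketch of the SchemaRDD-to-Parquet route might look like the following. This assumes Spark 1.0-era Spark SQL, where a case-class RDD is implicitly converted to a SchemaRDD and written out with saveAsParquetFile; the Record case class and the output path are illustrative only.

```scala
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext

// Illustrative schema: any case class works as the row type.
case class Record(id: Int, value: String)

object ParquetWriteSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local", "parquet-write-sketch")
    val sqlContext = new SQLContext(sc)
    // Brings in the implicit RDD[Product] -> SchemaRDD conversion.
    import sqlContext.createSchemaRDD

    val records = sc.parallelize(Seq(Record(1, "a"), Record(2, "b")))
    // The schema is carried along from the case class, so the result
    // can be written directly as Parquet and later loaded into Hive/Impala.
    records.saveAsParquetFile("hdfs:///tmp/records.parquet") // path is illustrative
    sc.stop()
  }
}
```

The resulting Parquet files keep the schema, which addresses the "output with schema" concern below without going through plain text.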


On Wed, Jul 30, 2014 at 5:14 AM, Fengyun RAO <raofeng...@gmail.com> wrote:

> We used mapreduce for ETL and storing results in Avro files, which are
> loaded to hive/impala for query.
>
> Now we are trying to migrate to spark, but didn't find a way to write
> resulting RDD to Avro files.
>
> I wonder if there is a way to make it, or if not, why spark doesn't
> support Avro as well as mapreduce? Are there any plans?
>
> Or what's the recommended way to output spark results with schema? I don't
> think plain text is a good choice.
>



-- 
*Lewis*
