Awesome!! Will give it a try again. Thanks!!
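
For reference, here is the write I'll retry, matching your example (just a sketch; it assumes userEventsDF carries the three partition columns plus one string payload column, which is all the text source accepts):

{code}
import org.apache.spark.sql.SaveMode

// userEventsDF: year/month/date partition columns + one string value column
userEventsDF.write
  .mode(SaveMode.Append) // equivalent to .mode("append")
  .partitionBy("year", "month", "date")
  .text(outputDir)
{code}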
- Thanks, via mobile, excuse brevity.
On Jun 19, 2016 11:32 AM, "Xiao Li" <[email protected]> wrote:
> Hi, Yash,
>
> It should work.
>
> import org.apache.spark.sql.SaveMode
> import spark.implicits._ // for the 'symbol column syntax below
>
> // dir is a java.io.File pointing at the output directory
> val df = spark.range(1, 5)
>   .select('id + 1 as 'p1, 'id + 2 as 'p2, 'id + 3 as 'p3, 'id + 4 as 'p4,
>     'id + 5 as 'p5, 'id as 'b)
>   .selectExpr("p1", "p2", "p3", "p4", "p5", "CAST(b AS STRING) AS s")
>   .coalesce(1)
>
> // First write creates the partitioned layout.
> df.write.partitionBy("p1", "p2", "p3", "p4", "p5").text(dir.getCanonicalPath)
> val newDF = spark.read.text(dir.getCanonicalPath)
> newDF.show()
>
> // Second write appends into the same path instead of failing.
> df.write.partitionBy("p1", "p2", "p3", "p4", "p5")
>   .mode(SaveMode.Append).text(dir.getCanonicalPath)
> val newDF2 = spark.read.text(dir.getCanonicalPath)
> newDF2.show()
>
> I tried it. It works well.
>
> Thanks,
>
> Xiao Li
>
> 2016-06-18 8:57 GMT-07:00 Yash Sharma <[email protected]>:
>
>> Hi All,
>> I have been using append mode for parquet writes, which works just fine.
>> I wanted to check whether the same is supported for the plain text format.
>> The code below blows up with an error saying the file already exists.
>>
>> {code}
>> userEventsDF.write.mode("append").partitionBy("year", "month", "date").text(outputDir)
>>
>> // or, equivalently:
>> userEventsDF.write.mode("append").partitionBy("year", "month", "date").format("text").save(outputDir)
>> {code}