Did you try implementing MultipleTextOutputFormat and using saveAsHadoopFile
with keyClass, valueClass and OutputFormat instead of the default parameters?
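
For example, here is a rough sketch of an output format that ends each record
with RS (\u001e) instead of '\n'. This one subclasses the new-API
TextOutputFormat instead of MultipleTextOutputFormat, and the class name
RsTextOutputFormat is just a placeholder:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class RsTextOutputFormat extends TextOutputFormat<NullWritable, Text> {
    @Override
    public RecordWriter<NullWritable, Text> getRecordWriter(TaskAttemptContext job)
            throws IOException, InterruptedException {
        Configuration conf = job.getConfiguration();
        Path file = getDefaultWorkFile(job, "");
        FSDataOutputStream out = file.getFileSystem(conf).create(file, false);
        byte[] recordSeparator = "\u001e".getBytes(StandardCharsets.UTF_8);
        return new RecordWriter<NullWritable, Text>() {
            @Override
            public void write(NullWritable key, Text value) throws IOException {
                // write the record bytes followed by RS instead of '\n'
                out.write(value.getBytes(), 0, value.getLength());
                out.write(recordSeparator);
            }

            @Override
            public void close(TaskAttemptContext context) throws IOException {
                out.close();
            }
        };
    }
}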

You need to implement toString for your key class and value class in order to
get a field separator other than the default.
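
Something along these lines could then join the DataFrame fields with FS and
save through that format. This is only a sketch: it assumes FS means the ASCII
file separator \u001c, and df, outputPath and hadoopConf are placeholders for
your own DataFrame, output path and Hadoop Configuration:

import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.spark.api.java.JavaPairRDD;
import scala.Tuple2;

JavaPairRDD<NullWritable, Text> records = df.toJavaRDD().mapToPair(row -> {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < row.size(); i++) {
        if (i > 0) sb.append('\u001c');   // FS as field separator
        sb.append(row.isNullAt(i) ? "" : row.get(i).toString());
    }
    return new Tuple2<>(NullWritable.get(), new Text(sb.toString()));
});

records.saveAsNewAPIHadoopFile(outputPath, NullWritable.class, Text.class,
        RsTextOutputFormat.class, hadoopConf);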

Regards
Dhaval


On Tue, Jun 28, 2016 at 4:44 AM, Radha krishna <grkmc...@gmail.com> wrote:

> Hi,
> I have some files in HDFS with FS as the field separator and RS as the record
> separator. I am able to read the files and process them successfully.
> How can I write the Spark DataFrame result into an HDFS file with the same
> delimiters (FS as field separator and RS as record separator instead of \n)
> using Java?
> Can anyone suggest?
>
>
> With the lines below I am able to read the content as lines separated by RS
> instead of \n:
> Configuration hadoopConf = new Configuration(jsc.hadoopConfiguration());
> hadoopConf.set("textinputformat.record.delimiter", "\u001e");
>
> I want to write the data back to HDFS with the same line separator
> (RS [\u001e]).
>
>
> Thanks & Regards
>    Radha krishna
>
>
>
