In Spark 1.6.0 I’m having an issue with saveAsText and write.mode(...).text(...). I 
have a DataFrame with 1M+ rows, and then I do:

dataFrame.limit(500).map(_.mkString("\t")).toDF("row").write.mode(SaveMode.Overwrite).text("myHDFSFolder/results")

When I check the results file afterwards, I see 900+ rows. On further analysis 
I found that some of the rows are duplicated.
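One guess (an assumption on my part, not a confirmed diagnosis): limit on an unsorted, multi-partition DataFrame is not deterministic, so if the plan is re-evaluated (e.g. by retried write tasks), different rows can be produced each time. A workaround sketch along those lines would be to persist and materialize the limited DataFrame once before writing, something like:

```scala
// Hedged workaround sketch, assuming the duplicates come from the
// limit(500) being recomputed non-deterministically during the write.
import org.apache.spark.sql.SaveMode
import org.apache.spark.storage.StorageLevel

val limited = dataFrame.limit(500).persist(StorageLevel.MEMORY_AND_DISK)
limited.count()  // force materialization so the 500 rows are fixed

limited.map(_.mkString("\t"))
  .toDF("row")
  .write.mode(SaveMode.Overwrite)
  .text("myHDFSFolder/results")

limited.unpersist()
```

If the persisted version still writes 900+ rows, then recomputation is not the cause and something else is going on.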

Does anyone know if this is something that has been reported before?

The only notable characteristic of my data is one column that 
exceeds 2000 characters.

Appreciate your help, thanks.

Cheers,
Ricardo Barona
