Not sure if I understand your problem well, but why don't you create the file locally and then upload it to HDFS?
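A minimal sketch of that idea, assuming plain Java serialization and the stock Hadoop FileSystem API; the paths and the iterator `it` are made up for illustration:

    import java.io.{FileOutputStream, ObjectOutputStream}
    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}

    val it: Iterator[AnyRef] = ???  // the oversized iterator (hypothetical)

    // Stream elements to a local file one at a time, so the driver never
    // holds more than a single element of the iterator in memory.
    val local = new ObjectOutputStream(new FileOutputStream("/tmp/elements.obj"))
    try it.foreach(x => local.writeObject(x)) finally local.close()

    // Copy the finished file into HDFS.
    val fs = FileSystem.get(new Configuration())
    fs.copyFromLocalFile(new Path("/tmp/elements.obj"), new Path("/user/data/elements.obj"))

Note this produces a plain serialized stream, not the SequenceFile layout that saveAsObjectFile writes and sc.objectFile expects, so you would read it back with an ObjectInputStream rather than through Spark.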
Sent from my iPhone

> On 12 Feb, 2016, at 9:09 am, "seb.arzt" <seb.a...@gmail.com> wrote:
>
> I have an Iterator of several million elements, which unfortunately won't
> fit into the driver memory at the same time. I would like to save them as
> an object file in HDFS:
>
> Doing so I am running out of memory on the driver:
>
> Using a stream
>
> also won't work. I cannot further increase the driver memory. Why doesn't
> it work out of the box? Shouldn't lazy evaluation and garbage collection
> prevent the program from running out of memory? I could manually split the
> Iterator into chunks and serialize each chunk, but it feels wrong. What is
> going wrong here?
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/Convert-Iterable-to-RDD-tp16882p26211.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
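On the chunking idea from the quoted message: splitting the Iterator into slices is actually a workable pattern, since only one slice lives in driver memory at a time. A rough sketch, assuming a SparkContext is available; the names, paths, and chunk size are hypothetical, not from the original post:

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("chunked-save"))
    val it: Iterator[String] = ???  // the oversized iterator (hypothetical)

    // Split into fixed-size slices and save each slice as its own object
    // file; the driver only ever materializes one slice at a time.
    val chunkSize = 100000  // tune to what the driver heap can hold
    it.grouped(chunkSize).zipWithIndex.foreach { case (chunk, i) =>
      sc.parallelize(chunk).saveAsObjectFile(s"hdfs:///user/data/chunk-$i")
    }

The resulting directories can later be read back and combined with sc.objectFile, which accepts a glob such as hdfs:///user/data/chunk-*.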