On 28 Mar 2014, at 01:44, Tathagata Das <tathagata.das1...@gmail.com> wrote:

> The more I think about it, the problem is not about /tmp; it's more about the 
> workers not having enough memory. Blocks of received data could be falling 
> out of memory before they get processed. 
> BTW, what is the storage level that you are using for your input stream? If 
> you are using MEMORY_ONLY, then try MEMORY_AND_DISK. That is safer because it 
> ensures that if received data falls out of memory, it will at least be saved to 
> disk.
> 
> TD
> 
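[The MEMORY_AND_DISK suggestion above can be sketched roughly like this, assuming the Spark 0.9-era KafkaUtils receiver API; the ZooKeeper quorum, group id, and topic map below are placeholders, not values from this thread:]

```scala
// Sketch: pinning a Spark Streaming Kafka input to MEMORY_AND_DISK so that
// received blocks spill to disk instead of being dropped under memory pressure.
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.storage.StorageLevel

val ssc = new StreamingContext("mesos://host:5050", "streaming-app", Seconds(2))

val stream = KafkaUtils.createStream(
  ssc,
  "zkhost:2181",               // ZooKeeper quorum (placeholder)
  "consumer-group",            // Kafka consumer group id (placeholder)
  Map("topic" -> 1),           // topic -> number of consumer threads
  StorageLevel.MEMORY_AND_DISK // safer than the default MEMORY_ONLY here
)
```

[This sketch needs a running Spark/Mesos cluster and the spark-streaming-kafka artifact on the classpath, so it is illustrative rather than directly runnable.]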

And I saw such errors because of spark.cleaner.ttl,
which erases everything, even needed RDDs.
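[For reference, spark.cleaner.ttl is set like any other Spark property; the sketch below assumes a SparkConf-based setup, and the 3600-second value is a placeholder, not a recommendation from this thread:]

```scala
// Sketch: setting spark.cleaner.ttl explicitly. The value is in seconds and
// must be comfortably larger than any window or retention the streaming job
// relies on; too small a value lets the cleaner erase RDDs that are still live.
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("streaming-app")
  .set("spark.cleaner.ttl", "3600") // placeholder TTL in seconds
```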



> 
> On Thu, Mar 27, 2014 at 2:29 PM, Scott Clasen <scott.cla...@gmail.com> wrote:
> Heh, sorry, that wasn't a clear question. I know 'how' to set it but don't know
> what value to use in a Mesos cluster; since the processes are running in LXC
> containers they won't be sharing a filesystem (or machine, for that matter).
> 
> I can't use an s3n:// URL for the local dir, can I?
> 
> 
> 
> --
> View this message in context: 
> http://apache-spark-user-list.1001560.n3.nabble.com/Spark-Streaming-Kafka-Mesos-Marathon-strangeness-tp3285p3373.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
> 
