Re: org.apache.spark.storage.BlockNotFoundException in Spark1.5.2+Tachyon0.7.1

2016-01-09 Thread Gene Pang
Yes, the tiered storage feature in Tachyon can address this issue. Here is a link to more information: http://tachyon-project.org/documentation/Tiered-Storage-on-Tachyon.html

Thanks,
Gene

On Wed, Jan 6, 2016 at 8:44 PM, Ted Yu wrote:
> Have you seen this thread ?
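For reference, tiered storage in Tachyon 0.7.x was configured through worker properties along these lines in tachyon-env.sh. This is a minimal sketch, not taken from the thread: the paths, quotas, and two-tier layout here are illustrative assumptions, and property names should be checked against the linked documentation for the exact Tachyon version in use.

```bash
# Illustrative tachyon-env.sh fragment (Tachyon 0.7.x-style property names;
# the ramdisk/HDD paths and quotas below are hypothetical examples)
export TACHYON_JAVA_OPTS+="
  -Dtachyon.worker.tieredstore.level.max=2
  -Dtachyon.worker.tieredstore.level0.alias=MEM
  -Dtachyon.worker.tieredstore.level0.dirs.path=/mnt/ramdisk
  -Dtachyon.worker.tieredstore.level0.dirs.quota=16GB
  -Dtachyon.worker.tieredstore.level1.alias=HDD
  -Dtachyon.worker.tieredstore.level1.dirs.path=/mnt/disk1
  -Dtachyon.worker.tieredstore.level1.dirs.quota=100GB
"
```

With a second tier configured, blocks evicted from the memory tier can spill to disk instead of being dropped, which is what makes the working set larger than memory survivable.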

Re: org.apache.spark.storage.BlockNotFoundException in Spark1.5.2+Tachyon0.7.1

2016-01-06 Thread Ted Yu
Have you seen this thread ? http://search-hadoop.com/m/q3RTtAiQta22XrCI

On Wed, Jan 6, 2016 at 8:41 PM, Jia Zou wrote:
> Dear all,
>
> I am using Spark 1.5.2 and Tachyon 0.7.1 to run KMeans with
> inputRDD.persist(StorageLevel.OFF_HEAP()).
>
> I've set tiered storage for

org.apache.spark.storage.BlockNotFoundException in Spark1.5.2+Tachyon0.7.1

2016-01-06 Thread Jia Zou
Dear all,

I am using Spark 1.5.2 and Tachyon 0.7.1 to run KMeans with inputRDD.persist(StorageLevel.OFF_HEAP()). I've set tiered storage for Tachyon. Everything works when the working set is smaller than available memory. However, when the working set exceeds available memory, I keep getting errors like
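For context, the OFF_HEAP persist mentioned above looks roughly like this in Spark 1.5.x Scala code. This is a minimal sketch requiring a running Spark and Tachyon deployment: the app name, Tachyon master URL, and input path are hypothetical, and `spark.externalBlockStore.url` is the Spark 1.5-era key (check the docs for your exact version).

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

// Minimal sketch: point Spark's external block store at Tachyon,
// then persist an RDD off-heap so blocks live in Tachyon, not the JVM heap.
// The master URL and data path below are hypothetical.
val conf = new SparkConf()
  .setAppName("KMeansOffHeap")
  .set("spark.externalBlockStore.url", "tachyon://master:19998")
val sc = new SparkContext(conf)

val inputRDD = sc.textFile("tachyon://master:19998/data/points")
inputRDD.persist(StorageLevel.OFF_HEAP)
```

With OFF_HEAP persistence, evicted blocks are managed by Tachyon rather than Spark's own block manager, which is why Tachyon's tiering (not Spark's memory settings) determines what happens when the working set outgrows memory.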