Dear all,

I am using Spark 1.5.2 and Tachyon 0.7.1 to run KMeans with
inputRDD.persist(StorageLevel.OFF_HEAP()).

I've set up tiered storage for Tachyon. Everything works fine when the
working set is smaller than the available memory. However, when the working
set exceeds the available memory, I keep getting errors like the ones below:

16/01/07 04:18:53 INFO scheduler.TaskSetManager: Lost task 197.1 in stage
0.0 (TID 206) on executor 10.149.11.81: java.lang.RuntimeException
(org.apache.spark.storage.BlockNotFoundException: Block rdd_1_197 not found

16/01/07 04:18:53 INFO scheduler.TaskSetManager: Lost task 191.1 in stage
0.0 (TID 207) on executor 10.149.11.81: java.lang.RuntimeException
(org.apache.spark.storage.BlockNotFoundException: Block rdd_1_191 not found

16/01/07 04:18:53 INFO scheduler.TaskSetManager: Lost task 197.2 in stage
0.0 (TID 208) on executor 10.149.11.81: java.lang.RuntimeException
(org.apache.spark.storage.BlockNotFoundException: Block rdd_1_197 not found

16/01/07 04:18:53 INFO scheduler.TaskSetManager: Lost task 191.2 in stage
0.0 (TID 209) on executor 10.149.11.81: java.lang.RuntimeException
(org.apache.spark.storage.BlockNotFoundException: Block rdd_1_191 not found

16/01/07 04:18:53 INFO scheduler.TaskSetManager: Lost task 197.3 in stage
0.0 (TID 210) on executor 10.149.11.81: java.lang.RuntimeException
(org.apache.spark.storage.BlockNotFoundException: Block rdd_1_197 not found


Can anyone give me some suggestions? Thanks a lot!


Best Regards,
Jia
