[ https://issues.apache.org/jira/browse/SPARK-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Sean Owen updated SPARK-10295:
------------------------------

I believe that YARN currently will release executors even if they have cached data. I also recall that there's a desire to change this behavior, so that executors with cached data may stick around. I am not sure what the current or intended Mesos behavior is, but I assume it's the same. Therefore, this message may need to be softened to something like "Dynamic allocation is enabled; executors may be removed even when they contain cached data". I don't think there are hard guarantees about the behavior in any event; the intent is just to make the user aware that it's possible for cached data to go away with dynamic allocation on. CC [~vanzin] and [~sandyr]

> Dynamic allocation in Mesos does not release when RDDs are cached
> -----------------------------------------------------------------
>
>                 Key: SPARK-10295
>                 URL: https://issues.apache.org/jira/browse/SPARK-10295
>             Project: Spark
>          Issue Type: Question
>          Components: Mesos
>    Affects Versions: 1.5.0
>         Environment: Spark 1.5.0 RC1
>                      CentOS 6
>                      Java 7 (Oracle)
>            Reporter: Hans van den Bogert
>            Priority: Minor
>
> When running Spark in coarse-grained mode with the shuffle service and
> dynamic allocation, the driver does not release executors if a dataset is
> cached. The console output, on the other hand, shows:
> > 15/08/26 17:29:58 WARN SparkContext: Dynamic allocation currently does not
> > support cached RDDs. Cached data for RDD 9 will be lost when executors are
> > removed.
> However, after the default idle timeout of 1m, the executors are not
> released. When I perform the same initial setup (loading data, etc.) but
> without caching, the executors are released.
> Is this intended behaviour?
> If this is intended behaviour, the console warning is misleading.
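
For anyone trying to reproduce this, below is a minimal sketch of the setup described in the report (coarse-grained Mesos with the external shuffle service and dynamic allocation). The app name, dataset path, and timeout values are illustrative, not taken from the report. One plausible explanation, if I am reading the 1.5 configuration correctly, is that executors holding cached blocks fall under the separate spark.dynamicAllocation.cachedExecutorIdleTimeout setting, which defaults to infinity, so the regular 1m idle timeout never applies once an RDD is cached:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("dynalloc-cache-repro")  // hypothetical app name
  .set("spark.shuffle.service.enabled", "true")
  .set("spark.dynamicAllocation.enabled", "true")
  // Idle executors without cached blocks are reclaimed after this timeout
  // (the 1m default mentioned in the report).
  .set("spark.dynamicAllocation.executorIdleTimeout", "60s")
  // Executors holding cached blocks fall under this separate timeout, which
  // defaults to infinity -- a plausible reason they are never released.
  // The 120s value here is illustrative.
  .set("spark.dynamicAllocation.cachedExecutorIdleTimeout", "120s")

val sc = new SparkContext(conf)

// Cache a dataset and materialize it so executors hold cached blocks.
val rdd = sc.textFile("hdfs:///path/to/dataset").cache()  // hypothetical path
rdd.count()

Lowering cachedExecutorIdleTimeout as above should let executors with cached data be reclaimed, at the cost of losing the cached blocks, which would be consistent with the softened warning suggested above.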