That's a great insight, Mark; I'm looking forward to giving it a try! According to the "Adaptive execution in Spark" JIRA ticket <https://issues.apache.org/jira/browse/SPARK-9850>, some of the functionality was added in Spark 1.6.0, while the rest is still in progress. Are there any improvements to Spark SQL's adaptive behavior in Spark 2.0+ that you know of?
Thanks and best regards,
Leo