I think it does, because the user doesn't see their application logic and
flow the way Spark's internals do. Of course we follow general guidelines
for performance, but we shouldn't really care how exactly Spark decides to
execute the DAG. The Spark scheduler or core can keep changing over time to
optimize.
But when you talk about optimizing the DAG, it really doesn't make sense to
also talk about transformation steps as separate entities. The
DAGScheduler knows about Jobs, Stages, TaskSets and Tasks. The
TaskScheduler knows about TaskSets and Tasks. Neither of them understands
the transformation steps as such.
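For concreteness, here's a minimal sketch (it assumes an existing
SparkContext named `sc`) of a SparkListener: every callback it receives is
expressed in terms of jobs, stages, and tasks, never in terms of individual
transformations:

```scala
import org.apache.spark.scheduler.{SparkListener, SparkListenerJobStart,
  SparkListenerStageCompleted, SparkListenerTaskEnd}

sc.addSparkListener(new SparkListener {
  // Jobs: the unit the DAGScheduler creates per action
  override def onJobStart(jobStart: SparkListenerJobStart): Unit =
    println(s"Job ${jobStart.jobId} submitted with ${jobStart.stageInfos.size} stage(s)")

  // Stages: pipelined runs of narrow transformations
  override def onStageCompleted(stageCompleted: SparkListenerStageCompleted): Unit =
    println(s"Stage ${stageCompleted.stageInfo.stageId} " +
      s"(${stageCompleted.stageInfo.name}) ran ${stageCompleted.stageInfo.numTasks} task(s)")

  // Tasks: one per partition within a stage
  override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit =
    println(s"Task ended in stage ${taskEnd.stageId}")
})
```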
Hi Mark,
I might have said stage instead of step in my last statement: "The UI just
says Collect failed, but in fact it could be any stage in that lazy chain
of evaluation."
Anyway, even you agree that this visibility into the underlying steps won't
be available, which does pose difficulties in terms of debugging failures.
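One partial workaround, sketched below (not official guidance; it assumes
an existing SparkContext `sc`, and the input path is hypothetical): name
the intermediate RDDs with `setName` and print the lineage with
`toDebugString`, whose indentation marks the shuffle boundaries:

```scala
// "hdfs:///tmp/logs" is a hypothetical input path.
val words = sc.textFile("hdfs:///tmp/logs")
  .flatMap(_.split(" "))
  .setName("words")   // names show up in toDebugString (and the Storage tab if cached)
val counts = words.map((_, 1))
  .reduceByKey(_ + _)
  .setName("counts")

// The indentation in toDebugString marks the shuffle (Stage) boundaries.
println(counts.toDebugString)
```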
You appear to be misunderstanding the nature of a Stage. Individual
transformation steps such as `map` do not define the boundaries of Stages.
Rather, a sequence of transformations in which there is only a
NarrowDependency between each of the transformations will be pipelined into
a single Stage.
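A small sketch of what that means in practice (assuming an existing
SparkContext `sc`): every narrow transformation below is pipelined into one
Stage, and only the `reduceByKey`, which introduces a ShuffleDependency,
starts a new one:

```scala
val result = sc.parallelize(1 to 1000)
  .map(_ * 2)             // NarrowDependency: pipelined
  .filter(_ % 3 == 0)     // NarrowDependency: still the same Stage
  .map(n => (n % 10, n))  // NarrowDependency: still the same Stage
  .reduceByKey(_ + _)     // ShuffleDependency: a new Stage starts here

result.collect()          // one Job, two Stages
```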
It's great that the Spark scheduler optimizes DAG processing, deferring
evaluation until some action is performed and only breaking the work up
where a shuffle dependency is encountered. Sometimes it goes even further
past the shuffle dependency: e.g. if there are map steps after the shuffle,
it doesn't stop at the shuffle.
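Right. For example (a sketch assuming an existing SparkContext `sc`), a
`map` applied after a `reduceByKey` doesn't start yet another Stage; it is
pipelined into the Stage that reads the shuffle output:

```scala
val pairs  = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3)))
val summed = pairs.reduceByKey(_ + _)                // Stage boundary at the shuffle
val lines  = summed.map { case (k, v) => s"$k=$v" }  // pipelined into the shuffle-read Stage

lines.collect()  // two Stages in total, despite three transformations
```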