I am seeing skewed execution times. As far as I can tell, they are
attributable to differences in data locality - tasks with locality
PROCESS_LOCAL run fast, NODE_LOCAL slower, and ANY slowest.
This seems entirely as it should be - the question is, why the different
locality levels?
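For context, the point at which Spark gives up on locality and falls back a level is configurable. A sketch of the relevant spark-defaults.conf settings (the values shown are the documented 1.x defaults, in milliseconds; verify against your Spark version):

```
# spark-defaults.conf - how long to wait before dropping a locality level
spark.locality.wait          3000
spark.locality.wait.process  3000
spark.locality.wait.node     3000
spark.locality.wait.rack     3000
```

Raising these trades scheduling latency for better locality; setting them to 0 disables the wait entirely.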
You mentioned that the 3.1 min run was the one that did the actual caching,
so did that run before any data was cached, or after?
I would recommend checking the Storage tab of the UI, and clicking on the
RDD, to see both how full the executors' storage memory is (which may be
significantly less than the raw heap size) and how much of the RDD is
actually cached.
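To see why storage memory can be much smaller than the heap: in Spark 1.x, the memory available for caching is roughly heap size times spark.storage.memoryFraction times a safety fraction. A minimal back-of-the-envelope sketch, assuming the documented 1.x defaults and a hypothetical 8 GiB executor heap:

```python
# Rough estimate of an executor's storage memory under Spark 1.x defaults.
# The heap size is hypothetical; the fractions are the documented defaults.
executor_heap_mb = 8192        # hypothetical -Xmx for the executor
storage_fraction = 0.6         # spark.storage.memoryFraction default
safety_fraction = 0.9          # spark.storage.safetyFraction default

storage_memory_mb = executor_heap_mb * storage_fraction * safety_fraction
print(round(storage_memory_mb))  # 4424 - barely half the configured heap
```

So an RDD that "fits in memory" on paper may still not fit in the storage region, which shows up as a partially cached RDD in the Storage tab.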
The fact that the caching percentage went down is highly suspicious. It
should generally not decrease unless other cached data took its place, or
unless executors were dying. Do you know if either of these was the case?
On Tue, Nov 11, 2014 at 8:58 AM, Nathan Kronenfeld
Sorry, I think I was not clear in what I meant.
I didn't mean it went down within a run, with the same instance.
I meant I'd run the whole app, and one time it would cache 100%, and the
next run it might cache only 83%.
Within a run, it doesn't change.
On Wed, Nov 12, 2014 at 11:31 PM, Aaron
Spark's scheduling is pretty simple: it will allocate tasks to open cores
on executors, preferring ones where the data is local. It even performs
delay scheduling, which means waiting a bit to see if an executor where
the data resides locally becomes available.
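That waiting behaviour can be sketched as a toy scheduler loop. This is a simplified model of delay scheduling (one locality fallback, invented tick-based timing), not Spark's actual TaskSetManager:

```python
# Toy model of delay scheduling: a task prefers executors that hold its
# data, and only falls back to a non-local executor after waiting
# `locality_wait` scheduling ticks.
def schedule(preferred, free_at_tick, locality_wait=3, max_ticks=10):
    """Return (tick, executor, locality) for one task.

    preferred:    executors holding the task's data
    free_at_tick: tick -> set of executors with an open core at that tick
    """
    for tick in range(max_ticks):
        free = free_at_tick.get(tick, set())
        local = free & preferred
        if local:
            return tick, sorted(local)[0], "PROCESS_LOCAL"
        if tick >= locality_wait and free:
            return tick, sorted(free)[0], "ANY"
    return None

# Non-local executor "b" is free immediately; local executor "a" frees at
# tick 2. The task waits and runs PROCESS_LOCAL on "a" rather than ANY on "b".
print(schedule({"a"}, {0: {"b"}, 1: {"b"}, 2: {"a", "b"}}))
```

With `locality_wait=0` the same task would run immediately on "b" at level ANY, which is the slow case being described.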
Are your tasks seeing very skewed execution times?
Can anyone point me to a good primer on how spark decides where to send
what task, how it distributes them, and how it determines data locality?
I'm trying a pretty simple task - it's doing a foreach over cached data,
accumulating some (relatively complex) values.
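Logically, that kind of foreach-with-accumulators job looks like the sketch below: each task builds a task-local partial value over its partition, and the driver merges the partials. The function and data here are hypothetical stand-ins, but the shape (workers add, driver merges and reads) mirrors Spark accumulator semantics:

```python
# Sketch of a foreach-over-cached-data job accumulating a complex value.
def run_job(partitions, zero, add, merge):
    partials = []
    for part in partitions:           # in Spark, one task per partition
        acc = zero()
        for record in part:
            acc = add(acc, record)    # task-local update
        partials.append(acc)
    result = zero()
    for acc in partials:              # driver-side merge of partial results
        result = merge(result, acc)
    return result

# Hypothetical data: count and sum records across 3 cached partitions.
parts = [[1, 2, 3], [4, 5], [6]]
total = run_job(parts,
                zero=lambda: {"count": 0, "sum": 0},
                add=lambda a, x: {"count": a["count"] + 1, "sum": a["sum"] + x},
                merge=lambda a, b: {"count": a["count"] + b["count"],
                                    "sum": a["sum"] + b["sum"]})
print(total)  # {'count': 6, 'sum': 21}
```

If the per-record `add` is expensive, a locality miss compounds it: a non-local task pays the network read on top of the computation, which matches the skew described above.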
So I see several