Hi,
I am running a simple job on Spark 1.6 in which I am trying to leftOuterJoin a
big RDD with a smaller one. I am not broadcasting the smaller RDD yet, but I
am still running into FetchFailed errors, and the job eventually gets killed.
I have already partitioned the data into 5000 partitions.
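If it helps, this is roughly the broadcast (map-side) left outer join I would
switch to instead of the shuffle-based leftOuterJoin (a sketch only; bigRdd and
smallRdd are placeholders for my actual RDDs, and it assumes the smaller side
has unique keys and fits comfortably in memory):

// Collect the small RDD to the driver and broadcast it to every executor,
// so the big RDD is never shuffled.
val smallMap = sc.broadcast(smallRdd.collectAsMap())   // smallRdd: RDD[(K, W)]
val joined = bigRdd.mapPartitions { iter =>            // bigRdd: RDD[(K, V)]
  // Same result shape as leftOuterJoin: (key, (bigValue, Option(smallValue)))
  iter.map { case (k, v) => (k, (v, smallMap.value.get(k))) }
}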
Hi,
I had this problem before, and in my case it was because the
executor/container was killed by YARN when it used more memory than it was
allocated. You can check whether your case is the same by looking at the YARN
NodeManager log.
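In that case the NodeManager log usually contains a line like "... is running
beyond physical memory limits ... Killing container". If that is what you see,
one common mitigation on Spark 1.x on YARN is to reserve more off-heap headroom
per executor. A minimal sketch, assuming you build your own SparkConf; the
1024 MB value is only illustrative and should be tuned for your job:

import org.apache.spark.SparkConf

// Extra off-heap memory (in MB) that YARN adds on top of the executor heap;
// raising it leaves room for shuffle buffers, network and JVM overhead so the
// container is not killed for exceeding its limit.
val conf = new SparkConf()
  .set("spark.yarn.executor.memoryOverhead", "1024")

The same setting can also be passed to spark-submit with
--conf spark.yarn.executor.memoryOverhead=1024.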
Best,
Patcharee
On 05 June 2015 07:25, ÐΞ€ρ@Ҝ (๏̯͡๏) wrote:
I see this
Is this a problem with my code or the cluster? Is there any way to fix it?
FetchFailed(BlockManagerId(2, phxdpehdc9dn2441.stratus.phx.ebay.com,
59574), shuffleId=1, mapId=80, reduceId=20, message=
org.apache.spark.shuffle.FetchFailedException: Failed to connect to