Exit code 52 comes from org.apache.spark.util.SparkExitCode, where it is defined as
val OOM = 52, i.e. an OutOfMemoryError.
Refer to:
https://github.com/apache/spark/blob/d6dc12ef0146ae409834c78737c116050961f350/core/src/main/scala/org/apache/spark/util/SparkExitCode.scala
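For quick reference, the mapping can be sketched as below. Only OOM = 52 is confirmed by this thread and the linked file; the helper function and its message text are illustrative, not part of Spark's API.

```scala
// Minimal stand-in for org.apache.spark.util.SparkExitCode (see link above).
object SparkExitCode {
  /** The executor JVM died with an OutOfMemoryError. */
  val OOM = 52
}

// Hypothetical helper: translate an executor exit status for a log reader.
def explain(exitCode: Int): String =
  if (exitCode == SparkExitCode.OOM) "executor died with OutOfMemoryError"
  else s"unrecognized exit code $exitCode"

println(explain(52))
```

So an executor exiting with status 52 means the JVM ran out of heap, not a generic crash.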
On 19 September 2016 at 14:57,
My job is a 1 TB join against a 10 GB table on Spark 1.6.1,
run in YARN mode:
1. If I enable the shuffle service, the error is:
Job aborted due to stage failure: ShuffleMapStage 2 (writeToDirectory at
NativeMethodAccessorImpl.java:-2) has failed the maximum allowable number
of times: 4. Most recent failure
Try increasing executor memory (--executor-memory, or --conf
spark.executor.memory=3g). Here is something I noted from your logs:
15/09/29 06:32:03 WARN MemoryStore: Failed to reserve initial memory
threshold of 1024.0 KB for computing block rdd_2_1813 in memory.
15/09/29 06:32:03 WARN
Can you list the spark-submit command line you used?
Thanks
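For reference, such a memory setting is passed on the spark-submit command line. A hedged sketch follows; the class name and jar are hypothetical placeholders, and 40 executors / 3g simply echo the numbers mentioned in this thread:

```shell
# Hypothetical invocation; --class and the jar name are placeholders.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 40 \
  --executor-memory 3g \
  --class com.example.StreamingJob \
  my-streaming-job.jar
```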
On Tue, Sep 29, 2015 at 9:02 AM, Anup Sawant wrote:
> Hi all,
> Any idea why I am getting 'Executor heartbeat timed out'? I am fairly new
> to Spark, so I have little knowledge about its internals.
Hi all,
Any idea why I am getting 'Executor heartbeat timed out'? I am fairly new
to Spark, so I have little knowledge about its internals. The job had been
running for a day or so on 102 GB of data with 40 workers.
-Best,
Anup.
15/09/29 06:32:03 ERROR TaskSchedulerImpl: Lost executor driver on
Hello All,
I am using foreachRDD in my code as:
dstream.foreachRDD { rdd =>
  rdd.foreach { record =>
    // look up the record in a Cassandra table
    // save updated rows to the Cassandra table
  }
}
This foreachRDD is causing executor lost failures. What is the behavior of
foreachRDD?
Thanks,
Padma Ch
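One common issue with the pattern above is that rdd.foreach does per-record work on the executors, so each record can end up paying connection overhead against an external store like Cassandra; the usual advice is to batch per partition with rdd.foreachPartition. The sketch below shows the shape of that idea in plain Scala, with a hypothetical FakeSession standing in for a real Cassandra session (no Spark dependency, so it is only an analogy for the partition-level pattern):

```scala
import scala.collection.mutable.ArrayBuffer

// Hypothetical stand-in for a Cassandra session. In a real job you would
// open one session per partition, not one per record.
class FakeSession {
  val writes = ArrayBuffer[String]()
  def save(row: String): Unit = writes += row
  def close(): Unit = ()
}

// Shape of rdd.foreachPartition { iter => ... }: one session per partition,
// every record in the partition streamed through it, session closed once.
def processPartition(records: Iterator[String], session: FakeSession): Int = {
  var n = 0
  records.foreach { r =>
    session.save(r.toUpperCase) // stand-in for the lookup-and-save step
    n += 1
  }
  session.close()
  n
}

val session = new FakeSession
val n = processPartition(Iterator("a", "b", "c"), session)
println(s"wrote $n rows")
```

The design point is simply amortizing connection setup and teardown over a whole partition instead of a single record.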
Hi All,
I have a Spark Streaming application which writes the processed results to
Cassandra. In local mode the code seems to work fine. The moment I start
running in distributed mode using YARN, I see executor lost failures. I
increased executor memory to occupy the entire node's memory, which
Yes... found the output on the web UI of the slave.
Thanks :)
On Tue, Nov 11, 2014 at 2:48 AM, Ankur Dave ankurd...@gmail.com wrote:
-- Forwarded message --
From: Ritesh Kumar Singh riteshoneinamill...@gmail.com
Date: Mon, Nov 10, 2014 at 10:52 PM
Subject: Re: Executor Lost Failure
To: Akhil Das ak...@sigmoidanalytics.com
At 2014-11-10 22:53:49 +0530, Ritesh Kumar Singh
riteshoneinamill...@gmail.com wrote:
Tasks are now getting submitted, but many tasks don't happen.
Like, after opening the spark-shell, I load a text file from disk and try
printing its contents as:
sc.textFile("/path/to/file").foreach(println)
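As the earlier follow-up in this thread notes ("found the output on the web UI of the slave"), foreach(println) on an RDD runs on the executors, so the lines land in executor stdout, not in the driver's shell; to see them in the shell you bring a bounded sample back first (e.g. take or collect). A plain-Scala sketch of the distinction, using a local list as a stand-in for the RDD since Spark itself is not available here:

```scala
val data = List("line1", "line2") // stand-in for sc.textFile("/path/to/file")

// Distributed style: in Spark each println here would execute on an
// executor, so the text appears in that executor's logs / web UI.
data.foreach(println)

// Driver-side style: pull a bounded sample back to the driver, then print.
// In Spark this would be: sc.textFile(path).take(10).foreach(println)
val sample = data.take(10)
sample.foreach(println)
```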