Are you getting the OutOfMemory on the driver or on the executor? A typical
cause of OOM in Spark is too few tasks for a job, so that each task has to
hold too large a slice of the data in memory.
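To make that concrete, here is a minimal sketch of raising the task count so
each task works on a smaller slice (all names, paths, and partition counts
below are illustrative, not from this thread):

import org.apache.spark.{SparkConf, SparkContext}

// With too few tasks, each task must materialize a large slice of the
// data; raising the partition count shrinks the per-task working set.
val conf = new SparkConf()
  .setAppName("partitioning-sketch")
  .set("spark.default.parallelism", "200")  // default task count for shuffles
val sc = new SparkContext(conf)

val raw = sc.textFile("hdfs:///data/input", minPartitions = 200)
val counts = raw
  .repartition(400)                         // more, smaller tasks
  .map(line => (line.take(1), 1))
  .reduceByKey(_ + _, 400)                  // explicit shuffle width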
Mit freundlichen Grüßen / best regards
Kay-Uwe Moosheimer
> On 22.08.2017 at 20:16, shitijkuls <kulshreshth...@gmail.com> wrote:
>
> Any help here will be appreciated.
Hi,
When I execute the Spark ML Logistic Regression example in pyspark I run
into an OutOfMemory exception. I'm wondering if any of you experienced the
same, or has a hint about how to fix this.
The interesting bit is that I only get the exception when I try to write
the result DataFrame into a Parquet file.
Aaand, the error! :)
Exception in thread "org.apache.hadoop.hdfs.PeerCache@4e000abf"
Exception: java.lang.OutOfMemoryError thrown from the
UncaughtExceptionHandler in thread "org.apache.hadoop.hdfs.PeerCache@4e000abf"
Exception in thread "Thread-7"
Exception: java.lang.OutOfMemoryError thrown from the
UncaughtExceptionHandler in thread "Thread-7"
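For context, the flow being described is roughly the following (a sketch in
Scala against the Spark 1.5-era ML API, assuming a spark-shell where sc is in
scope; the original poster used pyspark, and the paths are placeholders):

import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.mllib.util.MLUtils
import org.apache.spark.sql.SQLContext

val sqlContext = new SQLContext(sc)
import sqlContext.implicits._

// Load a LabeledPoint RDD and convert it to a DataFrame with
// "label"/"features" columns, as the ML example does.
val training = MLUtils.loadLibSVMFile(sc, "data/sample_libsvm_data.txt").toDF()
val lr = new LogisticRegression().setMaxIter(10).setRegParam(0.01)
val model = lr.fit(training)
val result = model.transform(training)

// The OOM reportedly shows up only at this step:
result.write.parquet("hdfs:///tmp/lr_result.parquet")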
Hey, I'd try to debug and profile ResolvedDataSource. As far as I know, your
write will be performed by the JVM.
On Mon, Sep 7, 2015 at 4:11 PM Tóth Zoltán wrote:
> Unfortunately I'm getting the same error: [...]
Hi,
Can you try using the save method instead of write?
ex: out_df.save("path","parquet")
b0c1
--
Skype: boci13, Hangout: boci.b...@gmail.com
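For reference, on the Spark 1.4/1.5 DataFrame API the suggestion above and
the writer-based forms it is equivalent to look like this (out_df and "path"
are placeholders; df.save was deprecated in favour of df.write in 1.4, and
all three forms write Parquet, so pick one):

import org.apache.spark.sql.DataFrame

def writeParquet(out_df: DataFrame): Unit = {
  out_df.save("path", "parquet")                  // older, deprecated API
  // out_df.write.format("parquet").save("path")  // writer-API equivalent
  // out_df.write.parquet("path")                 // shorthand for the same
}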
Unfortunately I'm getting the same error:
The other interesting things are that:
- the parquet files actually got written to HDFS (also with .write.parquet())
- the application gets stuck in the RUNNING state for good even after the
error is thrown
15/09/07 10:01:10 INFO spark.ContextCleaner:
Hi,
I ran your example on Spark-1.4.1 and 1.5.0-rc3. It succeeds on 1.4.1 but
throws the OOM on 1.5.0. Do any of you know which PR introduced this
issue?
Zsolt
2015-09-07 16:33 GMT+02:00 Zoltán Zvara:
> Hey, I'd try to debug and profile ResolvedDataSource. [...]
Hi,
We are building a Spark Streaming application which reads from Kafka, does
updateStateByKey based on the received message type, and finally stores the
result into Redis.
After running for a few seconds the executor process gets killed with an
OutOfMemory error.
The code snippet is below:

val NoOfReceiverInstances = 1
val kafkaStreams = (1 to NoOfReceiverInstances).map(
  _ => KafkaUtils.createStream(ssc, ZKQuorum, ConsumerGroup, TopicsMap)
)
val updateFunc = (values: Seq
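Since the snippet above is cut off, here is a minimal self-contained sketch
of the same pattern (receiver-based Kafka stream plus updateStateByKey on
Spark 1.x). The ZK quorum, group, topic map, state type, and the Redis write
are assumptions, not from the post:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

val conf = new SparkConf().setAppName("kafka-state-sketch")
val ssc = new StreamingContext(conf, Seconds(5))
ssc.checkpoint("/tmp/spark-checkpoint")  // updateStateByKey requires checkpointing

val ZKQuorum = "localhost:2181"          // assumed
val ConsumerGroup = "sketch-group"       // assumed
val TopicsMap = Map("events" -> 1)       // assumed: topic -> receiver threads

val NoOfReceiverInstances = 1
val kafkaStreams = (1 to NoOfReceiverInstances).map(
  _ => KafkaUtils.createStream(ssc, ZKQuorum, ConsumerGroup, TopicsMap)
)

// Keep a running count per message; the real app branches on message type.
val updateFunc: (Seq[Int], Option[Int]) => Option[Int] =
  (values, state) => Some(values.sum + state.getOrElse(0))

ssc.union(kafkaStreams)
  .map { case (_, msg) => (msg, 1) }
  .updateStateByKey(updateFunc)
  .foreachRDD(_.foreachPartition { part =>
    // open one Redis connection per partition and write each (key, count)
    part.foreach { case (k, v) => println(s"$k -> $v") } // stand-in for redis.set
  })

ssc.start()
ssc.awaitTermination()

Note that updateStateByKey keeps every key in memory indefinitely unless the
update function returns None for keys that should expire, so unbounded key
cardinality is itself a common source of executor OOM in this pattern.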
(including partitioning in reduceByKey) and
4. joining a couple of MySQL tables using JdbcRdd.
Of late, we are seeing major instabilities where the app crashes on a lost
executor which itself failed due to an OutOfMemory error, as below. This looks
almost identical to https://issues.apache.org/jira/browse/SPARK-4885, even
though we are seeing this error in Spark 1.1.
2015-01-15 20:12:51,653 [handle-message-executor-13] ERROR
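For context, the JdbcRdd usage being referred to looks roughly like this (a
sketch; the connection string, queries, key ranges, and partition counts are
all placeholders). JdbcRDD splits the key range across partitions by binding
the two '?' markers per task:

import java.sql.{DriverManager, ResultSet}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._  // pair-RDD implicits (needed on 1.1)
import org.apache.spark.rdd.JdbcRDD

val sc = new SparkContext(new SparkConf().setAppName("jdbcrdd-sketch"))

def conn() = DriverManager.getConnection("jdbc:mysql://db-host/mydb", "user", "secret")

val users = new JdbcRDD(
  sc, conn _,
  "SELECT id, name FROM users WHERE id >= ? AND id <= ?",
  1L, 1000000L,  // overall key range to split
  10,            // partitions (tasks)
  (rs: ResultSet) => (rs.getLong(1), rs.getString(2))
)

val orders = new JdbcRDD(
  sc, conn _,
  "SELECT user_id, total FROM orders WHERE user_id >= ? AND user_id <= ?",
  1L, 1000000L, 10,
  (rs: ResultSet) => (rs.getLong(1), rs.getDouble(2))
)

val joined = users.join(orders)  // the MySQL-table join mentioned above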
/configuration.html
Thanks
Jerry
From: MEETHU MATHEW [mailto:meethu2...@yahoo.co.in]
Sent: Wednesday, August 20, 2014 4:48 PM
To: Akhil Das; Ghousia
Cc: user@spark.apache.org
Subject: Re: OutOfMemory Error
Hi,
How to increase the heap size?
What is the difference between spark executor memory and heap size?
to a new huge value, resulting in OutOfMemory Error.
On Mon, Aug 18, 2014 at 12:34 PM, Akhil Das ak...@sigmoidanalytics.com
wrote:
I believe spark.shuffle.memoryFraction is the one you are looking for.
spark.shuffle.memoryFraction : Fraction of Java heap to use for
aggregation and cogroups
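To make that concrete, here is how those knobs are typically set (Spark 1.x
property names; the values are illustrative only). spark.executor.memory is
the executor JVM heap size, which also answers the heap-size question above:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("memory-tuning-sketch")
  .set("spark.executor.memory", "4g")          // executor JVM heap size
  .set("spark.shuffle.memoryFraction", "0.4")  // heap share for shuffle aggregation/cogroup (default 0.2)
  .set("spark.storage.memoryFraction", "0.5")  // heap share for cached RDDs (default 0.6)
val sc = new SparkContext(conf)

// Equivalently on the command line:
//   spark-submit --executor-memory 4g --conf spark.shuffle.memoryFraction=0.4 ...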
Hi,
I am trying to implement machine learning algorithms on Spark. I am working
on a 3-node cluster, with each node having 5GB of memory. Whenever I work
with a slightly larger number of records, I end up with an OutOfMemory
Error. The problem is that even if the number of records is only slightly
higher, the intermediate result from a transformation is huge, and this
results in an OutOfMemory Error.
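Not from this thread, but as an illustration of one common mitigation: split
the oversized intermediate result across more partitions and let it spill to
disk instead of holding it all on the heap (transform stands in for the
expensive step, and the partition count is a placeholder):

import scala.reflect.ClassTag
import org.apache.spark.rdd.RDD
import org.apache.spark.storage.StorageLevel

def materialize[A, B: ClassTag](records: RDD[A], transform: A => B): RDD[B] =
  records
    .repartition(200)                            // smaller per-task slices
    .map(transform)
    .persist(StorageLevel.MEMORY_AND_DISK_SER)   // spill to disk instead of OOM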