That looks like it's during recovery from a checkpoint, so it'd be driver
memory not executor memory.
How big is the checkpoint directory that you're trying to restore from?
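(For what it's worth, heap is normally set via spark-submit flags rather than a raw
-Xmx in the submit scripts; a rough sketch, with placeholder class, jar, and sizes:)

  # The class, jar, and sizes are placeholders; --driver-memory is what
  # matters for checkpoint recovery, since that happens in the driver JVM.
  spark-submit \
    --class com.example.StreamingJob \
    --master yarn \
    --driver-memory 4g \
    --executor-memory 2g \
    streaming-app.jar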
On Mon, Aug 10, 2015 at 10:57 AM, Dmitry Goldenberg
dgoldenberg...@gmail.com wrote:
We're getting the below error.
I wonder during recovery from a checkpoint whether we can estimate the size
of the checkpoint and compare it with Runtime.getRuntime().freeMemory().
If the size of the checkpoint is much bigger than free memory, log a warning, etc.
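Something along these lines might work as a rough sketch (warnIfCheckpointTooLarge
is just an illustrative name, and on-disk size is only a crude proxy for what
recovery will actually hold in the driver heap):

  import org.apache.hadoop.conf.Configuration
  import org.apache.hadoop.fs.{FileSystem, Path}

  // Compare the on-disk size of the checkpoint directory with the JVM's
  // currently free heap and warn before attempting recovery.
  def warnIfCheckpointTooLarge(checkpointDir: String): Unit = {
    val fs = FileSystem.get(new Configuration())
    val checkpointBytes = fs.getContentSummary(new Path(checkpointDir)).getLength
    val freeHeapBytes = Runtime.getRuntime.freeMemory()
    if (checkpointBytes > freeHeapBytes) {
      println(s"WARNING: checkpoint is ~$checkpointBytes bytes on disk but only " +
        s"$freeHeapBytes bytes of heap are free; recovery may OOM the driver.")
    }
  }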
Cheers
On Mon, Aug 10, 2015 at 9:34 AM, Dmitry Goldenberg
dgoldenberg...@gmail.com wrote:
We're getting the below error. Tried increasing spark.executor.memory, e.g.
from 1g to 2g, but the error still happens.
Any recommendations? Something to do with specifying -Xmx in the submit job
scripts?
Thanks.
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
Would there be a way to chunk up/batch up the contents of the checkpointing
directories as they're being processed by Spark Streaming? Is it mandatory
to load the whole thing in one go?
On Mon, Aug 10, 2015 at 12:42 PM, Ted Yu yuzhih...@gmail.com wrote:
I wonder during recovery from a
You need to keep a certain number of rdds around for checkpointing, based
on e.g. the window size. Those would all need to be loaded at once.
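Roughly speaking (the source, path, and durations below are placeholders): with
10-second batches and a 30-minute window, on the order of 180 batch RDDs fall
inside the window and have to be tracked, and on recovery they get loaded together.

  import org.apache.spark.SparkConf
  import org.apache.spark.streaming.{Minutes, Seconds, StreamingContext}

  // Sketch: the number of batch RDDs kept around for a windowed computation
  // grows with the window length relative to the batch interval.
  val conf = new SparkConf().setAppName("windowed-counts")
  val ssc = new StreamingContext(conf, Seconds(10))            // 10s batches
  ssc.checkpoint("hdfs:///tmp/checkpoints/windowed-counts")    // placeholder dir

  val words = ssc.socketTextStream("localhost", 9999)          // placeholder source
  val counts = words
    .map(w => (w, 1L))
    .reduceByKeyAndWindow(_ + _, Minutes(30), Seconds(10))     // 30-minute window
  counts.print()
  ssc.start()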
On Mon, Aug 10, 2015 at 11:49 AM, Dmitry Goldenberg
dgoldenberg...@gmail.com wrote:
Would there be a way to chunk up/batch up the contents of the
Looks like a workaround is to reduce the *window length*.
*Cheers*
On Mon, Aug 10, 2015 at 10:07 AM, Cody Koeninger c...@koeninger.org wrote:
You need to keep a certain number of rdds around for checkpointing, based
on e.g. the window size. Those would all need to be loaded at once.
On Mon, Aug
You need to keep a certain number of rdds around for checkpointing --
that seems like a hefty expense to pay in order to achieve fault
tolerance. Why does Spark persist whole RDDs of data? Shouldn't it be
sufficient to just persist the offsets, to know where to resume from?
Thanks.
On Mon,
Well, RDDs also contain data, don't they?
The question is, what can be so hefty in the checkpointing directory to
cause the Spark driver to run out of memory? It seems that this makes
checkpointing expensive in terms of I/O and memory consumption. Two
network hops -- to the driver, then to the workers.
The rdd is indeed defined by mostly just the offsets / topic partitions.
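For illustration, with the direct stream (Spark 1.x Kafka API; the broker and
topic below are placeholders, and ssc is a StreamingContext as in the earlier
sketch), the per-batch metadata is just a set of offset ranges:

  import kafka.serializer.StringDecoder
  import org.apache.spark.streaming.kafka.{HasOffsetRanges, KafkaUtils}

  val kafkaParams = Map("metadata.broker.list" -> "broker1:9092")   // placeholder broker
  val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
    ssc, kafkaParams, Set("my-topic"))                              // placeholder topic

  // Each batch RDD is defined by (topic, partition, fromOffset, untilOffset);
  // the messages themselves are re-read from Kafka when the RDD is computed.
  stream.foreachRDD { rdd =>
    val ranges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
    ranges.foreach { r =>
      println(s"${r.topic} [${r.partition}] offsets ${r.fromOffset} -> ${r.untilOffset}")
    }
  }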
On Mon, Aug 10, 2015 at 3:24 PM, Dmitry Goldenberg dgoldenberg...@gmail.com
wrote:
You need to keep a certain number of rdds around for checkpointing --
that seems like a hefty expense to pay in order to achieve fault
No, it's not like a given KafkaRDD object contains an array of messages
that gets serialized with the object. Its compute method generates an
iterator of messages as needed, by connecting to Kafka.
I don't know what was so hefty in your checkpoint directory, because you
deleted it. My