Spill-overs are a common issue for in-memory computing systems; after all,
memory is limited. In Spark, where RDDs are immutable, if an RDD is created
whose size exceeds half of a node's RAM, then a transformation producing the
consequent RDD' can potentially fill all of the node's memory, which can
cause spill-over into swap space.
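
One way to keep the JVM heap out of OS swap is to let Spark itself spill
partitions to local disk, by persisting the derived RDD with
StorageLevel.MEMORY_AND_DISK and, if needed, adjusting the unified memory
settings. A minimal sketch follows; the application name, data size,
partition count and the 0.6/0.5 fractions are illustrative only, not taken
from this thread:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

object SpillSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("spill-sketch")
      // Illustrative values; see the memory-management section of the tuning guide.
      .set("spark.memory.fraction", "0.6")
      .set("spark.memory.storageFraction", "0.5")
    val sc = new SparkContext(conf)

    // Hypothetical large RDD; in practice this would come from a real data source.
    val bigRdd = sc.parallelize(1L to 100000000L, numSlices = 200)

    // MEMORY_AND_DISK lets Spark write partitions that do not fit in executor
    // memory to local disk, rather than leaving the heap to the OS swap space.
    val derived = bigRdd.map(x => (x % 1024, x)).persist(StorageLevel.MEMORY_AND_DISK)

    println(derived.count())
    sc.stop()
  }
}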

Dr Mich Talebzadeh



LinkedIn
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw



http://talebzadehmich.wordpress.com



On 13 May 2016 at 00:38, Takeshi Yamamuro <linguin....@gmail.com> wrote:

> Hi,
>
> Which version of Spark you use?
> The recent one cannot handle this kind of spilling, see:
> http://spark.apache.org/docs/latest/tuning.html#memory-management-overview
> .
>
> // maropu
>
> On Fri, May 13, 2016 at 8:07 AM, Ashok Kumar <ashok34...@yahoo.com.invalid> wrote:
>
>> Hi,
>>
>> How can one avoid having Spark spill over after filling the node's memory?
>>
>> Thanks
>>
>
>
> --
> ---
> Takeshi Yamamuro
>
