From: Nirav Patel [mailto:npa...@xactlycorp.com]
Sent: Wednesday, February 3, 2016 11:31 AM
To: Stefan Panayotov
Cc: Jim Green; Ted Yu; Jakob Odersky; user@spark.apache.org
Subject: Re: Spark 1.5.2 memory error
Hi Stefan,
Welcome to the OOM - heap space club. I have been struggling with s...
Mohammed Guller <moham...@glassbeam.com> wrote:

> Nirav,
>
> Sorry to hear about your experience with Spark; however, "sucks" is a
> very strong word. Many organizations are processing more than 150GB
> of data with Spark.
>
> Mohammed
>
> Author: Big Data Analytics with Spark
> <http://www.amazon.com/Big-Data-Analytics-Spark-Practitioners/dp/1484209656/>
> ...mpl.java:removeOrTrackCompletedContainersFromContext(529))
> - Removed completed containers from NM context:
> [container_1454509557526_0014_01_93]
>
> I'd appreciate any suggestions.
>
> Thanks,
>
> Stefan Panayotov, PhD
> Home: 610-355-0919
> Cell: 610-517-5586
> email: spanayo...@msn.com
Can you share some code that produces the error? It is probably not
due to Spark but rather the way data is handled in the user code.
Does your code call any reduceByKey actions? These are often a source
for OOM errors.
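Jakob's point about reduceByKey can be illustrated without a cluster. Below is a minimal plain-Python sketch (no Spark required; the names and sample data are made up for illustration) of why per-key combining uses far less memory than collecting every value for a key at once:

```python
# Plain-Python sketch of the memory difference Jakob describes:
# a groupByKey-style step materializes every value for a key together,
# while a reduceByKey-style step keeps only one running accumulator
# per key. Sample data is illustrative, not from the thread.
from collections import defaultdict

pairs = [("a", 1), ("b", 2), ("a", 3), ("a", 4)]

def grouped(kv_pairs):
    """groupByKey analogue: all values per key held in memory at once."""
    out = defaultdict(list)
    for k, v in kv_pairs:
        out[k].append(v)
    return dict(out)

def reduced(kv_pairs, combine=lambda a, b: a + b):
    """reduceByKey analogue: values folded eagerly into one accumulator per key."""
    out = {}
    for k, v in kv_pairs:
        out[k] = combine(out[k], v) if k in out else v
    return out
```

The `grouped` shape grows with the number of values per key, so a single hot key can exhaust the heap; the `reduced` shape grows only with the number of distinct keys.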
On Tue, Feb 2, 2016 at 1:22 PM, Stefan Panayotov wrote:
For the memoryOverhead I have the default of 10% of 16g, and Spark version is
1.5.2.
Stefan Panayotov, PhD
Sent from Outlook Mail for Windows 10 phone
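For reference, the Spark 1.5.x default works out as max(10% of executor memory, 384 MB), so Stefan's 16 g executors get roughly 1.6 GB of overhead. A quick sketch of that arithmetic (the 0.10 factor and 384 MB floor are the documented 1.5-era YARN defaults):

```python
# Default spark.yarn.executor.memoryOverhead in Spark 1.5.x:
# max(10% of executor memory, 384 MB).
def default_memory_overhead_mb(executor_memory_mb, factor=0.10, floor_mb=384):
    return max(int(executor_memory_mb * factor), floor_mb)

overhead = default_memory_overhead_mb(16 * 1024)  # 16 g executor -> 1638 MB
```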
From: Ted Yu
Sent: Tuesday, February 2, 2016 4:52 PM
To: Jakob Odersky
Cc: Stefan Panayotov; user@spark.apache.org
Subject: Re: Spark 1.5.2 memory error
What value do you use for spark.yarn.executor.memoryOverhead ?
Please see https://spark.apache.org/docs/latest/running-on-yarn.html for
description of the parameter.
Which Spark release are you using ?
Cheers
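If the default proves too small, the parameter can be raised at submit time. A hypothetical example (the 2048 MB value and the application jar name are illustrative, not recommendations from the thread):

```shell
# Illustrative only: raise per-executor off-heap overhead to 2 GB on YARN.
spark-submit \
  --master yarn \
  --executor-memory 16g \
  --conf spark.yarn.executor.memoryOverhead=2048 \
  your-app.jar
```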
On Tue, Feb 2, 2016 at 1:38 PM, Jakob Odersky wrote:
> Can you share some code that produces the error?