Hi Liquan,

I have 8 workers, each with 15.7 GB of memory.

What you said makes sense, but if I don't increase the heap space, I keep
getting "GC overhead limit exceeded".
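
In case the concrete settings matter, here is a minimal sketch of how I
understand the memory configuration should look (the app name and the 12g
figure are placeholders, not my actual job). If executor memory is granted
per worker, it has to fit within each worker's ~15.7 GB, no matter what
-Xmx asks for:

    import org.apache.spark.{SparkConf, SparkContext}

    // Minimal sketch, not my real code: names and values are placeholders.
    // spark.executor.memory is the heap each executor gets on a worker,
    // so it has to fit in a single worker's RAM (~15.7 GB here).
    val conf = new SparkConf()
      .setAppName("my-job")
      .set("spark.executor.memory", "12g")
    val sc = new SparkContext(conf)

So on this cluster, anything above roughly 15 GB per executor can't
actually be granted, if I understand correctly.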

Thanks!
Anny

On Wed, Oct 1, 2014 at 1:41 PM, Liquan Pei wrote:

> Hi
>
> How many nodes are in your cluster? It seems to me that 64g will not help
> if each of your nodes doesn't have that much memory.
>
> Liquan
>
> On Wed, Oct 1, 2014 at 1:37 PM, anny9699 <[hidden email]> wrote:
>
>> Hi,
>>
>> After reading some previous posts about this issue, I have increased the
>> Java heap space to "-Xms64g -Xmx64g", but I still hit the
>> "java.lang.OutOfMemoryError: GC overhead limit exceeded" error. Does
>> anyone have other suggestions?
>>
>> I am reading 200 GB of data and my total memory is 120 GB, so I use
>> "MEMORY_AND_DISK_SER" and Kryo serialization (a sketch of this setup
>> appears below, after the quoted thread).
>>
>> Thanks a lot!
>
>
> --
> Liquan Pei
> Department of Physics
> University of Massachusetts Amherst
>
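
For completeness, here is roughly what the storage-level and Kryo setup
from my original message (quoted above) looks like. This is a simplified
sketch, with "hdfs://..." standing in for my real ~200 GB input:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    // Simplified sketch of the setup quoted above; the path is a
    // placeholder for my actual input.
    val conf = new SparkConf()
      .setAppName("read-200gb")
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    val sc = new SparkContext(conf)

    val data = sc.textFile("hdfs://...")
    // Store partitions serialized, spilling to disk when the cluster's
    // 120 GB of memory is not enough to hold everything.
    data.persist(StorageLevel.MEMORY_AND_DISK_SER)

(My understanding is that serialized storage reduces the number of live
objects the GC has to scan, which is why it is usually suggested for
GC-heavy jobs, but it still hasn't made the error go away for me.)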



