This looks like you are just running your own program with a plain
java command. To run Spark programs, use spark-submit, which has
options that control the executor and driver memory. The -Xms/-Xmx
settings below are not affecting Spark's executor memory.
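
For example (memory sizes here are illustrative, not a recommendation):

  bin/spark-submit \
    --class MyClass \
    --driver-memory 8g \
    --executor-memory 16g \
    ./target/MyProject.jar

The -Xmx you pass to a plain java command only sizes that one JVM;
executor heaps come from spark-submit's options or the Spark conf.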

On Wed, Oct 1, 2014 at 10:21 PM, 陈韵竹 <anny9...@gmail.com> wrote:
> Thanks Sean. This is how I set the memory; I set it when I start the
> job:
>
> java -Xms64g -Xmx64g -cp \
>   /root/spark/lib/spark-assembly-1.0.0-hadoop1.0.4.jar:/root/scala/lib/scala-library.jar:./target/MyProject.jar \
>   MyClass
>
> Is there a problem with it?
>
>
>
> On Wed, Oct 1, 2014 at 2:03 PM, Sean Owen <so...@cloudera.com> wrote:
>>
>> How are you setting this memory? You may be configuring the wrong
>> process's memory, like the driver and not the executors.
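>>
>> For instance, the two are controlled by separate settings (values
>> illustrative):
>>
>>   spark.driver.memory   8g
>>   spark.executor.memory 16g
>>
>> so raising one does nothing for the other.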
>>
>> On Oct 1, 2014 9:37 PM, "anny9699" <anny9...@gmail.com> wrote:
>>>
>>> Hi,
>>>
>>> After reading some previous posts about this issue, I increased the
>>> Java heap space to "-Xms64g -Xmx64g", but I still hit the
>>> "java.lang.OutOfMemoryError: GC overhead limit exceeded" error. Does
>>> anyone have other suggestions?
>>>
>>> I am reading a 200 GB dataset and my total memory is 120 GB, so I use
>>> "MEMORY_AND_DISK_SER" and Kryo serialization.
>>>
>>> Thanks a lot!
>>>
>

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
