Have you seen the following?
http://stackoverflow.com/questions/27553547/xloggc-not-creating-log-file-if-path-doesnt-exist-for-the-first-time
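In case it helps, a sketch of the workaround from that thread (directory and paths below are examples, not verified against your cluster): -Xloggc fails silently when the target directory does not exist, and a relative path like ./jvm_gc.log typically resolves inside each executor's working directory on the worker node, so the file never shows up where spark-submit was launched. Passing an absolute path to a directory that already exists on every worker avoids both problems:

```shell
# Example path only: /var/log/spark-gc must exist on every worker node.
mkdir -p /var/log/spark-gc

spark-submit \
  --conf "spark.executor.extraJavaOptions=-XX:+UseG1GC -XX:+PrintGCDetails \
-XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps \
-Xloggc:/var/log/spark-gc/jvm_gc.log" \
  your_app.jar   # placeholder for the rest of your spark-submit arguments
```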
On Sat, Jul 23, 2016 at 5:18 PM, Ascot Moss wrote:
I tried to add -Xloggc:./jvm_gc.log
--conf "spark.executor.extraJavaOptions=-XX:+UseG1GC -XX:+PrintGCDetails
-XX:+PrintGCTimeStamps -Xloggc:./jvm_gc.log -XX:+PrintGCDateStamps"
However, I could not find ./jvm_gc.log.
How can I resolve the OOM and GC log issues?
Regards
On Sun, Jul 24, 2016 at 6:37
My JDK is Java 1.8u40.
On Sun, Jul 24, 2016 at 3:45 AM, Ted Yu wrote:
Since you specified +PrintGCDetails, you should be able to get some more
detail from the GC log.
Also, which JDK version are you using?
Please use Java 8 where G1GC is more reliable.
On Sat, Jul 23, 2016 at 10:38 AM, Ascot Moss wrote:
Hi,
I added the following parameter:
--conf "spark.executor.extraJavaOptions=-XX:+UseG1GC
-XX:MaxGCPauseMillis=200 -XX:ParallelGCThreads=20 -XX:ConcGCThreads=5
-XX:InitiatingHeapOccupancyPercent=70 -XX:+PrintGCDetails
-XX:+PrintGCTimeStamps"
I still got a Java heap space error.
Any ideas?
I can see a large number of collects happening on the driver, and eventually the driver runs out of memory. (I am not sure whether you have persisted any RDD or DataFrame.) Maybe you would want to avoid doing so many collects, or avoid persisting unwanted data in memory.
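To make that concrete, a minimal sketch (assuming a Spark 2.x pyspark shell where the `spark` session object is predefined; the dataset paths are made up): collect() ships the entire dataset to the driver, a common cause of driver-side OOM, while take() or a distributed write keeps the data on the executors:

```shell
# Sketch run via the pyspark shell; adapt the paths to your job.
pyspark <<'EOF'
df = spark.read.parquet("/data/training_set")   # example input path

# Risky: collect() pulls every row into driver memory.
# rows = df.collect()

# Safer alternatives: inspect only a few rows, or keep the write distributed.
print(df.take(10))
df.write.mode("overwrite").parquet("/data/output")   # example output path
EOF
```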
To begin with, you may want
Hi
Please help!
When running the random forest training phase in cluster mode, I got a "GC
overhead limit exceeded" error.
I used two parameters when submitting the job to the cluster:
--driver-memory 64g \
--executor-memory 8g \
My current settings (spark-defaults.conf):
spark.executor.memory