Have you seen the following?
http://stackoverflow.com/questions/27553547/xloggc-not-creating-log-file-if-path-doesnt-exist-for-the-first-time
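As that thread notes, the JVM does not create missing directories for -Xloggc, and a relative path such as ./jvm_gc.log resolves against each executor's working directory, which is why the file can be hard to find. A minimal sketch of the usual workaround, assuming a pre-created, node-local directory (/tmp/spark-gc here is purely illustrative):

```shell
# Create the directory on every worker node first; -Xloggc will not create it.
mkdir -p /tmp/spark-gc

# Then point -Xloggc at an absolute path, e.g.:
# --conf "spark.executor.extraJavaOptions=-XX:+UseG1GC -XX:+PrintGCDetails \
#   -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xloggc:/tmp/spark-gc/jvm_gc.log"
```

If you run on YARN, it can be simpler to let the log land in the container's working directory and retrieve it afterwards with `yarn logs -applicationId <appId>`.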
On Sat, Jul 23, 2016 at 5:18 PM, Ascot Moss wrote:
I tried to add -Xloggc:./jvm_gc.log
--conf "spark.executor.extraJavaOptions=-XX:+UseG1GC -XX:+PrintGCDetails
-XX:+PrintGCTimeStamps -Xloggc:./jvm_gc.log -XX:+PrintGCDateStamps"
However, I could not find ./jvm_gc.log.
How can I resolve the OOM and GC log issues?
Regards
On Sun, Jul 24, 2016 at 6:37
My JDK is Java 1.8 u40
On Sun, Jul 24, 2016 at 3:45 AM, Ted Yu wrote:
> Since you specified +PrintGCDetails, you should be able to get some more
> detail from the GC log.
>
> Also, which JDK version are you using ?
>
> Please use Java 8 where G1GC is more reliable.
>
On Sat, Jul 23, 2016 at 10:38 AM, Ascot Moss wrote:
Hi,
I added the following parameter:
--conf "spark.executor.extraJavaOptions=-XX:+UseG1GC
-XX:MaxGCPauseMillis=200 -XX:ParallelGCThreads=20 -XX:ConcGCThreads=5
-XX:InitiatingHeapOccupancyPercent=70 -XX:+PrintGCDetails
-XX:+PrintGCTimeStamps"
I still get a Java heap space error.
Any idea to
I can see a large number of collections happening on the driver and, eventually,
the driver runs out of memory (I am not sure whether you have persisted any
RDD or DataFrame). You may want to avoid doing so many collections, or avoid
persisting unwanted data in memory.
To begin with, you may want
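To make that suggestion concrete, a PySpark-style pseudocode sketch of trading collect() for driver-friendly alternatives (`training_rdd` is a hypothetical name, not from the original post):

```python
# PySpark pseudocode sketch; `training_rdd` is a hypothetical cached RDD.
# rows = training_rdd.collect()   # avoid: pulls every partition into the driver heap

n = training_rdd.count()          # only a single number reaches the driver
preview = training_rdd.take(10)   # a bounded sample instead of the full dataset
training_rdd.unpersist()          # release cached blocks you no longer need
```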
Hi,
Please help!
When running the random forest training phase in cluster mode, I got a "GC
overhead limit exceeded" error.
I used two parameters when submitting the job to the cluster:
--driver-memory 64g \
--executor-memory 8g \
My Current settings:
(spark-defaults.conf)
spark.executor.memory
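For comparison, memory settings like these can live either on the spark-submit command line or in spark-defaults.conf; a sketch using the values from above (whether 64g for the driver and 8g per executor is appropriate depends on your cluster):

```
spark.driver.memory      64g
spark.executor.memory    8g
```

Note that the command-line flags (--driver-memory, --executor-memory) take precedence over values in spark-defaults.conf.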
Hi Akhil,

I am using:
Cassandra: 3.0.5
Spark: 1.6.1
Scala: 2.10
Spark-Cassandra connector: 1.6.0

From: Akhil Das [mailto:ak...@hacked.work]
Sent: 01 July 2016 11:38
To: Joaquin Alzola <joaquin.alz...@lebara.com>
Cc: user@spark.apache.org
Subject: Re: Remote RPC client disassociated

This looks like a version conflict; which version of Spark are you using?
> food_count.collect()
>
> Error I get when running the above command:
>
> [Stage 0:> (0 + 3) / 7]
> 16/06/30 10:40:36 ERROR TaskSchedulerImpl: Lost executor 0 on as5: Remote RPC
> client disassociated. Likely due to containers exceeding thresholds, or network
> issues. Check driver logs for WARN messages.

>>> 16/06/30 10:44:34 ERROR util.Utils: Uncaught exception in thread stdout
>>> writer for python
>>> java.lang.AbstractMethodError:
>>> pyspark_cassandra.DeferringRowReader.read(Lcom/datastax/driver/core/Row;Lcom/datastax/spark/connector/CassandraRowMetadata;)Ljava/lang/Object;

>> You are trying to
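An AbstractMethodError like the one above is the classic signature of binary incompatibility: a reader or connector compiled against a different Spark/Scala version than the one running on the cluster. A hedged sketch of pinning matching artifacts at submit time (the coordinates are illustrative, chosen to match the Spark 1.6.x / Scala 2.10 versions listed above; the job file name is hypothetical):

```shell
# Match the connector artifact to your Spark and Scala line
# (here Spark 1.6.x on Scala 2.10, per the versions listed above).
spark-submit \
  --packages com.datastax.spark:spark-cassandra-connector_2.10:1.6.0 \
  my_job.py
```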
all on one 16-core machine) because "remote RPC client disassociated".
Below is the full error message. I would appreciate any pointer on debugging
this problem. Thanks!
16/06/29 12:43:50 WARN TaskSetManager: Lost task 7.0 in stage 2581.0 (TID
9304, no139.nome.nx): TaskKilled (killed int