Just looking at the thread dump from your original email, the 3 executor
threads are all trying to load classes. (One thread is actually loading
some class, and the others are blocked waiting to load a class, most likely
trying to load the same thing.) That is really weird, definitely not
expected: the executor has already scanned the first block of data from HDFS
and hangs while starting the 2nd block, so all the classes should already be
loaded in the JVM at that point.
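For anyone else digging into a dump like this, one quick way to spot the pattern is to scan the jstack output for BLOCKED threads that have a classloader frame on the stack. A minimal sketch (the thread names and stack frames in the sample below are made up for illustration):

```python
import re

def blocked_classloading_threads(dump: str):
    """Given jstack-style output, return the names of threads that are
    BLOCKED with a ClassLoader.loadClass frame on their stack."""
    blocked = []
    # jstack separates per-thread sections with blank lines; each section
    # starts with the thread name in double quotes.
    for section in re.split(r'\n\n+', dump):
        m = re.match(r'"([^"]+)"', section)
        if not m:
            continue
        if 'BLOCKED' in section and 'ClassLoader.loadClass' in section:
            blocked.append(m.group(1))
    return blocked

sample = '''"Executor task launch worker-0" daemon prio=10
   java.lang.Thread.State: RUNNABLE
\tat java.lang.ClassLoader.defineClass(ClassLoader.java:800)

"Executor task launch worker-1" daemon prio=10
   java.lang.Thread.State: BLOCKED (on object monitor)
\tat java.lang.ClassLoader.loadClass(ClassLoader.java:404)
'''

print(blocked_classloading_threads(sample))  # ['Executor task launch worker-1']
```

If several worker threads show up here waiting on the same monitor, that matches the "all blocked on the same class load" symptom described above.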
Thanks
Yong
--
From: iras...@cloudera.com
Date: Tue, 18 Aug 2015 12:17:56 -0500
Subject: Re: Spark Job Hangs on our production cluster
To: java8...@hotmail.com
CC
I still want to check if anyone can provide any help related to this issue:
Spark 1.2.2 hangs on our production cluster when reading big HDFS data (7800
avro blocks), while it looks fine for small data (769 avro blocks).
I enabled the debug level in the Spark log4j and attached the log file, in
case it helps. In the Spark UI the executor heap is set to 24G.
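For reference, this is roughly what enabling DEBUG looks like in Spark's conf/log4j.properties (a sketch based on the 1.x template; exact appender settings may differ per deployment):

```properties
# conf/log4j.properties (start from log4j.properties.template)
log4j.rootCategory=DEBUG, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
```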
Thanks
Yong
--
From: igor.ber...@gmail.com
Date: Tue, 11 Aug 2015 23:31:59 +0300
Subject: Re: Spark Job Hangs on our production cluster
To: java8...@hotmail.com
CC: user@spark.apache.org
how do you want to process 1T