Re: Spark Job Hangs on our production cluster

2015-08-18 Thread Imran Rashid
Just looking at the thread dump from your original email, the 3 executor threads are all trying to load classes. (One thread is actually loading some class; the others are blocked waiting to load a class, most likely trying to load the same thing.) That is really weird, definitely not
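
(Not part of the original reply; a minimal sketch of how one could look for this pattern from inside an executor JVM, assuming you can run a diagnostic there. The object name and the "ClassLoader" frame match are illustrative only.)

  import scala.collection.JavaConverters._

  object ClassLoadBlockCheck {
    def main(args: Array[String]): Unit = {
      // Walk every live thread in this JVM and report the ones whose stack
      // contains a ClassLoader frame, i.e. threads loading (or waiting to
      // load) a class -- the pattern described above.
      for ((thread, frames) <- Thread.getAllStackTraces.asScala) {
        if (frames.exists(_.getClassName.contains("ClassLoader"))) {
          println(s"${thread.getName} (${thread.getState}) is in class loading:")
          frames.take(5).foreach(f => println(s"  at $f"))
        }
      }
    }
  }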

RE: Spark Job Hangs on our production cluster

2015-08-18 Thread java8964
, the executor has already scanned the first block of data from HDFS and hangs while starting the 2nd block. All the classes should already be loaded in the JVM in this case. Thanks Yong From: iras...@cloudera.com Date: Tue, 18 Aug 2015 12:17:56 -0500 Subject: Re: Spark Job Hangs on our production cluster
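
(Not part of the original message; a minimal sketch of the kind of Avro-over-HDFS scan being discussed, written against the Spark 1.x RDD API. The HDFS path is a placeholder and the exact job shape is an assumption.)

  import org.apache.avro.generic.GenericRecord
  import org.apache.avro.mapred.AvroKey
  import org.apache.avro.mapreduce.AvroKeyInputFormat
  import org.apache.hadoop.io.NullWritable
  import org.apache.spark.{SparkConf, SparkContext}

  object AvroScan {
    def main(args: Array[String]): Unit = {
      val sc = new SparkContext(new SparkConf().setAppName("avro-scan"))
      // Each HDFS block of the Avro files becomes roughly one input split,
      // so a data set of thousands of blocks produces thousands of tasks,
      // each reading its block(s) record by record.
      val records = sc.newAPIHadoopFile[AvroKey[GenericRecord], NullWritable,
        AvroKeyInputFormat[GenericRecord]]("hdfs:///path/to/avro/data")
      println(records.map(_._1.datum().toString).count())
      sc.stop()
    }
  }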

Re: Spark Job Hangs on our production cluster

2015-08-18 Thread Imran Rashid
, and hangs while starting the 2nd block. All the classes should already be loaded in the JVM in this case. Thanks Yong -- From: iras...@cloudera.com Date: Tue, 18 Aug 2015 12:17:56 -0500 Subject: Re: Spark Job Hangs on our production cluster To: java8...@hotmail.com CC

Spark Job Hangs on our production cluster

2015-08-17 Thread java8964
To: user@spark.apache.org Subject: RE: Spark Job Hangs on our production cluster Date: Fri, 14 Aug 2015 15:14:10 -0400 I still want to check if anyone can provide any help with this: Spark 1.2.2 hangs on our production cluster when reading big HDFS data (7800 Avro blocks), while it looks

RE: Spark Job Hangs on our production cluster

2015-08-14 Thread java8964
to generate that. But I am not sure if the Avro format could be the cause. Thanks for your help. Yong From: java8...@hotmail.com To: user@spark.apache.org Subject: Spark Job Hangs on our production cluster Date: Tue, 11 Aug 2015 16:19:05 -0400 Currently we have an IBM BigInsight cluster with 1 namenode + 1
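
(Not part of the original message; a minimal sketch for ruling the Avro format in or out by reading one file copied out of HDFS with plain Avro and no Spark at all. The file path comes from the command line; everything else is assumed.)

  import java.io.File
  import org.apache.avro.file.DataFileReader
  import org.apache.avro.generic.{GenericDatumReader, GenericRecord}

  object AvroLocalCheck {
    def main(args: Array[String]): Unit = {
      // If plain Avro can read every record of a sample file without stalling,
      // the format itself is less likely to be the cause of the hang.
      val reader = new DataFileReader[GenericRecord](
        new File(args(0)), new GenericDatumReader[GenericRecord]())
      var n = 0L
      while (reader.hasNext) { reader.next(); n += 1 }
      reader.close()
      println(s"read $n records")
    }
  }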

Spark Job Hangs on our production cluster

2015-08-11 Thread java8964
Currently we have an IBM BigInsight cluster with 1 namenode + 1 JobTracker + 42 data/task nodes, which runs BigInsight V3.0.0.2, corresponding to Hadoop 2.2.0 with MR1. Since IBM BigInsight doesn't come with Spark, we built Spark 1.2.2 with Hadoop 2.2.0 + Hive 0.12 ourselves, and

Re: Spark Job Hangs on our production cluster

2015-08-11 Thread Jeff Zhang
the Spark UI the executor heap is set to 24G. Thanks Yong -- From: igor.ber...@gmail.com Date: Tue, 11 Aug 2015 23:31:59 +0300 Subject: Re: Spark Job Hangs on our production cluster To: java8...@hotmail.com CC: user@spark.apache.org how do you want to process 1T
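
(Not part of the original message; a minimal sketch of where an executor heap like the 24G shown in the Spark UI is usually configured. The app name and job body are placeholders.)

  import org.apache.spark.{SparkConf, SparkContext}

  object HeapConfig {
    def main(args: Array[String]): Unit = {
      // spark.executor.memory sets the executor JVM heap that the Executors
      // tab of the Spark UI reports.
      val conf = new SparkConf()
        .setAppName("big-avro-job")
        .set("spark.executor.memory", "24g")
      val sc = new SparkContext(conf)
      // ... job body ...
      sc.stop()
    }
  }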