You could try setting spark.akka.frameSize when creating the
SparkContext, but it's strange that the message says your master is
dead; usually it's the other way around, and an executor dies. Can you
also explain the behavior of your application (what exactly are you
doing with the 8 GB of data)?

E.g.:

    val conf = new SparkConf()
      .setMaster("spark://master1:7077")
      .setAppName("MyApp")
      .set("spark.executor.memory", "20g")
      .set("spark.rdd.compress", "true")
      .set("spark.storage.memoryFraction", "1")
      // give connections more time (in seconds) before an ack is considered failed
      .set("spark.core.connection.ack.wait.timeout", "6000")
      // raise the maximum Akka message size (in MB) for large task results
      .set("spark.akka.frameSize", "100")
    val sc = new SparkContext(conf)
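
If you want to double-check that these values actually took effect, something
like the following should work once the context is up (a minimal sketch,
assuming the conf above):

    // Sanity check: read the settings back from the live SparkContext
    println(sc.getConf.get("spark.akka.frameSize"))                   // expect "100"
    println(sc.getConf.get("spark.core.connection.ack.wait.timeout")) // expect "6000"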


Thanks
Best Regards

On Tue, Oct 21, 2014 at 7:32 AM, randylu <randyl...@gmail.com> wrote:

>   The cluster also runs other applications every hour as normal, so the
> master is always running. No matter how many cores I use or how much
> input data there is (as long as it is big enough), the application just
> fails about 1.1 hours later.
