ERROR StandaloneSchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.

2016-09-23 Thread muhammet pakyürek
I tried to connect to Cassandra via spark-cassandra-connector 2.0.0 on pyspark, but I get the error below. I think it's related to pyspark/context.py but I don't know how?
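A minimal PySpark sketch of wiring up the connector, assuming the jar is pulled in via spark.jars.packages (the contact host, keyspace, and table names here are placeholders, not from the original post):

    from pyspark import SparkConf
    from pyspark.sql import SparkSession

    # Pull in the connector and point it at a Cassandra contact host (placeholder).
    conf = SparkConf() \
        .set("spark.jars.packages",
             "com.datastax.spark:spark-cassandra-connector_2.11:2.0.0") \
        .set("spark.cassandra.connection.host", "127.0.0.1")

    spark = SparkSession.builder.config(conf=conf).getOrCreate()

    # Read a table through the connector's DataFrame source.
    df = (spark.read
          .format("org.apache.spark.sql.cassandra")
          .options(keyspace="my_keyspace", table="my_table")
          .load())
    df.show()

If the standalone master is unresponsive, the read never gets this far, so checking the master URL and Spark versions first (see the threads below) is worth doing.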

Re: All masters are unresponsive! Giving up.

2015-08-07 Thread Sonal Goyal
SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up. … Looking into the master logs I find:
15/08/06 22:52:28 INFO Master: akka.tcp://sparkDriver@192.168.137.41:48877 got disassociated, removing it.
15/08/06 22:52:46 ERROR Remoting
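Since the master log shows the driver getting disassociated, a quick driver-side sanity check is to confirm the Spark version and the exact master URL in use (a sketch; the master address below is a placeholder to replace with your own):

    from pyspark import SparkContext

    # The master URL must match host:port exactly as the master advertises it.
    sc = SparkContext(master="spark://192.168.137.41:7077", appName="version-check")

    print(sc.version)  # driver-side Spark version; compare with the cluster's
    print(sc.master)   # effective master URL the driver connected with

A mismatch in either one is a common cause of the disassociation above.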

Re: All masters are unresponsive! Giving up.

2015-08-07 Thread Ted Yu
...@gmail.com] Sent: Thursday, August 6, 2015 11:22 PM To: Jeff Jones Cc: user@spark.apache.org Subject: Re: All masters are unresponsive! Giving up. There seems to be a version mismatch somewhere. You can try and find out the cause with debug serialization information. I think the JVM flag
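One way to pass that flag from PySpark is through the standard extraJavaOptions configs (a sketch; note that in client mode the driver JVM is already running by the time SparkConf is read, so the driver-side flag is usually set via spark-submit or spark-defaults.conf instead):

    from pyspark import SparkConf, SparkContext

    # Enable extended serialization debug info on driver and executors.
    debug_flag = "-Dsun.io.serialization.extendedDebugInfo=true"
    conf = SparkConf() \
        .set("spark.driver.extraJavaOptions", debug_flag) \
        .set("spark.executor.extraJavaOptions", debug_flag)

    sc = SparkContext(conf=conf)

With the flag on, serialization failures print the object graph that led to the offending field, which helps pin down where the version mismatch bites.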

Re: All masters are unresponsive! Giving up.

2015-08-07 Thread Igor Berman
@spark.apache.org Subject: Re: All masters are unresponsive! Giving up. There seems to be a version mismatch somewhere. You can try and find out the cause with debug serialization information. I think the JVM flag -Dsun.io.serialization.extendedDebugInfo=true should help. Best Regards

RE: All masters are unresponsive! Giving up.

2015-08-07 Thread Jeff Jones
, 2015 11:22 PM To: Jeff Jones Cc: user@spark.apache.org Subject: Re: All masters are unresponsive! Giving up. There seems to be a version mismatch somewhere. You can try and find out the cause with debug serialization information. I think the jvm flag -Dsun.io.serialization.extendedDebugInfo=true

org.apache.spark.SparkException: Job aborted due to stage failure: All masters are unresponsive! Giving up.

2015-02-10 Thread lakewood
memory
15/02/11 12:22:46 ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
15/02/11 12:22:46 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
15/02/11 12:22:46 INFO TaskSchedulerImpl: Cancelling stage 0
15/02

Re: org.apache.spark.SparkException: Job aborted due to stage failure: All masters are unresponsive! Giving up.

2015-02-10 Thread Akhil Das
WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/02/11 12:22:46 ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
15/02/11 12
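For the "Initial job has not accepted any resources" warning that precedes the error, the usual fix is to request no more memory and cores than the workers have actually registered with the master (a sketch; the master host and the sizes are placeholders to be checked against the cluster UI):

    from pyspark import SparkConf, SparkContext

    # Keep the request within what the workers report on the master's web UI.
    conf = SparkConf() \
        .setMaster("spark://master-host:7077") \
        .set("spark.executor.memory", "1g") \
        .set("spark.cores.max", "2")

    sc = SparkContext(conf=conf)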