I tried to connect to Cassandra from pyspark via spark-cassandra-connector 2.0.0, but
I get the error below.
I think it is related to pyspark/context.py, but I don't know how.
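For context, a typical way to attach the connector when launching pyspark looks like this (the package coordinates and the contact-point address are assumptions for illustration, not taken from the original message):

```shell
# Launch pyspark with the Cassandra connector resolved from spark-packages;
# the 2.0.x connector line targets Spark 2.0 on Scala 2.11.
pyspark \
  --packages datastax:spark-cassandra-connector:2.0.0-s_2.11 \
  --conf spark.cassandra.connection.host=127.0.0.1
```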
SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
…
Looking into the master logs I find:
15/08/06 22:52:28 INFO Master: akka.tcp://sparkDriver@192.168.137.41:48877 got disassociated, removing it.
15/08/06 22:52:46 ERROR Remoting
*From:* ...@gmail.com
*Sent:* Thursday, August 6, 2015 11:22 PM
*To:* Jeff Jones
*Cc:* user@spark.apache.org
*Subject:* Re: All masters are unresponsive! Giving up.
There seems to be a version mismatch somewhere. You can try to find the cause with debug serialization information. I think the JVM flag -Dsun.io.serialization.extendedDebugInfo=true should help.

Best Regards
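A sketch of how that flag can be passed through Spark's standard configuration options (`spark.driver.extraJavaOptions` / `spark.executor.extraJavaOptions`; the application file name is made up):

```shell
# Pass the serialization debug flag to both the driver and executor JVMs.
# With extendedDebugInfo=true, serialization failures (e.g. serialVersionUID
# mismatches) print the full chain of objects being serialized, which helps
# pinpoint which library version is out of sync.
spark-submit \
  --conf "spark.driver.extraJavaOptions=-Dsun.io.serialization.extendedDebugInfo=true" \
  --conf "spark.executor.extraJavaOptions=-Dsun.io.serialization.extendedDebugInfo=true" \
  my_app.py
```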
WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/02/11 12:22:46 ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
15/02/11 12:22:46 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
15/02/11 12:22:46 INFO TaskSchedulerImpl: Cancelling stage 0
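Since the reply points at a version mismatch, one quick sanity check is to compare the Spark versions on the driver machine and on the master host (a sketch, assuming a standalone deployment):

```shell
# On the machine running the driver:
spark-submit --version

# On the master host, the same command should report an identical version;
# a driver built against a different Spark release than the standalone
# master is a common cause of "All masters are unresponsive!".
```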