Has anybody met this high availability problem with ZooKeeper?
2014-09-12 10:34 GMT+08:00 jason chen:
Hi guys,
I configured Spark with the following in spark-env.sh:
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER
-Dspark.deploy.zookeeper.url=host1:2181,host2:2181,host3:2181
-Dspark.deploy.zookeeper.dir=/spark"
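(For reference, my understanding of this setup, assuming the stock sbin scripts and not anything stated in the original mail: the same SPARK_DAEMON_JAVA_OPTS line goes into spark-env.sh on every machine that should run a master, and a master is started on each of them; ZooKeeper then elects one as the active master and the rest stand by.)

  # run on each master host, with the spark-env.sh above in place
  ./sbin/start-master.sh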
And I started spark-shell on the active master, host1:
MASTER
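(The command above is cut off in the archive. It was presumably something along these lines; the 7077 port and the second master host are my guesses based on the usual standalone-HA client URL format, not on the original mail:)

  # hypothetical reconstruction -- hosts and port are assumptions
  MASTER=spark://host1:7077,host2:7077 ./bin/spark-shell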
I checked the javacore file, and it contains:
Dump Event "systhrow" (0004) Detail "java/lang/OutOfMemoryError" "Java heap space" received
After checking the failing thread, I found that it occurred in the SparkFlumeEvent.readExternal() method:
for (i <- 0 until numHeaders) {
  val keyLength = in.readInt()
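For context, a minimal sketch of the header-reading loop in SparkFlumeEvent.readExternal() from the Spark 1.x Flume connector, reconstructed from memory rather than copied verbatim (the helper name readHeaders is mine for illustration; the real method also reads the event body and stores the result on the wrapped AvroFlumeEvent): each header key and value is length-prefixed, and a byte array of that length is allocated and deserialized inside the loop.

  import java.io.ObjectInput

  object FlumeHeaderSketch {
    // Approximate shape of the deserialization loop; the real code routes
    // the byte arrays through a deserialize helper instead of new String.
    def readHeaders(in: ObjectInput): java.util.Map[CharSequence, CharSequence] = {
      val numHeaders = in.readInt()
      val headers = new java.util.HashMap[CharSequence, CharSequence]()
      for (_ <- 0 until numHeaders) {
        val keyLength = in.readInt()
        val keyBuff = new Array[Byte](keyLength)   // allocates an array sized by the length read
        in.readFully(keyBuff)
        val key = new String(keyBuff, "UTF-8")

        val valLength = in.readInt()
        val valBuff = new Array[Byte](valLength)
        in.readFully(valBuff)
        val value = new String(valBuff, "UTF-8")

        headers.put(key, value)
      }
      headers
    }
  }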