On 4 Jun 2015, at 15:59, Chao Chen kandy...@gmail.com wrote:
But when I try to run the PageRank workload from HiBench, it always causes a
node to reboot in the middle of the job, for the Scala, Java, and Python
versions alike. It works fine with the MapReduce version from the same
benchmark.
Do you have vm.swappiness=0? Some vendors recommend setting this to 0 (zero),
although I've seen that cause even the kernel to fail to allocate memory,
which can reboot the node. If that's the case, set vm.swappiness to 5-10 and
decrease spark.*.memory: your spark.driver.memory + spark.executor.memory +
OS overhead, etc. has to fit within each node's physical RAM.
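
For concreteness, a minimal sketch of that tuning might look like the
following; the memory values are assumptions for an 8GB node, not
measurements from this cluster:

  # Lower swappiness at runtime, and persist the setting across reboots
  sudo sysctl -w vm.swappiness=10
  echo 'vm.swappiness=10' | sudo tee -a /etc/sysctl.conf

  # conf/spark-defaults.conf -- leave headroom for the OS and daemons
  spark.driver.memory    2g
  spark.executor.memory  4g

With 8GB per node, something like 4g for executors and 2g for the driver
leaves roughly 2GB for the OS, the HDFS DataNode, and the Spark daemons.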
Hi all,
I am new to Spark. I am trying to deploy HDFS (hadoop-2.6.0) and Spark-1.3.1
on four nodes; each node has 8 cores and 8GB of memory.
One node is configured as the head node running the masters, and the other
three are workers.
But when I try to run the PageRank workload from HiBench, it always causes a
node to reboot in the middle of the job, for the Scala, Java, and Python
versions alike. It works fine with the MapReduce version from the same
benchmark.
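
For reference, a minimal Spark standalone layout matching that description
might look like the snippet below; the hostnames (headnode, worker1-3) are
assumptions, not taken from the original message:

  # conf/slaves on the head node -- one worker hostname per line
  worker1
  worker2
  worker3

  # conf/spark-env.sh -- point workers at the master and cap per-node resources
  export SPARK_MASTER_IP=headnode
  export SPARK_WORKER_CORES=8
  export SPARK_WORKER_MEMORY=6g

Capping SPARK_WORKER_MEMORY below the 8GB physical total applies the same
headroom idea as above: the worker never hands out more memory to executors
than the node can spare without swapping.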