Hi,
In a reduce operation I am trying to accumulate a list of SparseVectors.
The code is given below:
val WNode = trainingData.reduce { (node1: Node, node2: Node) =>
  val wNode = new Node(num1, num2)
  wNode.WList ++= node1.WList
  wNode.WList ++= node2.WList
  wNode
}
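For reference, here is a self-contained sketch of the same merge logic in plain Scala (no Spark), with a hypothetical `Node` case class standing in for the real one and maps of (index -> value) pairs standing in for SparseVectors. The key points are that the function passed to `reduce` must be associative and must return the merged node:

```scala
// Hypothetical stand-in for the Node class in the post:
// it accumulates sparse vectors as (index -> value) maps.
case class Node(rows: Int, cols: Int, wList: List[Map[Int, Double]] = Nil)

// Combine function for reduce: merge the accumulated lists of both nodes.
def merge(node1: Node, node2: Node): Node =
  Node(node1.rows, node1.cols, node1.wList ++ node2.wList)

val data = List(
  Node(2, 2, List(Map(0 -> 1.0))),
  Node(2, 2, List(Map(1 -> 2.0))),
  Node(2, 2, List(Map(0 -> 3.0)))
)

// Plain-Scala analogue of trainingData.reduce { ... } in Spark
val combined = data.reduce(merge)
// combined.wList now holds all three sparse vectors
```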
Hi,
I am also facing the same problem. Has anyone found a solution yet?
It just returns a vague set of characters.
Please help.
Exception in thread "main" org.apache.spark.SparkException: Job aborted due
to stage failure: Exception while deserializing and fetching task:
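This is only a guess, but task-deserialization failures like this often come from a result type that is not serializable. A minimal sketch, assuming a hypothetical `Node` class like the one in the earlier post, showing the Java-serialization round-trip that Spark's default serializer performs on task results:

```scala
import java.io._

// Hypothetical sketch of a Node class. Extending Serializable lets Spark
// ship instances between driver and executors; omitting it is a common
// cause of "Exception while deserializing and fetching task".
class Node(val rows: Int, val cols: Int) extends Serializable {
  var wList: List[Map[Int, Double]] = Nil
}

// Round-trip through plain Java serialization, which is what Spark's
// default (non-Kryo) serializer uses for task results.
val original = new Node(2, 3)
original.wList = List(Map(0 -> 1.0))

val buffer = new ByteArrayOutputStream()
new ObjectOutputStream(buffer).writeObject(original)
val restored = new ObjectInputStream(
  new ByteArrayInputStream(buffer.toByteArray)).readObject().asInstanceOf[Node]
```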
Hi,
I am using Spark with Scala to train a distributed SVM. For training I
am using files in the LIBSVM format. I want to partition a file into a fixed
number of partitions, with each partition holding an equal number
of datapoints (assume that the number of datapoints in the file is exactly