Hi devs:
I think it's unnecessary to use c1._1 += c2._1 in the combOp operation; I
think it would be the same if we used c1._1 + c2._1. See the code below:
in GradientDescent.scala
val (gradientSum, lossSum, miniBatchSize) = data
  .sample(false, miniBatchFraction, 42 + i)
  .treeAggregate((BDV.zeros[Double](n), 0.0, 0L))(
    seqOp = (c, v) => {
      // c: (grad, loss, count), v: (label, features)
      // c._1, i.e. grad, will be updated in gradient.compute
      val l = gradient.compute(v._2, v._1, bcWeights.value,
        Vectors.fromBreeze(c._1))
      (c._1, c._2 + l, c._3 + 1)
    },
    combOp = (c1, c2) => {
      // c: (grad, loss, count)
      (c1._1 += c2._1, c1._2 + c2._2, c1._3 + c2._3)
    })
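For what it's worth, both forms produce the same numeric result; the difference is that Breeze's += mutates the left operand in place, while + would allocate a fresh dense vector on every merge, which matters when n (the gradient dimension) is large. A minimal sketch of the distinction using plain arrays (the helper names here are hypothetical, not the Breeze API):

```scala
object CombOpSketch {
  // In-place add: mutates and returns the left operand, like Breeze's +=
  def addInPlace(a: Array[Double], b: Array[Double]): Array[Double] = {
    var i = 0
    while (i < a.length) { a(i) += b(i); i += 1 }
    a
  }

  // Allocating add: builds and returns a fresh array, like Breeze's +
  def addAlloc(a: Array[Double], b: Array[Double]): Array[Double] = {
    val out = new Array[Double](a.length)
    var i = 0
    while (i < a.length) { out(i) = a(i) + b(i); i += 1 }
    out
  }

  def main(args: Array[String]): Unit = {
    val c1 = Array(1.0, 2.0)
    val c2 = Array(3.0, 4.0)
    val r1 = addInPlace(c1, c2)
    println(r1 eq c1)                // prints: true (no new allocation)
    val r2 = addAlloc(Array(1.0, 2.0), c2)
    println(r2.mkString(","))        // prints: 4.0,6.0 (same values, new object)
  }
}
```

So using + instead of += would be functionally equivalent in combOp, but it would add one extra vector allocation per merge in the tree aggregation.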
--
View this message in context:
http://apache-spark-developers-list.1001551.n3.nabble.com/Question-about-spark-mllib-GradientDescent-tp20052.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.