Hi Lingling, I don't think you are properly subscribed to the mailing list yet, so I'm CC'ing the list.
The mllib package is deprecated, and we no longer maintain it. It was designed this way for backward compatibility: in the original design, the updater also contained the step-size logic, which LBFGS does not use. The code documents the math and explains why this works:

/**
 * It will return the gradient part of regularization using updater.
 *
 * Given the input parameters, the updater basically does the following,
 *
 * w' = w - thisIterStepSize * (gradient + regGradient(w))
 * Note that regGradient is a function of w.
 *
 * If we set gradient = 0, thisIterStepSize = 1, then
 *
 * regGradient(w) = w - w'
 *
 * TODO: We need to clean it up by separating the logic of regularization out
 * from updater to regularizer.
 */
// The following gradientTotal is actually the regularization part of the gradient.
// We will add the gradientSum computed from the data with weights in the next step.

Sincerely,

DB Tsai
----------------------------------------------------------
Web: https://www.dbtsai.com
PGP Key ID: 0xAF08DF8D

> On Wed, Aug 24, 2016 at 7:16 AM Lingling Li wrote:
>
> Hi!
>
> Sorry for getting in touch. This is Ling Ling, and I am now reading the
> LBFGS code in Spark:
>
> https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/mllib/optimization/LBFGS.scala
>
> I find that you are one of the contributors, so maybe you can help me
> out here? I appreciate it!
>
> In the CostFun:
>
> val regVal = updater.compute(w, Vectors.zeros(n), 0, 1, regParam)._2
> axpy(-1.0, updater.compute(w, Vectors.zeros(n), 1, 1, regParam)._1,
>   gradientTotal)
>
> Why is the gradient in the updater being set to 0? And why is the stepSize
> 0 and 1, respectively?
>
> Thank you very much for your help!
>
> All the best,
> Ling Ling

---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscr...@spark.apache.org
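To make the trick concrete, here is a minimal self-contained sketch (not Spark's actual classes; `compute` below is a hypothetical stand-in for an L2 updater following the mllib `Updater.compute` contract). It shows that calling the updater with gradient = 0 and an effective step size of 1 makes w - w' equal the regularization gradient, which for L2 is regParam * w:

```scala
object UpdaterTrickDemo {
  // Stand-in for an L2 updater's compute: performs
  //   w' = w - thisIterStepSize * (gradient + regParam * w)
  // and returns (newWeights, regularization value), mirroring the
  // (weights, regVal) pair described in the email above.
  def compute(w: Array[Double], gradient: Array[Double],
              stepSize: Double, iter: Int,
              regParam: Double): (Array[Double], Double) = {
    val thisIterStepSize = stepSize / math.sqrt(iter)
    val newW = w.indices.map { i =>
      w(i) - thisIterStepSize * (gradient(i) + regParam * w(i))
    }.toArray
    val regVal = 0.5 * regParam * newW.map(x => x * x).sum
    (newW, regVal)
  }

  def main(args: Array[String]): Unit = {
    val w = Array(1.0, -2.0, 3.0)
    val zeros = Array.fill(w.length)(0.0)
    val regParam = 0.1

    // Trick: gradient = 0, stepSize = 1, iter = 1 => thisIterStepSize = 1,
    // so w - w' = regGradient(w) = regParam * w for L2.
    val (wPrime, _) = compute(w, zeros, 1.0, 1, regParam)
    val regGradient = w.indices.map(i => w(i) - wPrime(i))
    println(regGradient.mkString(", "))  // recovers regParam * w (up to floating point)
  }
}
```

This is also why the first call in CostFun uses stepSize = 0: with a zero step the weights are unchanged, and only the second return value (the regularization term of the objective) is of interest.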