Github user mengxr commented on a diff in the pull request:

    https://github.com/apache/spark/pull/7080#discussion_r33739057
  
    --- Diff: mllib/src/main/scala/org/apache/spark/ml/classification/LogisticRegression.scala ---
    @@ -534,27 +556,41 @@ private class LogisticCostFun(
               case (aggregator1, aggregator2) => aggregator1.merge(aggregator2)
             })
     
    -    // regVal is the sum of weight squares for L2 regularization
    -    val norm = if (regParamL2 == 0.0) {
    -      0.0
    -    } else if (fitIntercept) {
    -      brzNorm(Vectors.dense(weights.toArray.slice(0, weights.size -1)).toBreeze, 2.0)
    -    } else {
    -      brzNorm(weights, 2.0)
    -    }
    -    val regVal = 0.5 * regParamL2 * norm * norm
    +    val totalGradientArray = logisticAggregator.gradient.toArray
     
    -    val loss = logisticAggregator.loss + regVal
    -    val gradient = logisticAggregator.gradient
    -
    -    if (fitIntercept) {
    -      val wArray = w.toArray.clone()
    -      wArray(wArray.length - 1) = 0.0
    -      axpy(regParamL2, Vectors.dense(wArray), gradient)
    +    // regVal is the sum of weight squares excluding intercept for L2 regularization.
    +    val regVal = if (regParamL2 == 0.0) {
    +      0.0
         } else {
    -      axpy(regParamL2, w, gradient)
    +      var sum = 0.0
    +      w.foreachActive { (index, value) =>
    +        // If `fitIntercept` is true, the last term which is intercept doesn't
    +        // contribute to the regularization.
    +        if (index != numFeatures) {
    +          // The following code will compute the loss of the regularization; also
    +          // the gradient of the regularization, and add back to totalGradientArray.
    +          sum += {
    +            if (standardization) {
    +              totalGradientArray(index) += regParamL2 * value
    +              value * value
    +            } else {
    +              if (featuresStd(index) != 0.0) {
    +                // If `standardization` is false, we still standardize the data
    +                // to improve the rate of convergence; as a result, we have to
    +                // perform this reverse standardization by penalizing each component
    +                // differently to get effectively the same objective function when
    +                // the training dataset is not standardized.
    --- End diff --
    
    We should check R's implementation and discuss whether this is what users expect if they set `standardization` to false.
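
    For context, here is a minimal standalone sketch of the reverse standardization idea described in the diff comment above (the object and method names below are illustrative, not the actual Spark internals). When the optimizer works on standardized features, the coefficient it sees is wStd(j) = w(j) * featuresStd(j), so penalizing wStd(j) with regParamL2 / featuresStd(j)^2 gives the same objective as an ordinary L2 penalty on the coefficients of the original, unscaled features:

        object ReverseStandardizationSketch {

          /** Adds the L2 penalty's gradient into `gradient` and returns the penalty value. */
          def addL2Penalty(
              coefficientsStd: Array[Double], // coefficients in the standardized feature space
              featuresStd: Array[Double],     // per-feature standard deviations
              regParamL2: Double,
              standardization: Boolean,
              gradient: Array[Double]): Double = {
            var sum = 0.0
            var j = 0
            while (j < coefficientsStd.length) {
              val v = coefficientsStd(j)
              if (standardization) {
                // Penalize the standardized coefficients directly.
                gradient(j) += regParamL2 * v
                sum += v * v
              } else if (featuresStd(j) != 0.0) {
                // Undo the standardization inside the penalty: equivalent to an L2
                // penalty on the coefficients of the original (unscaled) features.
                val scaled = v / (featuresStd(j) * featuresStd(j))
                gradient(j) += regParamL2 * scaled
                sum += v * scaled
              }
              j += 1
            }
            0.5 * regParamL2 * sum
          }

          def main(args: Array[String]): Unit = {
            val featuresStd = Array(2.0, 0.5)
            // Coefficients in the original space and their standardized counterparts.
            val w = Array(3.0, -1.0)
            val wStd = w.zip(featuresStd).map { case (wi, si) => wi * si }

            val gradient = Array(0.0, 0.0)
            val penalty = addL2Penalty(wStd, featuresStd, regParamL2 = 0.1,
              standardization = false, gradient = gradient)

            // Reference: the plain L2 penalty on the original coefficients.
            val reference = 0.5 * 0.1 * w.map(wi => wi * wi).sum
            println(s"reverse-standardized penalty = $penalty, plain penalty = $reference")
          }
        }

    Both values print as 0.5, which is the sense in which the branch quoted above gives "effectively the same objective function" whether or not the training data is standardized.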


