GitHub user sethah commented on a diff in the pull request:

    https://github.com/apache/spark/pull/14834#discussion_r78233010
  
    --- Diff: mllib/src/main/scala/org/apache/spark/ml/classification/LogisticRegression.scala ---
    @@ -370,49 +420,102 @@ class LogisticRegression @Since("1.2.0") (
     
             val bcFeaturesStd = instances.context.broadcast(featuresStd)
             val costFun = new LogisticCostFun(instances, numClasses, $(fitIntercept),
    -          $(standardization), bcFeaturesStd, regParamL2, multinomial = false, $(aggregationDepth))
    +          $(standardization), bcFeaturesStd, regParamL2, multinomial = isMultinomial,
    +          $(aggregationDepth))
     
             val optimizer = if ($(elasticNetParam) == 0.0 || $(regParam) == 0.0) {
               new BreezeLBFGS[BDV[Double]]($(maxIter), 10, $(tol))
             } else {
               val standardizationParam = $(standardization)
               def regParamL1Fun = (index: Int) => {
                 // Remove the L1 penalization on the intercept
    -            if (index == numFeatures) {
    +            val isIntercept = $(fitIntercept) && ((index + 1) % numFeaturesPlusIntercept == 0)
    +            if (isIntercept) {
                   0.0
                 } else {
                   if (standardizationParam) {
                     regParamL1
                   } else {
    +                val featureIndex = if ($(fitIntercept)) {
    +                  index % numFeaturesPlusIntercept
    +                } else {
    +                  index % numFeatures
    +                }
                     // If `standardization` is false, we still standardize the data
                     // to improve the rate of convergence; as a result, we have to
                     // perform this reverse standardization by penalizing each component
                     // differently to get effectively the same objective function when
                     // the training dataset is not standardized.
    -                if (featuresStd(index) != 0.0) regParamL1 / featuresStd(index) else 0.0
    +                if (featuresStd(featureIndex) != 0.0) {
    +                  regParamL1 / featuresStd(featureIndex)
    +                } else {
    +                  0.0
    +                }
                   }
                 }
               }
               new BreezeOWLQN[Int, BDV[Double]]($(maxIter), 10, regParamL1Fun, $(tol))
             }
     
             val initialCoefficientsWithIntercept =
    -          Vectors.zeros(if ($(fitIntercept)) numFeatures + 1 else numFeatures)
    -
    -        if (optInitialModel.isDefined && optInitialModel.get.coefficients.size != numFeatures) {
    -          val vecSize = optInitialModel.get.coefficients.size
    -          logWarning(
    -            s"Initial coefficients will be ignored!! As its size $vecSize did not match the " +
    -            s"expected size $numFeatures")
    +          Vectors.zeros(numCoefficientSets * numFeaturesPlusIntercept)
    +
    +        val initialModelIsValid = optInitialModel.exists { model =>
    --- End diff --
    
    TBH, I think `initialModelIsValid` is better. `isValidInitialModel` or `isInitialModelValid` sound like questions (more useful if they were methods that returned an answer), whereas this is a val that already contains the answer. `if (initialModelIsValid)` also reads more naturally than `if (isValidInitialModel)`. That said, it's a private variable, so it's not a big issue. If others feel strongly, I can change it.
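    
    As context for the hunk quoted above, here is a minimal standalone sketch (not the Spark code itself) of how `regParamL1Fun` assigns an L1 penalty to each flattened coefficient index. It assumes, as the modulo arithmetic implies, that each coefficient set occupies a contiguous block of `numFeaturesPlusIntercept` entries with the intercept stored last; the object name `L1PenaltySketch` and the toy values for `featuresStd`, `regParamL1`, and `numCoefficientSets` are made up for illustration.
    
    ```scala
    // Sketch only: reproduce the index arithmetic and per-feature L1 scaling
    // from the diff above with toy values, outside of Spark.
    object L1PenaltySketch {
      def main(args: Array[String]): Unit = {
        val numFeatures = 3
        val fitIntercept = true
        val standardization = false
        val numFeaturesPlusIntercept = if (fitIntercept) numFeatures + 1 else numFeatures
        val numCoefficientSets = 2              // 1 for binomial, K for multinomial
        val featuresStd = Array(0.5, 2.0, 0.0)  // toy per-feature standard deviations
        val regParamL1 = 0.1
    
        def penalty(index: Int): Double = {
          // Intercepts sit in the last slot of each per-class block and are never penalized.
          val isIntercept = fitIntercept && ((index + 1) % numFeaturesPlusIntercept == 0)
          if (isIntercept) {
            0.0
          } else if (standardization) {
            regParamL1
          } else {
            // Map the flat index back to a feature column, then undo the internal
            // standardization by scaling the penalty with 1 / featuresStd.
            val featureIndex =
              if (fitIntercept) index % numFeaturesPlusIntercept else index % numFeatures
            if (featuresStd(featureIndex) != 0.0) regParamL1 / featuresStd(featureIndex) else 0.0
          }
        }
    
        (0 until numCoefficientSets * numFeaturesPlusIntercept).foreach { i =>
          println(f"flat index $i -> class ${i / numFeaturesPlusIntercept}, penalty ${penalty(i)}%.3f")
        }
      }
    }
    ```
    
    Running it prints one line per coefficient slot, with a 0.0 penalty on each intercept position and a 1 / featuresStd scaling on the remaining features since `standardization` is false in this toy setup.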

