Github user dbtsai commented on a diff in the pull request:

    https://github.com/apache/spark/pull/6386#discussion_r30958988
  
    --- Diff: 
mllib/src/main/scala/org/apache/spark/mllib/classification/LogisticRegression.scala
 ---
    @@ -363,4 +370,34 @@ class LogisticRegressionWithLBFGS
          new LogisticRegressionModel(weights, intercept, numFeatures, numOfLinearPredictor + 1)
         }
       }
    +
    +  /**
    +   * Run the algorithm with the configured parameters on an input RDD
    +   * of LabeledPoint entries starting from the initial weights provided.
    +   * If a known updater is used, this calls the ml implementation to
    +   * avoid applying a regularization penalty to the intercept; otherwise
    +   * it defaults to the mllib implementation. If there are more than two
    +   * classes, or feature scaling is disabled, the mllib implementation is
    +   * always used.
    +   */
    +  override def run(input: RDD[LabeledPoint], initialWeights: Vector): LogisticRegressionModel = {
    +    // ml's logistic regression only supports binary classification currently.
    +    if (numOfLinearPredictor == 1 && useFeatureScaling) {
    +      def runWithMlLogisitcRegression(elasticNetParam: Double) = {
    --- End diff --
    
    This will be another feature.
    
    When people train LoR/LiR with multiple regularization lambdas for cross-validation, the training algorithm starts from the largest lambda and returns a model. That model is then used as the initial condition for the second-largest lambda, and the process repeats until all the lambdas have been trained. By using the previous model as the initial weights, convergence is much faster. http://www.jstatsoft.org/v33/i01/paper
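
    The warm-start idea above can be sketched in plain Scala. This is a minimal, self-contained illustration, not Spark code: `trainLogistic` is a stand-in (batch gradient descent) for the real LBFGS-based solver, and all names here are hypothetical.
    
    ```scala
    object WarmStartPath {
      type Vec = Array[Double]
    
      def dot(a: Vec, b: Vec): Double = a.zip(b).map { case (x, y) => x * y }.sum
      def sigmoid(z: Double): Double = 1.0 / (1.0 + math.exp(-z))
    
      // One L2-regularized logistic regression fit via batch gradient descent,
      // started from `init` instead of the zero vector.
      def trainLogistic(data: Seq[(Double, Vec)], lambda: Double, init: Vec,
                        lr: Double = 0.5, iters: Int = 200): Vec = {
        var w = init.clone()
        for (_ <- 0 until iters) {
          val grad = Array.fill(w.length)(0.0)
          for ((label, x) <- data) {
            val err = sigmoid(dot(w, x)) - label
            for (j <- w.indices) grad(j) += err * x(j)
          }
          for (j <- w.indices)
            w(j) -= lr * (grad(j) / data.size + lambda * w(j))
        }
        w
      }
    
      // Train from the largest lambda down, reusing each fitted model as the
      // next initial condition (the glmnet-style regularization path).
      def path(data: Seq[(Double, Vec)], lambdas: Seq[Double],
               dim: Int): Seq[(Double, Vec)] = {
        var w = Array.fill(dim)(0.0)
        lambdas.sortBy(-_).map { lambda =>
          w = trainLogistic(data, lambda, w)
          (lambda, w)
        }
      }
    }
    ```
    
    Each solve after the first begins close to its optimum, which is where the speedup comes from.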
    
    As a result, in order to do so, we need the ability to specify initial weights. Feel free to add a private API to set the weights. If the dimension of the weights differs from that of the data, we can fall back to the default initial condition.
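
    One possible shape for that hook, sketched as a standalone class (the name, the setter, and the fallback rule are all hypothetical; in Spark the setter would be `private[spark]` rather than public):
    
    ```scala
    class LogisticRegressionWithLBFGSSketch(numFeatures: Int) {
      private var userInitialWeights: Option[Array[Double]] = None
    
      // Would be private[spark] in the real code: lets a caller (e.g. a
      // cross-validation loop) seed the optimizer with a previous model.
      def setInitialWeights(weights: Array[Double]): this.type = {
        userInitialWeights = Some(weights)
        this
      }
    
      // Fall back to the default (zeros) when no weights were set or the
      // dimension does not match the data, as suggested above.
      def resolveInitialWeights(): Array[Double] =
        userInitialWeights
          .filter(_.length == numFeatures)
          .getOrElse(Array.fill(numFeatures)(0.0))
    }
    ```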
    
    PS: once this private API is added, we can hook it up with the cross-validation API to train multiple lambdas efficiently. Currently, with multiple lambdas, we train each from scratch without using the information from previous results. There is no JIRA for this yet; you can open one if you are interested.
    


