Github user dbtsai commented on a diff in the pull request:

    https://github.com/apache/spark/pull/702#discussion_r12499273
  
    --- Diff: docs/mllib-optimization.md ---
    @@ -128,10 +128,24 @@ is sampled, i.e. `$|S|=$ miniBatchFraction $\cdot n = 1$`, then the algorithm is
     standard SGD. In that case, the step direction depends on the uniformly random sampling of the
     point.
     
    +### Limited-memory BFGS
    +[Limited-memory BFGS (L-BFGS)](http://en.wikipedia.org/wiki/Limited-memory_BFGS) is an optimization
    +algorithm in the family of quasi-Newton methods that solves optimization problems of the form
    +`$\min_{\wv \in\R^d} \; f(\wv)$`. L-BFGS approximates the objective function locally as a quadratic
    +without evaluating the second partial derivatives of the objective function to construct the
    +Hessian matrix. Instead, the Hessian matrix is approximated from previous gradient evaluations,
    +which avoids the vertical scalability issue (in the number of training features) that arises when
    +the Hessian matrix is computed explicitly, as in Newton's method. As a result, L-BFGS often
    +achieves faster convergence than other first-order optimization methods.
     
    +Since the Hessian is constructed approximately from previous gradient evaluations, the objective
    +function cannot change during the optimization process. As a result, stochastic L-BFGS does not
    +work naively with mini-batch sampling; therefore, we do not provide it until we have a better
    +understanding.
    --- End diff --
    
    I decided to move those messages into the code.
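
    For context, here is a minimal sketch of how the optimizer described in the diff can be invoked, assuming the `LBFGS.runLBFGS` developer API; the data path and parameter values below are illustrative only:

    ```scala
    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.optimization.{LBFGS, LogisticGradient, SquaredL2Updater}
    import org.apache.spark.mllib.util.MLUtils

    // Assumes a running spark-shell, so `sc` is the SparkContext.
    // Load training data in LIBSVM format; the path is illustrative.
    val data = MLUtils.loadLibSVMFile(sc, "data/mllib/sample_libsvm_data.txt")
    val numFeatures = data.take(1)(0).features.size

    // Append a bias (intercept) term of 1.0 to each feature vector.
    val training = data.map(p => (p.label, MLUtils.appendBias(p.features))).cache()

    // Minimize the L2-regularized logistic loss with L-BFGS. Returns the final
    // weights and the loss at each iteration.
    val (weightsWithIntercept, lossHistory) = LBFGS.runLBFGS(
      training,
      new LogisticGradient(),
      new SquaredL2Updater(),
      numCorrections = 10,   // number of corrections kept for the Hessian approximation
      convergenceTol = 1e-4,
      maxNumIterations = 20,
      regParam = 0.1,
      initialWeights = Vectors.dense(new Array[Double](numFeatures + 1)))
    ```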

