Github user sethah commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16139#discussion_r91427507
  
    --- Diff: docs/ml-advanced.md ---
    @@ -59,17 +59,25 @@ Given $n$ weighted observations $(w_i, a_i, b_i)$:
     
     The number of features for each observation is $m$. We use the following weighted least squares formulation:
     `\[   
    -minimize_{x}\frac{1}{2} \sum_{i=1}^n \frac{w_i(a_i^T x -b_i)^2}{\sum_{k=1}^n w_k} + \frac{1}{2}\frac{\lambda}{\delta}\sum_{j=1}^m(\sigma_{j} x_{j})^2
    +\min_{\mathbf{x}}\frac{1}{2} \sum_{i=1}^n \frac{w_i(\mathbf{a}_i^T \mathbf{x} -b_i)^2}{\sum_{k=1}^n w_k} + \frac{\lambda}{\delta}\left[\frac{1}{2}(1 - \alpha)\sum_{j=1}^m(\sigma_j x_j)^2 + \alpha\sum_{j=1}^m |\sigma_j x_j|\right]
     \]`
    -where $\lambda$ is the regularization parameter, $\delta$ is the population standard deviation of the label
    +where $\lambda$ is the regularization parameter, $\alpha$ is the elastic-net mixing parameter, $\delta$ is the population standard deviation of the label
     and $\sigma_j$ is the population standard deviation of the j-th feature column.
     
    -This objective function has an analytic solution and it requires only one pass over the data to collect necessary statistics to solve.
    -Unlike the original dataset which can only be stored in a distributed system,
    -these statistics can be loaded into memory on a single machine if the number of features is relatively small, and then we can solve the objective function through Cholesky factorization on the driver.
    +This objective function requires only one pass over the data to collect the statistics necessary to solve it. For an
    +$n \times m$ data matrix, these statistics require only $O(m^2)$ storage and so can be stored on a single machine when $m$ (the number of features) is
    +relatively small. We can then solve the normal equations on a single machine using local methods like direct Cholesky factorization or iterative optimization programs.
     
    -WeightedLeastSquares only supports L2 regularization and provides options to enable or disable regularization and standardization.
    -In order to make the normal equation approach efficient, WeightedLeastSquares requires that the number of features be no more than 4096. For larger problems, use L-BFGS instead.
    +Spark MLlib currently supports two types of solvers for the normal equations: Cholesky factorization and Quasi-Newton methods (L-BFGS/OWL-QN). Cholesky factorization
    +depends on a positive definite covariance matrix (i.e. columns of the data matrix must be linearly independent) and will fail if this condition is violated. Quasi-Newton methods
    --- End diff ---
    
    Done, thanks!
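
    For context on the new text above: the one-pass statistics it refers to are the usual sufficient statistics for weighted least squares. Ignoring the standardization and regularization terms for brevity, solving reduces to the weighted normal equations

    \[
    \left(A^T W A\right) \mathbf{x} = A^T W \mathbf{b}, \qquad W = \mathrm{diag}(w_1, \ldots, w_n),
    \]

    where $A^T W A$ is an $m \times m$ matrix and $A^T W \mathbf{b}$ is a length-$m$ vector, which is where the $O(m^2)$ storage bound comes from. Cholesky factorizes $A^T W A$ directly, while the Quasi-Newton path iterates on the same objective built from those aggregated statistics.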
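
    As a usage-level sketch (not part of the diff above), the solver selection discussed here is exposed on the estimator API. This assumes a hypothetical `training` DataFrame with "features" and "label" columns, and that with this change the normal-equation path also accepts a nonzero elastic-net mixing parameter:

    ```scala
    import org.apache.spark.ml.regression.LinearRegression

    // "normal" routes training through the normal-equation (WeightedLeastSquares)
    // path discussed above; "l-bfgs" forces the iterative gradient-based solver;
    // "auto" (the default) lets Spark choose.
    val lr = new LinearRegression()
      .setSolver("normal")
      .setRegParam(0.3)          // lambda in the objective above
      .setElasticNetParam(0.5)   // alpha, the elastic-net mixing parameter
      .setStandardization(true)

    // `training` is a hypothetical DataFrame with "features" and "label" columns.
    val model = lr.fit(training)
    println(model.coefficients)
    ```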

