GitHub user yanboliang commented on a diff in the pull request:

    https://github.com/apache/spark/pull/16139#discussion_r91049472
  
    --- Diff: docs/ml-advanced.md ---
    @@ -59,17 +59,22 @@ Given $n$ weighted observations $(w_i, a_i, b_i)$:
     
     The number of features for each observation is $m$. We use the following weighted least squares formulation:
     `\[
    -minimize_{x}\frac{1}{2} \sum_{i=1}^n \frac{w_i(a_i^T x -b_i)^2}{\sum_{k=1}^n w_k} + \frac{1}{2}\frac{\lambda}{\delta}\sum_{j=1}^m(\sigma_{j} x_{j})^2
    +\min_{\mathbf{x}}\frac{1}{2} \sum_{i=1}^n \frac{w_i(\mathbf{a}_i^T \mathbf{x} -b_i)^2}{\sum_{k=1}^n w_k} + \frac{1}{2}\frac{\lambda}{\delta}\sum_{j=1}^m(\sigma_{j} x_{j})^2
     \]`
     where $\lambda$ is the regularization parameter, $\delta$ is the population standard deviation of the label
     and $\sigma_j$ is the population standard deviation of the j-th feature column.
     
    -This objective function has an analytic solution and it requires only one pass over the data to collect necessary statistics to solve.
    -Unlike the original dataset which can only be stored in a distributed system,
    -these statistics can be loaded into memory on a single machine if the number of features is relatively small, and then we can solve the objective function through Cholesky factorization on the driver.
    +This objective function has an analytic solution and it requires only one pass over the data to collect necessary statistics to solve. For an
    +$n \times m$ data matrix, these statistics require only $O(m^2)$ storage and so can be stored on a single machine when $m$ (the number of features) is
    +relatively small. We can then solve the normal equations on a single machine using local methods like direct Cholesky factorization or iterative optimization programs.
     
    -WeightedLeastSquares only supports L2 regularization and provides options to enable or disable regularization and standardization.
    -In order to make the normal equation approach efficient, WeightedLeastSquares requires that the number of features be no more than 4096. For larger problems, use L-BFGS instead.
    +Spark ML currently supports two types of solvers for the normal equations: Cholesky factorization and Quasi-Newton methods (L-BFGS/OWL-QN). Cholesky factorization
    +depends on a positive definite covariance matrix (i.e. columns of the data matrix must be linearly independent) and will fail if this condition is violated. Quasi-Newton methods
    +are still capable of providing a reasonable solution even when the covariance matrix is not positive definite, so the normal equation solver can also fall back to
    +Quasi-Newton methods in this case. This fallback is currently always enabled for the `LinearRegression` estimator.
    +
    +`WeightedLeastSquares` supports L1, L2, and elastic-net regularization and provides options to enable or disable regularization and standardization.
    --- End diff --
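    For context on "normal equations": setting the gradient of the objective above to zero yields $A^T A \mathbf{x} = A^T \mathbf{b}$, where the observation weights and regularization are folded into the $O(m^2)$ statistics $A^T A$ and $A^T \mathbf{b}$. A minimal self-contained sketch of the Cholesky route (illustrative only, not Spark's internal `WeightedLeastSquares` code):

    ```scala
    // Solve the normal equations (A^T A) x = A^T b by Cholesky factorization.
    // The factorization fails exactly when A^T A is not positive definite,
    // which is the condition that triggers the Quasi-Newton fallback above.
    object CholeskySketch {

      // Factor a symmetric positive definite matrix a as L * L^T,
      // returning the lower-triangular factor L.
      def cholesky(a: Array[Array[Double]]): Array[Array[Double]] = {
        val n = a.length
        val l = Array.ofDim[Double](n, n)
        for (i <- 0 until n; j <- 0 to i) {
          val s = (0 until j).map(k => l(i)(k) * l(j)(k)).sum
          if (i == j) {
            val d = a(i)(i) - s
            require(d > 0.0, "matrix is not positive definite")
            l(i)(i) = math.sqrt(d)
          } else {
            l(i)(j) = (a(i)(j) - s) / l(j)(j)
          }
        }
        l
      }

      // Solve (L * L^T) x = b: forward substitution for L y = b,
      // then back substitution for L^T x = y.
      def solve(l: Array[Array[Double]], b: Array[Double]): Array[Double] = {
        val n = b.length
        val y = new Array[Double](n)
        for (i <- 0 until n)
          y(i) = (b(i) - (0 until i).map(j => l(i)(j) * y(j)).sum) / l(i)(i)
        val x = new Array[Double](n)
        for (i <- n - 1 to 0 by -1)
          x(i) = (y(i) - (i + 1 until n).map(j => l(j)(i) * x(j)).sum) / l(i)(i)
        x
      }
    }
    ```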
    
    Would adding the following clarification make this more clear?
    * For L2 or no regularization, the Cholesky solver is the default choice, and it will fall back to the Quasi-Newton solver if the covariance matrix is not positive definite.
    * For L1/elastic-net regularization, the Quasi-Newton solver is the default and only choice.
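
    To make the solver selection concrete, here is a sketch against the public `LinearRegression` API (the parameter values are illustrative only):

    ```scala
    import org.apache.spark.ml.regression.LinearRegression

    // solver = "normal" requests the normal-equation approach discussed above.
    // With L2 or no regularization (elasticNetParam = 0.0), the Cholesky path
    // applies, falling back to Quasi-Newton if the covariance matrix is not
    // positive definite.
    val ridge = new LinearRegression()
      .setSolver("normal")
      .setRegParam(0.1)
      .setElasticNetParam(0.0)

    // Any L1 component (elasticNetParam > 0.0) has no closed-form solution,
    // so the Quasi-Newton solver (OWL-QN) is used unconditionally.
    val lasso = new LinearRegression()
      .setSolver("normal")
      .setRegParam(0.1)
      .setElasticNetParam(1.0)
    ```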

