I have a question about how the default feature standardization in the ML
version of Logistic Regression (Spark 1.6) works.

Specifically, about the following comment in the Spark code:

/**
 * Whether to standardize the training features before fitting the model.
 * The coefficients of models will be always returned on the original scale,
 * so it will be transparent for users. Note that with/without standardization,
 * the models should be always converged to the same solution when no
 * regularization is applied. In R's GLMNET package, the default behavior is
 * true as well.
 * Default is true.
 *
 * @group setParam
 */


Specifically, I am having trouble understanding why the solution should
converge to the same weight values with and without standardization.
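
For context, here is a small numerical check I put together (plain NumPy/SciPy, not Spark code; all names are my own). It fits an unregularized logistic regression twice, once on the raw features and once on features scaled by their standard deviation (scaling only, no centering, which I believe mirrors Spark's default), then maps the scaled-fit coefficients back to the original scale. The intuition being tested: scaling x_j by 1/sigma_j just reparameterizes w_j as w_j * sigma_j, and with no penalty term the loss is invariant under that reparameterization, so both fits should land on the same original-scale solution.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic data with features on very different scales.
n = 500
X = np.column_stack([np.ones(n),                 # intercept column
                     rng.normal(0, 1, n),
                     rng.normal(0, 100, n)])
true_w = np.array([0.5, 1.0, 0.02])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ true_w))).astype(float)

def nll(w, X, y):
    # Unregularized logistic negative log-likelihood.
    z = X @ w
    return np.sum(np.logaddexp(0.0, z) - y * z)

def grad(w, X, y):
    # Analytic gradient: X^T (sigmoid(Xw) - y).
    z = X @ w
    return X.T @ (1.0 / (1.0 + np.exp(-z)) - y)

# Fit on raw features.
w_raw = minimize(nll, np.zeros(3), args=(X, y),
                 jac=grad, method="BFGS").x

# Fit on scaled features (intercept column left alone),
# then map coefficients back to the original scale.
sigma = X.std(axis=0)
sigma[0] = 1.0                                   # don't scale the intercept
Xs = X / sigma
w_std = minimize(nll, np.zeros(3), args=(Xs, y),
                 jac=grad, method="BFGS").x
w_back = w_std / sigma                           # back to original scale

print("max |w_raw - w_back| =", np.max(np.abs(w_raw - w_back)))
```

In my runs the difference is at the level of optimizer tolerance, which matches the claim in the comment; with an L1/L2 penalty added, the two fits would diverge, since the penalty is not invariant under rescaling.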



Thanks !
-- 
Cesar Flores
