[ https://issues.apache.org/jira/browse/SPARK-13490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yanbo Liang updated SPARK-13490:
--------------------------------
    Description: Like SPARK-13132 for LogisticRegression, LinearRegression with 
L1 regularization should also cache the value of the standardization param 
rather than re-fetching it from the ParamMap for every OWLQN iteration.  (was: 
Like SPARK-13132 for LogisticRegression, when LinearRegression is used with L1 
regularization, the inner functor passed to the quasi-Newton optimizer in 
{{org.apache.spark.ml.regression.LinearRegression#train}} makes repeated calls 
to {{$(standardization)}}. This ultimately involves repeated string 
interpolation triggered by {{org.apache.spark.ml.param.Param#hashCode}}. We 
should cache the value of the standardization param rather than re-fetching it 
from the ParamMap for every iteration.)
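
For reference, a minimal, self-contained sketch of the caching pattern (this is 
not the actual {{LinearRegression#train}} code; the {{params}} map and the 
{{l1RegFun*}} names below are stand-ins for the ParamMap lookup behind 
{{$(standardization)}} and for the per-index L1 regularization function handed 
to OWLQN):

{code:scala}
object CacheStandardizationSketch {
  def main(args: Array[String]): Unit = {
    // Stand-in for the ParamMap that $(standardization) reads from.
    val params: Map[String, Any] = Map("standardization" -> true, "regParam" -> 0.1)
    val effectiveL1RegParam = params("regParam").asInstanceOf[Double]
    val featuresStd = Array(1.0, 2.0, 0.5)

    // Before: the closure re-fetches the param on every invocation,
    // i.e. once per feature index per OWLQN iteration.
    val l1RegFunUncached: Int => Double = { index =>
      if (params("standardization").asInstanceOf[Boolean]) effectiveL1RegParam
      else if (featuresStd(index) != 0.0) effectiveL1RegParam / featuresStd(index)
      else 0.0
    }

    // After: fetch the value once and let the closure capture the cached local.
    val standardization = params("standardization").asInstanceOf[Boolean]
    val l1RegFunCached: Int => Double = { index =>
      if (standardization) effectiveL1RegParam
      else if (featuresStd(index) != 0.0) effectiveL1RegParam / featuresStd(index)
      else 0.0
    }

    println(featuresStd.indices.map(l1RegFunUncached))
    println(featuresStd.indices.map(l1RegFunCached))
  }
}
{code}

The cached variant returns the same per-index penalties but avoids one map 
lookup (and, in Spark, the {{Param#hashCode}} string interpolation it triggers) 
per feature index per iteration.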

> ML LinearRegression should cache standardization param value
> ------------------------------------------------------------
>
>                 Key: SPARK-13490
>                 URL: https://issues.apache.org/jira/browse/SPARK-13490
>             Project: Spark
>          Issue Type: Improvement
>          Components: ML
>            Reporter: Yanbo Liang
>            Priority: Minor
>
> Like SPARK-13132 for LogisticRegression, LinearRegression with L1 
> regularization should also cache the value of the standardization param rather 
> than re-fetching it from the ParamMap for every OWLQN iteration.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
