Github user dbtsai commented on the pull request:

    https://github.com/apache/spark/pull/1518#issuecomment-50663418
  
    I tried making the bias term very large so that the intercept weight becomes 
correspondingly small and is effectively not regularized. The result is still 
quite different from R's, and it is very sensitive to the strength of the bias.
    
    Users may re-scale the features to improve the convergence of the 
optimization process, and in order to recover the same coefficients as in the 
unscaled problem, each component has to be penalized differently. Also, users 
may know that certain features are less important and want to penalize them more.
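
    To make the scaling point concrete, here is a small sketch in my own 
notation (the per-feature scaling factors `s_j` are an assumption, not part of 
the patch): if feature `j` is rescaled as `x~_j = s_j x_j`, the coefficients 
giving identical predictions are `w~_j = w_j / s_j`, so a uniform L2 penalty on 
the scaled problem acts non-uniformly on the original coefficients,

```latex
\lambda \sum_j \tilde{w}_j^2 \;=\; \sum_j \frac{\lambda}{s_j^2}\, w_j^2 ,
```

    and recovering the unscaled solution therefore requires a per-component 
penalty $\lambda_j = \lambda s_j^2$ on the scaled problem, since 
$\lambda_j \tilde{w}_j^2 = \lambda w_j^2$.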
    
    As a result, I still want to implement a fully weighted regularizer and 
decouple the adaptive learning rate from the updater. Let's talk through the 
details when we meet tomorrow. Thanks.
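
    To illustrate what I mean by a weighted regularizer, below is a minimal, 
self-contained sketch (my own names, not Spark's actual `Updater` API): 
`regWeights(j)` is a hypothetical per-coefficient penalty factor, e.g. `0.0` 
for the intercept so it is never shrunk.

```scala
// Sketch of a gradient step with a per-component (weighted) L2 penalty.
// regWeights(j) scales the penalty applied to coefficient j; it is an
// assumed extra argument, not something that exists in MLlib today.
object WeightedL2Sketch {
  def step(
      weights: Array[Double],
      gradient: Array[Double],
      stepSize: Double,
      regParam: Double,
      regWeights: Array[Double]): Array[Double] = {
    require(weights.length == gradient.length && weights.length == regWeights.length)
    Array.tabulate(weights.length) { j =>
      // Each coefficient shrinks by its own factor; a uniform updater applies
      // the same regParam to every j, which is what over-penalizes the intercept.
      val shrunk = weights(j) * (1.0 - stepSize * regParam * regWeights(j))
      shrunk - stepSize * gradient(j)
    }
  }
}
```

    With `regWeights` set to all ones this reduces to the usual uniform L2 
update, and the step-size schedule stays outside the function, which is the 
decoupling of the adaptive learning rate I have in mind.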

