[https://issues.apache.org/jira/browse/SPARK-34448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17293017#comment-17293017]

Yakov Kerzhner commented on SPARK-34448:
----------------------------------------

I took a look over the weekend.  It seems good, and it roughly matches what I 
did in my test example, where I centered the features before running the fit.  
Unfortunately, I am not very well versed in Scala, so actually reviewing the 
code is a bit hard.  I appreciate the printouts for the test case in the PR, 
and I now understand why Spark was returning the log(odds) for the intercept:  
dividing a non-centered feature vector by a small standard deviation creates a 
vector with very large entries that looks roughly like a constant vector.  When 
the minimizer computes the gradient, it assigns far more weight to this big 
vector than to the intercept, as the magnitude appears more important than the 
fact that the vector isn't exactly constant.  When the optimizer then moves in 
the direction of the gradient, it finds that the value of the objective 
function actually increased (because the big vector isn't exactly constant), 
and it backtracks several times.  By the time it has backtracked enough to 
actually get a lower value on the objective function, the movement of the 
intercept is nearly 0.  So essentially, the intercept never moves during the 
entire calibration.  This is also why the fit takes so much longer (all the 
backtracking).  Once the features are centered, the intercept's entry in the 
gradient becomes dominant compared to the near-constant vector, and the 
minimizer begins adjusting the intercept and moves it to the correct spot.
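
To illustrate the effect, here is a small standalone sketch (plain NumPy, not 
Spark's actual code; the data and variable names are made up for illustration): 
dividing a non-centered feature by its small standard deviation yields a 
near-constant column with huge entries, and at the usual zero starting point 
the gradient component for that column dwarfs the intercept's.

```python
import numpy as np

# Illustration only -- not Spark's code.  A feature with mean 10 and std 0.01,
# "standardized" by dividing by the std without centering first.
rng = np.random.default_rng(0)
n = 1000
x = rng.normal(loc=10.0, scale=0.01, size=n)
z = x / x.std()                    # entries around 1000: nearly a constant column
y = rng.integers(0, 2, size=n)     # arbitrary binary labels

# Logistic-loss gradient at the zero starting point (coefficient = 0,
# intercept = 0): the per-sample residual is sigmoid(0) - y = 0.5 - y.
resid = 0.5 - y
grad_coef = np.mean(resid * z)     # inflated by z's huge magnitude
grad_intercept = np.mean(resid)    # stays O(1)

# The coefficient component dominates the gradient, so a step along it
# barely moves the intercept at all.
print(abs(grad_coef), abs(grad_intercept))
```

Any step along this gradient is almost entirely in the coefficient direction; 
and since z is not exactly constant, the objective can increase along that 
direction, the line search backtracks, and the already tiny intercept update 
shrinks further.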

> Binary logistic regression incorrectly computes the intercept and 
> coefficients when data is not centered
> --------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-34448
>                 URL: https://issues.apache.org/jira/browse/SPARK-34448
>             Project: Spark
>          Issue Type: Bug
>          Components: ML, MLlib
>    Affects Versions: 2.4.5, 3.0.0
>            Reporter: Yakov Kerzhner
>            Priority: Major
>              Labels: correctness
>
> I have written up a fairly detailed gist that includes code to reproduce the 
> bug, as well as the output of the code and some commentary:
> [https://gist.github.com/ykerzhner/51358780a6a4cc33266515f17bf98a96]
> To summarize: under certain conditions, the minimization that fits a binary 
> logistic regression contains a bug that pulls the intercept value towards the 
> log(odds) of the target data.  This is mathematically only correct when the 
> data comes from distributions with zero means.  In general, this gives 
> incorrect intercept values, and consequently incorrect coefficients as well.
> As I am not so familiar with the Spark code base, I have not been able to 
> find this bug within the Spark code itself.  A hint to this bug is here: 
> [https://github.com/apache/spark/blob/master/mllib/src/main/scala/org/apache/spark/ml/classification/LogisticRegression.scala#L894-L904]
> Based on the code, I don't believe that the features have zero means at this 
> point, so this heuristic is incorrect.  But an incorrect starting point 
> alone does not explain this bug: the minimizer should still drift to the 
> correct place.  I was not able to find the code of the actual objective 
> function that is being minimized.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
