Hi all,

Recently I played around a bit with both the naive and the MLlib Python sample code
for logistic regression.
In short, I wanted to compare the naive and non-naive logistic regression results
using the same input weights and data.
So I modified both samples slightly to use the same initial weights,
and generated a text file containing lines with a label and features separated
by spaces.
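For reference, the format described above could be parsed with a small helper like this (a hypothetical sketch, not code from either sample; the function name and exact format are my assumptions):

```python
# Parse a line of the form "label f1 f2 ..." into a label and a feature
# vector, matching the space-separated format described above.
def parse_point(line):
    values = [float(v) for v in line.split()]
    return values[0], values[1:]

label, features = parse_point("1.0 0.5 -2.3")
assert label == 1.0
assert features == [0.5, -2.3]
```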

After one iteration the computed weights are the same (nice!), but on the
second iteration the computed weights differ (and obviously they differ for the
remaining iterations too).

Could this behaviour be related to the default regularizer and
regularization parameter used by the MLlib implementation of
LogisticRegressionWithSGD? What is the difference between the naive
implementation and the MLlib implementation of logistic regression with
stochastic gradient descent?
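For what it's worth, a regularizer would produce exactly this pattern if the shared initial weights are zero: the L2 gradient term lambda * w vanishes on the first step, so both runs agree after iteration 1 and diverge from iteration 2 on. Here is a toy plain-Python sketch of that effect (not the actual MLlib code; the zero initial weights, step size, and lambda value are my assumptions):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Tiny one-feature dataset: (label, feature) pairs.
data = [(1.0, 2.0), (0.0, -1.0), (1.0, 0.5)]

def step(w, lam, lr=1.0):
    # Full-batch logistic-loss gradient, plus an optional L2 term lam * w.
    grad = sum((sigmoid(w * x) - y) * x for y, x in data) / len(data)
    return w - lr * (grad + lam * w)

w_plain = w_reg = 0.0          # identical zero initial weights
w_plain = step(w_plain, lam=0.0)
w_reg = step(w_reg, lam=0.01)
assert w_plain == w_reg        # iteration 1: lam * 0 == 0, so identical

w_plain = step(w_plain, lam=0.0)
w_reg = step(w_reg, lam=0.01)
assert w_plain != w_reg        # iteration 2: weights are now nonzero, so they diverge
```

If the samples start from nonzero weights, a regularizer would make even the first iteration differ, so this sketch only fits the zero-initial-weights case.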

Thanks

Cedric




--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/About-logistic-regression-sample-codes-in-pyspark-tp21015.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org
