Hi,

Currently in ML, we use the mini-batch gradient descent algorithm when
running logistic regression. However, Spark MLlib recommends L-BFGS over
mini-batch gradient descent for faster convergence [1].

I tested both implementations on the same dataset, and L-BFGS gave
noticeably better accuracy (80% vs. 67% for SGD).
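For anyone curious why L-BFGS tends to win here, below is a small
standalone sketch (plain NumPy/SciPy, not MLlib; the toy dataset, step
size, and iteration budget are all made up for illustration). It runs
fixed-step gradient descent and SciPy's L-BFGS on the same logistic
regression loss with the same iteration budget:

```python
# Illustrative only (not MLlib): compare fixed-step gradient descent
# against L-BFGS on a toy logistic regression, same iteration budget.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                      # toy feature matrix
true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0])      # made-up true weights
y = (1 / (1 + np.exp(-X @ true_w)) > rng.uniform(size=200)).astype(float)

def loss_grad(w):
    """Mean negative log-likelihood of logistic regression and its gradient."""
    p = 1 / (1 + np.exp(-(X @ w)))
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad = X.T @ (p - y) / len(y)
    return loss, grad

# Plain gradient descent: fixed step size, 100 iterations.
w = np.zeros(5)
for _ in range(100):
    _, g = loss_grad(w)
    w -= 0.5 * g
gd_loss, _ = loss_grad(w)

# L-BFGS via SciPy, capped at the same 100 iterations.
res = minimize(loss_grad, np.zeros(5), jac=True,
               method="L-BFGS-B", options={"maxiter": 100})

print(f"GD loss:     {gd_loss:.4f}")
print(f"L-BFGS loss: {res.fun:.4f}")
```

On this toy problem L-BFGS reaches a lower loss within the same budget,
which matches the convergence claim in the MLlib docs.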

Shall we switch?

[1]
https://spark.apache.org/docs/latest/mllib-linear-methods.html#logistic-regression


-- 

Thanks & regards,
Nirmal

Associate Technical Lead - Data Technologies Team, WSO2 Inc.
Mobile: +94715779733
Blog: http://nirmalfdo.blogspot.com/
_______________________________________________
Dev mailing list
Dev@wso2.org
http://wso2.org/cgi-bin/mailman/listinfo/dev
