of gradientDescent as the optimization algorithm. It would be helpful if you could post it.

thanks
--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/MLLib-choosing-the-Loss-function-tp11738p11913.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
From: SK <skrishna...@gmail.com>
To: u...@spark.incubator.apache.org
Sent: Monday, August 11, 2014 11:52:04 AM
Subject: Re: [MLLib]: choosing the Loss function
Hi,

Thanks for the reference to the LBFGS optimizer. I tried to use the LBFGS optimizer, but I am not able to pass it as an input
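For reference, in the 1.x-era mllib API the L-BFGS optimizer is not pluggable into `LogisticRegressionWithSGD`; it is reached either through `LogisticRegressionWithLBFGS` or by calling `LBFGS.runLBFGS` directly, as the MLlib optimization guide shows. A hedged Scala sketch along those lines (the function name and parameter values are illustrative, not from this thread):

```scala
import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
import org.apache.spark.mllib.optimization.{LBFGS, LogisticGradient, SquaredL2Updater}
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.rdd.RDD

// `points` is assumed to be an RDD[LabeledPoint]; this helper is hypothetical.
def trainWithLBFGS(points: RDD[LabeledPoint], numFeatures: Int) = {
  // Option 1: the packaged classifier, which uses L-BFGS internally.
  // val model = new LogisticRegressionWithLBFGS().setNumClasses(2).run(points)

  // Option 2: drive the optimizer directly; the loss is chosen via the
  // Gradient (LogisticGradient, HingeGradient, or LeastSquaresGradient).
  val training = points.map(p => (p.label, p.features))
  LBFGS.runLBFGS(
    training,
    new LogisticGradient(),
    new SquaredL2Updater(),                     // L2 regularization
    10,                                         // numCorrections
    1e-4,                                       // convergenceTol
    100,                                        // maxNumIterations
    0.1,                                        // regParam
    Vectors.dense(new Array[Double](numFeatures)))  // initial weights
}
```

`runLBFGS` returns the final weight vector together with the loss history, which is useful for checking convergence.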
both much faster and more accurate.

Burak
----- Original Message -----
From: SK <skrishna...@gmail.com>
To: u...@spark.incubator.apache.org
Sent: Thursday, August 7, 2014 6:31:14 PM
Subject: [MLLib]: choosing the Loss function
Hi,

According to the MLLib guide, there seems to be support for different loss functions.
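The losses the guide lists for linear methods can be written down compactly; a plain-Scala illustration (not MLlib code — the object and method names are hypothetical), where `margin = y * (w dot x)` with label `y` in {-1, +1}:

```scala
object Losses {
  // Logistic loss, used by logistic regression: log(1 + exp(-margin)).
  // log1p keeps the computation stable for large margins.
  def logisticLoss(margin: Double): Double = math.log1p(math.exp(-margin))

  // Hinge loss, used by linear SVM: max(0, 1 - margin).
  def hingeLoss(margin: Double): Double = math.max(0.0, 1.0 - margin)

  // Squared loss, used by linear regression: (prediction - label)^2 / 2.
  def squaredLoss(prediction: Double, label: Double): Double = {
    val d = prediction - label
    0.5 * d * d
  }
}
```

In MLlib these correspond to the `Gradient` implementations (`LogisticGradient`, `HingeGradient`, `LeastSquaresGradient`) that the optimizers consume.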