Sean,

Thanks a lot for your reply!

A few follow-up questions:
1. numIterations should be 100, not 100*trainingSetSize, right?
2. My training set has 90k positive data points (with label 1) and 60k
negative data points (with label 0).
I left numIterations at the default of 100, but I still get the same prediction
result: every point is predicted as label 1.
And I'm confident my dataset is linearly separable, because the same data trains
fine on other frameworks like scikit-learn.

// code
val numIterations = 100
val regParam = 1
val svm = new SVMWithSGD()
svm.optimizer.setNumIterations(numIterations).setRegParam(regParam)
svm.setIntercept(true)
val model = svm.run(training)
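One thing I was planning to try next, if I'm reading the MLlib API docs right: clear the model's default 0.5 threshold so predict() returns raw margins instead of 0/1 labels, and look at where the scores for each class actually fall. (This is just a sketch; `training` is assumed to be my existing RDD[LabeledPoint], and the smaller regParam is a guess, since regParam = 1 may be shrinking the weights enough to push everything to one side.)

```scala
import org.apache.spark.mllib.classification.SVMWithSGD

val svm = new SVMWithSGD()
// Try a much smaller regParam than 1 -- just a guess on my part.
svm.optimizer.setNumIterations(100).setRegParam(0.01)
svm.setIntercept(true)

val model = svm.run(training)

// After clearThreshold(), predict() returns the raw score w.x + b
// rather than a hard 0/1 label.
model.clearThreshold()

// Inspect the score distribution per class to see whether the
// negatives ever score below zero, and where a sensible threshold
// would sit.
val scoresAndLabels = training.map { p => (model.predict(p.features), p.label) }
scoresAndLabels.take(20).foreach(println)
```

Does that sound like a reasonable way to debug this?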

-----
Thanks!
-Caron
--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/SVMWithSGD-default-threshold-tp18645p18741.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
