The RDD-based org.apache.spark.mllib.clustering.KMeansModel class defines a computeCost method that returns the within-cluster sum of squared errors (WCSS) of the K-Means clustering (https://spark.apache.org/docs/latest/api/scala/org/apache/spark/mllib/clustering/KMeansModel.html). Is there an equivalent of computeCost in the new ml library for K-Means?
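For anyone reading along: computeCost returns the sum, over all points, of the squared Euclidean distance from each point to its nearest cluster center. A minimal plain-Python sketch of that metric (illustrative only, not Spark code; real Spark would distribute this computation):

```python
# Within-cluster sum of squared errors (WCSS), the quantity
# mllib's KMeansModel.computeCost returns. Plain-Python sketch
# for illustration; data and centers below are made up.

def squared_distance(p, c):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((pi - ci) ** 2 for pi, ci in zip(p, c))

def compute_wcss(points, centers):
    """Sum over all points of the squared distance to the nearest center."""
    return sum(min(squared_distance(p, c) for c in centers) for p in points)

points = [(0.0, 0.0), (1.0, 0.0), (9.0, 9.0), (10.0, 9.0)]
centers = [(0.5, 0.0), (9.5, 9.0)]
print(compute_wcss(points, centers))  # each point is 0.5 from its center: 4 * 0.25 = 1.0
```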

Thanks in advance!

-- ND

---------------------------------------------------------------------
To unsubscribe e-mail: user-unsubscr...@spark.apache.org
