[ 
https://issues.apache.org/jira/browse/SPARK-2308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14078940#comment-14078940
 ] 

Xiangrui Meng commented on SPARK-2308:
--------------------------------------

I was looking for a point configuration such that standard k-means won't miss a 
cluster center while mini-batch k-means would. Both of them missed the smallest 
center in the `1000, 100, 10, 1` setting. That is because, although the 
single-point center has a high probability of being selected, it is still small 
compared to the base. k-means|| may help because it tends to select more 
centers than k-means++. Even if that center is selected during initialization, 
is it possible that mini-batch k-means samples no point from that center and 
resets the center? (I just want to understand the implementation better.)

Btw, for the PR: instead of adding a new class, is it possible to expose this 
as a new parameter, `setMiniBatchFraction`, on the current KMeans implementation?

> Add KMeans MiniBatch clustering algorithm to MLlib
> --------------------------------------------------
>
>                 Key: SPARK-2308
>                 URL: https://issues.apache.org/jira/browse/SPARK-2308
>             Project: Spark
>          Issue Type: New Feature
>          Components: MLlib
>            Reporter: RJ Nowling
>            Priority: Minor
>         Attachments: many_small_centers.pdf, uneven_centers.pdf
>
>
> Mini-batch is a version of KMeans that uses a randomly-sampled subset of the 
> data points in each iteration instead of the full set of data points, 
> improving performance (and in some cases, accuracy).  The mini-batch version 
> is compatible with the KMeans|| initialization algorithm currently 
> implemented in MLlib.
> I suggest adding KMeans Mini-batch as an alternative.
> I'd like this to be assigned to me.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
