[ 
https://issues.apache.org/jira/browse/SPARK-6706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14395622#comment-14395622
 ] 

Xi Shen commented on SPARK-6706:
--------------------------------

I know this is more of a user report than a technical report, but I am not 
familiar with the Spark code base, and I am currently busy with my studies. I am 
happy to look deeper into this issue, but it may not happen very soon.

As for your question, **It's not clear whether you know it is stuck or simply 
still executing**: I can confirm it is still executing. I observe that one of my 
CPUs is kept constantly busy by a Java process, and if the *k* value is not very 
large, say 500, the job can finish after a long time.

> kmeans|| hangs for a long time if both k and vector dimension are large
> -----------------------------------------------------------------------
>
>                 Key: SPARK-6706
>                 URL: https://issues.apache.org/jira/browse/SPARK-6706
>             Project: Spark
>          Issue Type: Bug
>          Components: MLlib
>    Affects Versions: 1.2.1, 1.3.0
>         Environment: Windows 64bit, Linux 64bit
>            Reporter: Xi Shen
>            Assignee: Xiangrui Meng
>              Labels: performance
>         Attachments: kmeans-debug.7z
>
>
> When doing k-means clustering with the "kmeans||" algorithm, which is the 
> default one, the algorithm hangs at a "collect" step for a long time.
> Settings:
> - k above 100
> - feature dimension about 360
> - total data size about 100 MB
> The issue was first noticed with Spark 1.2.1. I tested with both local and 
> cluster mode. On Spark 1.3.0, I can also reproduce this issue in local 
> mode. **However, I do not have a 1.3.0 cluster environment to test with.**
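The settings quoted above can be turned into a minimal reproduction sketch. This assumes Spark 1.2.x/1.3.x MLlib on the classpath and uses a hypothetical input path and file format (whitespace-separated doubles, ~360 values per line); `KMeans.train` defaults to the `k-means||` initialization mode described in the issue.

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

object KMeansHangRepro {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("kmeans-parallel-repro"))

    // ~100 MB of ~360-dimensional feature vectors, as described in the issue.
    // "data/features.txt" is a hypothetical path; one vector per line.
    val data = sc.textFile("data/features.txt")
      .map(line => Vectors.dense(line.split("\\s+").map(_.toDouble)))
      .cache()

    // KMeans.train defaults to initializationMode = "k-means||".
    // With k in the hundreds, the initialization's internal "collect"
    // step is where the long stall is observed.
    val model = KMeans.train(data, k = 500, maxIterations = 20)

    println(s"Within-set sum of squared errors: ${model.computeCost(data)}")
    sc.stop()
  }
}
```

Switching `k` down (or, per the issue, waiting long enough) lets the job finish, which is consistent with the job executing slowly rather than deadlocking.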



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
