[GitHub] spark pull request: SPARK-1782: svd for sparse matrix using ARPACK

2014-07-04 Thread yangliuyu
Github user yangliuyu commented on the pull request:

https://github.com/apache/spark/pull/964#issuecomment-48037519
  
@mengxr yes, userSize is 10


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] spark pull request: SPARK-1782: svd for sparse matrix using ARPACK

2014-07-03 Thread yangliuyu
Github user yangliuyu commented on the pull request:

https://github.com/apache/spark/pull/964#issuecomment-47886598
  
@mengxr k is 100; rCond, tol and maxIterations are all defaults, i.e. 1e-9, 
1e-10 and 300. Changing the iteration number from 300 to 200 does not reduce 
the time cost much, only about 10% (tested on another, smaller dataset, 
61794 x 100, k=99, ~110s).

BTW, the RowMatrix multiply improvement patch works well.




[GitHub] spark pull request: SPARK-1782: svd for sparse matrix using ARPACK

2014-07-03 Thread yangliuyu
Github user yangliuyu commented on the pull request:

https://github.com/apache/spark/pull/964#issuecomment-48007755
  
@vrilleup I had missed persisting the RDD[Vector]; after adding it, the time 
cost only drops from 20+s to 10+s, and the subsequent aggregate tasks still 
take more than 10s. In the case of stage 47, scheduler delay and GC take too 
much time. The matrix is 800371 x 10, with 29898284 non-zeros. Our test 
environment only has 16 cores, so I don't know whether performance would 
improve on a larger number of cores.

```scala
val data = input.map { case (sid, uid) =>
  (songId2IndexMap(sid), userId2IndexMap(uid))
}.groupByKey().sortByKey()
  .map { case (sid, uids) =>
    val uidList = uids.toSet.toList.sorted
    val uidSeq = uidList.map(uid => (uid, v))
    Vectors.sparse(userSize, uidSeq)
  }.persist()
val mat = new RowMatrix(data)
val svd = mat.computeSparseSVD(100, computeU = true)
```
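A note on the `toSet.toList.sorted` step above: repeated (song, user) pairs would produce duplicate indices, which make a sparse vector malformed, so the indices have to be deduplicated, and sorting keeps the entries in index order. A small Spark-free sketch of that preprocessing (`toSparseEntries` is a hypothetical helper name):

```scala
// Deduplicate and sort column indices before building a sparse vector;
// repeated (song, user) pairs would otherwise yield duplicate indices.
def toSparseEntries(uids: Iterable[Int], value: Double): List[(Int, Double)] =
  uids.toSet.toList.sorted.map(uid => (uid, value))

val entries = toSparseEntries(List(3, 1, 3, 0), 1.0)
// entries == List((0, 1.0), (1, 1.0), (3, 1.0))
```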


![song_clustering_svd_sparse_vector__-_spark_stages](https://cloud.githubusercontent.com/assets/1361821/3478215/f6162636-0330-11e4-8e5f-a56e36ba874b.png)


![song_clustering_svd_sparse_vector__-_details_for_stage_35](https://cloud.githubusercontent.com/assets/1361821/3478217/206e610a-0331-11e4-86ab-342ee3ce3ed0.png)


![song_clustering_svd_sparse_vector__-_storage](https://cloud.githubusercontent.com/assets/1361821/3478238/db2ca222-0331-11e4-96a7-b7c1c1af284d.png)





[GitHub] spark pull request: SPARK-1782: svd for sparse matrix using ARPACK

2014-07-02 Thread yangliuyu
Github user yangliuyu commented on the pull request:

https://github.com/apache/spark/pull/964#issuecomment-47795986
  
@vrilleup regarding your performance test results on real matrices, what is 
the CPU usage for each executor? We ran SVD on a 205899 x 1000 sparse matrix 
with 1850566 non-zeros (Spark standalone mode, 2 worker instances on one 
machine, one executor per worker, 16 cores total) and it took 18 minutes, not 
including computing the U matrix; CPU usage was about 200%-400% per executor. 
That does not seem like a reasonable time cost, even on a small number of 
executors.
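A back-of-envelope check (my own estimate, not from the thread): each ARPACK iteration applies A^T A implicitly, touching every non-zero roughly twice, so even at the maxIterations cap of 300 the raw arithmetic is only a few billion flops, a couple of seconds on one core. That supports the suspicion that scheduling and serialization overhead, not arithmetic, dominates the 18 minutes:

```scala
// Rough flop count for the 205899 x 1000 run, assuming ~2 flops per
// non-zero for A*v plus ~2 more for A^T*(A*v) in each ARPACK iteration.
val nnz = 1850566L
val iterations = 300L // maxIterations cap; the actual count may be lower
val totalFlops = 4L * nnz * iterations
println(totalFlops) // prints 2220679200, i.e. ~2.2 GFLOPs total
```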

