Github user allwefantasy commented on the pull request:

    https://github.com/apache/spark/pull/1983#issuecomment-55089256
  
    @witgo I looked at your performance test, but it does not mention the number of iterations. How many iterations were run? It finished in just one hour.
    
    I also re-ran a test here on my own dataset:
    
    The cluster resources       60 executors (60 cores, 220g memory)
    The corpus size             240000 documents
    The number of iterations    100
    The number of terms         80000
    The number of topics        500
    alpha                       0.1
    beta                        0.01
    
    A single iteration takes roughly 40-60 minutes, which is extremely slow. The test code is as follows:
    
    
        // Sample 10% of the corpus; each line has the form "docId,pos:count pos:count ..."
        val data = sc.textFile(s"/output/william/spark-lda-data/trainings").sample(false, 0.1)
        val parsedData = data.map { line =>
          val parts = line.split(',')
          // Parse the sparse "pos:count" pairs into a map of term index -> count
          val values = parts(1).split(' ').map { k =>
            val Array(pos, v) = k.split(":")
            (pos.toInt, v.toInt)
          }.toMap[Int, Int]
          // Expand into a dense term-count vector over the full vocabulary
          Document(parts(0).toInt, (0 until wordInfo.value.size).map(k => values.getOrElse(k, 0)).toArray)
        }
        val (topicModel, documents) = org.apache.spark.mllib.clustering.LDA.train(parsedData, wordInfo.value.size, 500, 30, (0.7 * 30).toInt, 0.1, 0.01)
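    For reference, here is a minimal self-contained sketch of the line-parsing step above, isolated from Spark so it can be checked on a single sample line. `parseLine` and the sample input are hypothetical names introduced only for illustration; the vocabulary size stands in for `wordInfo.value.size`:

        object ParseDemo {
          // Parse one "docId,pos:count pos:count ..." line into
          // (docId, dense term-count array of length vocabSize).
          def parseLine(line: String, vocabSize: Int): (Int, Array[Int]) = {
            val parts = line.split(',')
            val counts = parts(1).split(' ').map { k =>
              val Array(pos, v) = k.split(":")
              (pos.toInt, v.toInt)
            }.toMap
            (parts(0).toInt, (0 until vocabSize).map(counts.getOrElse(_, 0)).toArray)
          }

          def main(args: Array[String]): Unit = {
            // Document 7 with term 0 appearing twice and term 3 once, vocabulary of 5 terms
            val (docId, vec) = parseLine("7,0:2 3:1", 5)
            assert(docId == 7)
            assert(vec.sameElements(Array(2, 0, 0, 1, 0)))
            println("ok")
          }
        }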
      


