[ https://issues.apache.org/jira/browse/SPARK-1405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Guoqiang Li updated SPARK-1405:
-------------------------------

    Attachment: performance_comparison.png

Hi everyone. I did a performance comparison of [PR 2388|https://github.com/apache/spark/pull/2388] (which contains a lot of optimizations) and Joey's implementation.

Training data: 253064 documents, 29696335 words, 75496 unique words. The running times for 150 training iterations are listed in the following table.

||Number of topics||[PR 2388|https://github.com/apache/spark/pull/2388]||[Joey's implementation|https://github.com/jegonzal/graphx/blob/LDA/graph/src/main/scala/org/apache/spark/graph/algorithms/TopicModeling.scala]||
|100|43.95|47.98|
|500|68.6|132.9|
|2000|79.75|443|

!performance_comparison.png!


> parallel Latent Dirichlet Allocation (LDA) atop of spark in MLlib
> -----------------------------------------------------------------
>
>                 Key: SPARK-1405
>                 URL: https://issues.apache.org/jira/browse/SPARK-1405
>             Project: Spark
>          Issue Type: New Feature
>          Components: MLlib
>            Reporter: Xusen Yin
>            Assignee: Guoqiang Li
>              Labels: features
>         Attachments: performance_comparison.png
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Latent Dirichlet Allocation (a.k.a. LDA) is a topic model that extracts topics from a text corpus. Unlike the current machine learning algorithms in MLlib, which rely on optimization methods such as gradient descent, LDA uses expectation algorithms such as Gibbs sampling.
> In this PR, I prepare an LDA implementation based on Gibbs sampling, with a wholeTextFiles API (already resolved), a word segmenter (imported from Lucene), and a Gibbs sampling core.
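A note on the Gibbs sampling that the description refers to: below is a minimal, self-contained Scala sketch of the standard per-token collapsed Gibbs update for LDA. This is not the code from PR 2388 or from Joey's implementation; all names here (sampleTopic, docTopicCounts, wordTopicCounts, topicCounts) are illustrative, and alpha/beta are the usual symmetric Dirichlet hyperparameters. Each token's topic is resampled from P(z = k | rest) proportional to (n_{w,k} + beta) / (n_k + V*beta) * (n_{d,k} + alpha).

{code:scala}
import scala.util.Random

// Illustrative sketch only; not the PR 2388 implementation.
object GibbsSketch {
  // Resample the topic of a single (doc, word) token using the
  // collapsed Gibbs conditional for LDA.
  def sampleTopic(
      word: Int,                          // word id of the token
      doc: Int,                           // document id of the token
      oldTopic: Int,                      // current topic assignment
      docTopicCounts: Array[Array[Int]],  // n_{d,k}: tokens of topic k in doc d
      wordTopicCounts: Array[Array[Int]], // n_{w,k}: tokens of word w with topic k
      topicCounts: Array[Int],            // n_k: total tokens assigned to topic k
      alpha: Double,                      // document-topic Dirichlet prior
      beta: Double,                       // topic-word Dirichlet prior
      vocabSize: Int,
      rng: Random): Int = {
    val numTopics = topicCounts.length

    // Remove the token's current assignment from all counts.
    docTopicCounts(doc)(oldTopic) -= 1
    wordTopicCounts(word)(oldTopic) -= 1
    topicCounts(oldTopic) -= 1

    // Unnormalized conditional probability for each topic:
    //   (n_{w,k} + beta) / (n_k + V*beta) * (n_{d,k} + alpha)
    val p = new Array[Double](numTopics)
    var sum = 0.0
    var k = 0
    while (k < numTopics) {
      p(k) = (wordTopicCounts(word)(k) + beta) / (topicCounts(k) + vocabSize * beta) *
        (docTopicCounts(doc)(k) + alpha)
      sum += p(k)
      k += 1
    }

    // Draw a new topic proportional to p.
    var u = rng.nextDouble() * sum
    var newTopic = 0
    while (u > p(newTopic) && newTopic < numTopics - 1) {
      u -= p(newTopic)
      newTopic += 1
    }

    // Add the token back under its new assignment.
    docTopicCounts(doc)(newTopic) += 1
    wordTopicCounts(word)(newTopic) += 1
    topicCounts(newTopic) += 1
    newTopic
  }
}
{code}

One full iteration sweeps this update over every token in the corpus; the count arrays double as sufficient statistics from which the topic-word and document-topic distributions are estimated after the chain mixes.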