[
https://issues.apache.org/jira/browse/SPARK-1405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14151017#comment-14151017
]
Guoqiang Li edited comment on SPARK-1405 at 9/28/14 12:05 PM:
--------------------------------------------------------------
Hi everyone.
I did a performance comparison of PR 2388 (which contains a lot of optimizations)
and Joey's implementation.
Training data: 253,064 documents, 29,696,335 words, 75,496 unique words.
All tests were run on exactly the same 4-node cluster with 36 executors (36
cores, 216 GB memory).
Training ran for 150 iterations; the running times are shown in the table below.
||Number of topics||[PR 2388|https://github.com/apache/spark/pull/2388]||[Joey's implementation|https://github.com/jegonzal/graphx/blob/LDA/graph/src/main/scala/org/apache/spark/graph/algorithms/TopicModeling.scala]||
|100|43.95|47.98|
|500|68.6|132.9|
|2000|79.75|443|
!performance_comparison.png!
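For anyone reproducing these numbers, here is a minimal sketch of how such wall-clock measurements could be taken around each run. Note that {{trainLDA}} is a hypothetical placeholder for whichever trainer is under test; it is not the actual API of PR 2388 or of Joey's TopicModeling code.
{code:scala}
object BenchSketch {
  // `trainLDA` is a by-name placeholder for the trainer under test;
  // neither PR 2388 nor Joey's TopicModeling is invoked here.
  def timeTraining(label: String)(trainLDA: => Unit): Double = {
    val start = System.nanoTime()
    trainLDA                                  // e.g. 150 sampling iterations
    val elapsed = (System.nanoTime() - start) / 1e9
    println(f"$label%s: $elapsed%.2f s")
    elapsed
  }
}
{code}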
> parallel Latent Dirichlet Allocation (LDA) atop of spark in MLlib
> -----------------------------------------------------------------
>
> Key: SPARK-1405
> URL: https://issues.apache.org/jira/browse/SPARK-1405
> Project: Spark
> Issue Type: New Feature
> Components: MLlib
> Reporter: Xusen Yin
> Assignee: Guoqiang Li
> Labels: features
> Attachments: performance_comparison.png
>
> Original Estimate: 336h
> Remaining Estimate: 336h
>
> Latent Dirichlet Allocation (a.k.a. LDA) is a topic model that extracts
> topics from a text corpus. Unlike the current machine learning algorithms
> in MLlib, which rely on optimization algorithms such as gradient descent,
> LDA uses approximate inference algorithms such as Gibbs sampling.
> In this PR, I prepare an LDA implementation based on Gibbs sampling, with a
> wholeTextFiles API (already solved), a word segmenter (imported from Lucene),
> and a Gibbs sampling core.
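To make the Gibbs-sampling approach concrete, below is a minimal single-machine collapsed Gibbs sampler for LDA in plain Scala, on toy data. This is an illustrative sketch only; all names are hypothetical and unrelated to the PR's actual code, which runs distributed on Spark/GraphX.
{code:scala}
import scala.util.Random

// Minimal single-machine collapsed Gibbs sampler for LDA (illustration only).
object LdaGibbsSketch {
  def main(args: Array[String]): Unit = {
    // Toy corpus: each document is a sequence of word ids in [0, vocabSize).
    val docs = Array(
      Array(0, 1, 2, 0, 1),
      Array(3, 4, 5, 3, 4),
      Array(0, 2, 4, 5, 1))
    val vocabSize = 6
    val numTopics = 2
    val alpha = 0.1   // document-topic smoothing prior
    val beta  = 0.01  // topic-word smoothing prior
    val rng = new Random(42)

    // Sufficient statistics maintained by the sampler.
    val docTopic   = Array.ofDim[Int](docs.length, numTopics) // n(d,k)
    val topicWord  = Array.ofDim[Int](numTopics, vocabSize)   // n(k,w)
    val topicTotal = new Array[Int](numTopics)                // n(k)
    // Random initial topic assignment z for every token, counted in.
    val z = docs.map(_.map(_ => rng.nextInt(numTopics)))
    for (d <- docs.indices; i <- docs(d).indices) {
      docTopic(d)(z(d)(i)) += 1
      topicWord(z(d)(i))(docs(d)(i)) += 1
      topicTotal(z(d)(i)) += 1
    }

    // 150 sweeps, matching the iteration count used in the benchmark above.
    for (_ <- 1 to 150; d <- docs.indices; i <- docs(d).indices) {
      val w   = docs(d)(i)
      val old = z(d)(i)
      // Remove this token's current assignment from the counts...
      docTopic(d)(old) -= 1; topicWord(old)(w) -= 1; topicTotal(old) -= 1
      // ...and resample: p(k) proportional to
      // (n(d,k)+alpha) * (n(k,w)+beta) / (n(k)+V*beta).
      val p = Array.tabulate(numTopics) { k =>
        (docTopic(d)(k) + alpha) * (topicWord(k)(w) + beta) /
          (topicTotal(k) + vocabSize * beta)
      }
      var u = rng.nextDouble() * p.sum
      var k = 0
      while (k < numTopics - 1 && u > p(k)) { u -= p(k); k += 1 }
      z(d)(i) = k
      docTopic(d)(k) += 1; topicWord(k)(w) += 1; topicTotal(k) += 1
    }

    // Print the most probable words of each learned topic.
    for (k <- 0 until numTopics) {
      val top = topicWord(k).zipWithIndex.sortBy(-_._1).take(3).map(_._2)
      println(s"topic $k: top word ids = ${top.mkString(", ")}")
    }
  }
}
{code}
The GraphX implementations referenced above distribute exactly these count updates by representing the corpus as a bipartite document-word graph, but the per-token resampling rule is the same.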
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]