[ https://issues.apache.org/jira/browse/SPARK-1405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073013#comment-14073013 ]

lukovnikov edited comment on SPARK-1405 at 7/24/14 9:10 AM:
------------------------------------------------------------

@Isaac, I think it's at 
https://github.com/yinxusen/spark/blob/lda/mllib/src/main/scala/org/apache/spark/mllib/clustering/LDA.scala
and the rest of the changed files are in the pull request 
(https://github.com/apache/spark/pull/476/files).


was (Author: lukovnikov):
@Isaac, I think it's at 
https://github.com/yinxusen/spark/blob/lda/mllib/src/main/scala/org/apache/spark/mllib/clustering/LDA.scala

> parallel Latent Dirichlet Allocation (LDA) atop of spark in MLlib
> -----------------------------------------------------------------
>
>                 Key: SPARK-1405
>                 URL: https://issues.apache.org/jira/browse/SPARK-1405
>             Project: Spark
>          Issue Type: Improvement
>          Components: MLlib
>    Affects Versions: 1.1.0
>            Reporter: Xusen Yin
>            Assignee: Xusen Yin
>              Labels: features
>             Fix For: 0.9.0
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> Latent Dirichlet Allocation (a.k.a. LDA) is a topic model that extracts 
> topics from a text corpus. Unlike the current machine learning algorithms 
> in MLlib, which rely on optimization algorithms such as gradient descent, 
> LDA uses inference algorithms such as Gibbs sampling. 
> In this PR, I prepare an LDA implementation based on Gibbs sampling, with a 
> wholeTextFiles API (solved yet), a word segmentation tool (imported from 
> Lucene), and a Gibbs sampling core.
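For readers following the thread: the quoted description mentions a "Gibbs sampling core" but does not spell out the algorithm. Below is a minimal, self-contained sketch of collapsed Gibbs sampling for LDA, which is the standard technique of that name; it is not taken from the linked PR, and all names (`LdaGibbsSketch`, `run`, parameter names) are illustrative assumptions, not the PR's API.

```scala
import scala.util.Random

// Illustrative sketch of collapsed Gibbs sampling for LDA.
// NOT the code from the PR; names and structure are hypothetical.
object LdaGibbsSketch {
  // docs: each document is an array of word ids in [0, vocabSize)
  // returns the final topic assignment for every token
  def run(docs: Array[Array[Int]], vocabSize: Int, numTopics: Int,
          alpha: Double, beta: Double, iterations: Int,
          seed: Long = 42L): Array[Array[Int]] = {
    val rng = new Random(seed)
    // random initial topic for every token
    val z = docs.map(d => Array.fill(d.length)(rng.nextInt(numTopics)))
    val docTopic   = Array.ofDim[Int](docs.length, numTopics) // n_{d,k}
    val topicWord  = Array.ofDim[Int](numTopics, vocabSize)   // n_{k,w}
    val topicTotal = new Array[Int](numTopics)                // n_k
    for (d <- docs.indices; i <- docs(d).indices) {
      val k = z(d)(i); val w = docs(d)(i)
      docTopic(d)(k) += 1; topicWord(k)(w) += 1; topicTotal(k) += 1
    }
    for (_ <- 0 until iterations; d <- docs.indices; i <- docs(d).indices) {
      val w = docs(d)(i); val old = z(d)(i)
      // remove the token's current assignment from the counts
      docTopic(d)(old) -= 1; topicWord(old)(w) -= 1; topicTotal(old) -= 1
      // unnormalized full conditional:
      // p(z = k | rest) ∝ (n_{d,k} + α) (n_{k,w} + β) / (n_k + Vβ)
      val p = Array.tabulate(numTopics) { k =>
        (docTopic(d)(k) + alpha) * (topicWord(k)(w) + beta) /
          (topicTotal(k) + vocabSize * beta)
      }
      // draw a new topic proportionally to p
      val u = rng.nextDouble() * p.sum
      var acc = 0.0; var j = 0; var newK = numTopics - 1; var done = false
      while (j < numTopics && !done) {
        acc += p(j)
        if (u <= acc) { newK = j; done = true }
        j += 1
      }
      z(d)(i) = newK
      docTopic(d)(newK) += 1; topicWord(newK)(w) += 1; topicTotal(newK) += 1
    }
    z
  }
}
```

A parallel version on Spark (as the PR proposes) would partition documents across workers and periodically merge the topic-word counts; the sketch above only shows the sequential sampling step.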



--
This message was sent by Atlassian JIRA
(v6.2#6252)
