I'm struggling with implementing a few algorithms with Spark and hope to get
some help from the community.

Most machine learning algorithms today are "sequential", while Spark is all
about parallelism.  It seems to me that using Spark doesn't actually help
much, because in most cases you can't really parallelize a sequential
algorithm.
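
For concreteness, here is a minimal sketch of the kind of algorithm I mean
(plain Scala, no Spark, toy made-up numbers): stochastic gradient descent,
where every update depends on the one produced by the previous step, so the
loop itself can't simply be split across workers.

    // a minimal, self-contained sketch of a "sequential" algorithm: plain SGD
    object SequentialSgdExample {
      def main(args: Array[String]): Unit = {
        // toy 1-D linear regression data, roughly y = 2*x (made-up numbers)
        val data = Seq((1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1))
        val lr = 0.05
        var w = 0.0
        // each per-example update reads the weight written by the previous update
        for (_ <- 1 to 100; (x, y) <- data) {
          val grad = (w * x - y) * x   // gradient of squared error for this example
          w -= lr * grad               // depends on the result of the prior step
        }
        println(s"learned weight: $w") // ends up near 2.0
      }
    }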

There must be strong reasons why MLlib was created and why so many people
claim Spark is ideal for machine learning.

What are those reasons?

What are some specific examples of when and how to use Spark to implement
"sequential" machine learning algorithms?

Any comments/feedback/answers are much appreciated.

Thanks!


