[ https://issues.apache.org/jira/browse/SPARK-1359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15219190#comment-15219190 ]

Yu Ishikawa commented on SPARK-1359:
------------------------------------

[~mbaddar] Since the current ANN in MLlib depends on `GradientDescent`, we
should improve its efficiency.
How do we evaluate a new implementation against the current one? And which
tasks are best suited for the evaluation?
- Metrics
1. Convergence efficiency
2. Compute cost
3. Compute time
4. Other
- Tasks
1. Logistic regression and linear regression with randomly generated data (a rough sketch follows below)
2. Logistic regression and linear regression with any Kaggle data
3. Other
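
For example, task 1 could be run against the existing mini-batch SGD roughly like this. This is only a sketch: it assumes a live SparkContext `sc` (e.g. from spark-shell) and MLlib's built-in data generator, and the new implementation would be trained on the same cached RDD so the two (time, accuracy) pairs can be compared.

{code}
import org.apache.spark.mllib.classification.LogisticRegressionWithSGD
import org.apache.spark.mllib.util.LogisticRegressionDataGenerator

// Generate random logistic regression data:
// (sc, numExamples, numFeatures, label noise eps, numPartitions)
val data = LogisticRegressionDataGenerator
  .generateLogisticRDD(sc, 100000, 100, 1.0, 8)
  .cache()
data.count()  // materialize the RDD before timing

// Baseline: the current mini-batch SGD in MLlib.
val start = System.nanoTime()
val model = LogisticRegressionWithSGD.train(data, 100)
val elapsedSec = (System.nanoTime() - start) / 1e9

// Convergence proxy: training accuracy after a fixed number of iterations.
val accuracy = data
  .map(p => if (model.predict(p.features) == p.label) 1.0 else 0.0)
  .mean()
println(s"time=$elapsedSec s, accuracy=$accuracy")
{code}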

I have made an implementation of parallelized stochastic gradient descent:
https://github.com/yu-iskw/spark-parallelized-sgd
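
For reference, the general idea behind parallelized SGD (as in Zinkevich et al., 2010) is to run SGD locally on each partition and then average the resulting weight vectors. The following is a minimal sketch of that scheme only, not the code in the repository above; the squared-loss gradient is used purely for illustration.

{code}
import org.apache.spark.rdd.RDD

// Run SGD locally on each partition, then average the per-partition weights.
def parallelSGD(data: RDD[(Double, Array[Double])],
                numFeatures: Int,
                numEpochs: Int,
                stepSize: Double): Array[Double] = {
  val perPartition: RDD[Array[Double]] = data.mapPartitions { iter =>
    val examples = iter.toArray
    val w = Array.fill(numFeatures)(0.0)
    for (_ <- 1 to numEpochs; (label, features) <- examples) {
      val pred = (0 until numFeatures).map(j => w(j) * features(j)).sum
      val err = pred - label
      var j = 0
      while (j < numFeatures) {
        w(j) -= stepSize * err * features(j)  // squared-loss gradient step
        j += 1
      }
    }
    Iterator.single(w)
  }
  // Average the local models into a single weight vector.
  val (sum, count) = perPartition
    .map(w => (w, 1L))
    .reduce { case ((w1, c1), (w2, c2)) =>
      (w1.zip(w2).map { case (a, b) => a + b }, c1 + c2)
    }
  sum.map(_ / count)
}
{code}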

> SGD implementation is not efficient
> -----------------------------------
>
>                 Key: SPARK-1359
>                 URL: https://issues.apache.org/jira/browse/SPARK-1359
>             Project: Spark
>          Issue Type: Improvement
>          Components: MLlib
>    Affects Versions: 0.9.0, 1.0.0
>            Reporter: Xiangrui Meng
>
> The SGD implementation samples a mini-batch to compute the stochastic 
> gradient. This is not efficient because examples are provided via an iterator 
> interface. We have to scan all of them to obtain a sample.
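
To make the quoted point concrete, the per-iteration pattern described above looks roughly like this (names are illustrative, not the actual `GradientDescent` code): every iteration draws a fresh mini batch with RDD.sample, and Bernoulli sampling walks the iterator of every partition, so drawing the sample alone costs a full pass over the data.

{code}
import org.apache.spark.rdd.RDD

def miniBatchSGD(data: RDD[(Double, Array[Double])],
                 numIterations: Int,
                 miniBatchFraction: Double): Unit = {
  for (i <- 1 to numIterations) {
    // Sampling scans every partition's iterator just to draw the mini batch.
    val batch = data.sample(withReplacement = false, fraction = miniBatchFraction, seed = 42 + i)
    // ... compute the stochastic gradient on `batch` and update the weights ...
    batch.count()  // placeholder action; a real step would aggregate gradients here
  }
}
{code}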


