[ 
https://issues.apache.org/jira/browse/SPARK-1270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14156964#comment-14156964
 ] 

Peng Cheng commented on SPARK-1270:
-----------------------------------

Yo, any follow-up story on this one?
I'm curious about the local-update part, since DistBelief keeps its model on 
non-local parameter-server shards.

> An optimized gradient descent implementation
> --------------------------------------------
>
>                 Key: SPARK-1270
>                 URL: https://issues.apache.org/jira/browse/SPARK-1270
>             Project: Spark
>          Issue Type: Improvement
>    Affects Versions: 1.0.0
>            Reporter: Xusen Yin
>              Labels: GradientDescent, MLlib
>             Fix For: 1.0.0
>
>
> The current implementation of GradientDescent is inefficient in some 
> respects, especially over high-latency networks. I propose a new 
> implementation, GradientDescentWithLocalUpdate, which follows a parallelism 
> model inspired by Jeff Dean's DistBelief and Eric Xing's SSP (Stale 
> Synchronous Parallel). With a few modifications to runMiniBatchSGD, 
> GradientDescentWithLocalUpdate outperforms the original sequential version 
> by about 4x without sacrificing accuracy, and can easily be adopted by most 
> classification and regression algorithms in MLlib.
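The local-update scheme described in the quoted proposal can be sketched as 
follows. This is a hypothetical, self-contained illustration (plain Python, 
not the actual MLlib patch): each partition takes several SGD steps on its 
local data before the driver averages the per-partition weights, so one 
network round of communication amortizes many gradient steps.

```python
import random

def grad(w, x, y):
    # Gradient of the squared loss 0.5*(w*x - y)^2 for a 1-D linear model.
    return (w * x - y) * x

def local_update_round(w, partitions, local_steps, lr):
    # One communication round: every "worker" refines a private copy of w
    # with several local SGD passes, then the driver averages the copies.
    updated = []
    for part in partitions:
        w_local = w
        for _ in range(local_steps):
            for x, y in part:
                w_local -= lr * grad(w_local, x, y)
        updated.append(w_local)
    return sum(updated) / len(updated)

random.seed(0)
# Synthetic data with true weight 3.0, split across 4 partitions.
data = [(x, 3.0 * x) for x in (random.uniform(-1, 1) for _ in range(400))]
partitions = [data[i::4] for i in range(4)]

w = 0.0
for _ in range(5):  # five communication rounds
    w = local_update_round(w, partitions, local_steps=3, lr=0.1)
print(round(w, 2))  # converges to the true weight, 3.0
```

Plain mini-batch SGD would instead ship one averaged gradient per network 
round; with high latency, letting workers take `local_steps` steps between 
rounds trades a little staleness for far fewer synchronizations, which is the 
intuition behind both DistBelief's model shards and SSP's bounded staleness.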



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
