[ https://issues.apache.org/jira/browse/HAMA-675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Edward J. Yoon resolved HAMA-675.
---------------------------------
    Resolution: Duplicate
      Assignee: Edward J. Yoon

I'm closing this issue as a Duplicate. This will be addressed in HAMA-961.

I'm thinking of using multi-threading or the LocalBSPJobRunner for mini-batch 
training, together with http://parameterserver.org/. See 
https://docs.google.com/drawings/d/1cjz50sGbpnFp2oab30cZ5MNYsaD3PtaBRVsUWuLiglI/edit?usp=sharing

Regarding the interface, a gradient() computation method and fetch/push methods 
that communicate with the parameter server will be needed. Designing the 
interface is not a big deal, I think.
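
As a rough sketch only (all of these names are hypothetical; nothing here 
exists in Hama yet), the worker-side interface could look roughly like this, 
where each BSP task (or local thread under LocalBSPJobRunner) loops over 
mini-batches: fetch the latest weights, compute the gradient, push it back.

// Hypothetical sketch; none of these names exist in Hama yet.
public interface ParameterServerTrainer {

  // Pull the current weights of one layer from the parameter server.
  double[] fetch(int layerIndex);

  // Push the locally computed gradient for one layer to the parameter server.
  void push(int layerIndex, double[] gradient);

  // Compute the gradient of the loss over one mini-batch, given the
  // weights just fetched from the parameter server.
  double[] gradient(double[] weights, double[][] miniBatch);
}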

> Deep Learning Computation Model
> -------------------------------
>
>                 Key: HAMA-675
>                 URL: https://issues.apache.org/jira/browse/HAMA-675
>             Project: Hama
>          Issue Type: New Feature
>          Components: machine learning
>            Reporter: Thomas Jungblut
>            Assignee: Edward J. Yoon
>
> Jeff Dean mentioned a computational model in this video: 
> http://techtalks.tv/talks/57639/
> There they use the same idea as the Pregel system: they define an upstream 
> and a downstream computation function for a neuron (for the cost and its 
> gradient). Then you can roughly tell how the framework should partition 
> the neurons.
> All the messaging will be handled by the underlying messaging framework.
> Can we implement something similar?
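
For reference, a minimal sketch of such a per-neuron upstream/downstream API 
(the class and method names below are purely hypothetical, not an existing 
Hama interface) might look like:

// Hypothetical sketch of a Pregel-style neuron; not an existing Hama API.
public abstract class Neuron {

  // Forward (upstream) pass: combine incoming activation messages from
  // connected neurons into this neuron's output.
  public abstract double upstream(Iterable<Double> inputMessages);

  // Backward (downstream) pass: turn incoming error terms into a local
  // gradient that the framework propagates to connected neurons.
  public abstract double downstream(Iterable<Double> errorMessages);
}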



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
