On Tue, Mar 21, 2017 at 11:35:06AM +0800, 孟憲妤 wrote:
> Hi,
> 
> Since we last discussed memory partitioning for SGD, I did some
> literature review on single-machine and multi-machine parallel
> machine-learning approaches. It seems that GPU-based learning is the
> dominant form of parallelism among single-machine approaches. I found
> this allreduce
> <https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/IS140694.pdf>
> approach quite interesting. It implements data-parallel distributed
> SGD and supports parameter averaging, gradient aggregation, and
> iteration. I'm interested in participating in the SGD project. Would
> implementing this scheme be appropriate for a GSoC project?

Hi Hsienyu,

There's no good abstraction for GPUs in mlpack at this time, so that
approach might be difficult to integrate.  Ideally mlpack should
provide a clean and consistent interface, and including GPU code would
probably break that.

But there is a "secret project" in the works for a GPU-based Armadillo
library, so that may solve these issues. :)  (However, it may be some
time until that is ready; it certainly won't be available in time for
Summer of Code.)
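
In any case, to make the allreduce idea concrete: in that scheme, each
worker computes a gradient on its own shard of the data, the gradients
are summed across all workers, and every worker applies the same
averaged update, so the parameters stay in sync.  A rough sketch of one
step, assuming MPI and a hypothetical ComputeGradient() helper (neither
of which is part of mlpack):

    #include <mpi.h>
    #include <armadillo>

    // Assumed helper: gradient of the local shard's loss at `params`.
    arma::vec ComputeGradient(const arma::vec& params);

    void DistributedSGDStep(arma::vec& params, const double stepSize)
    {
      // Each worker computes the gradient on its local data shard.
      arma::vec localGrad = ComputeGradient(params);
      arma::vec totalGrad(localGrad.n_elem);

      // Sum the gradients across all workers; every worker receives
      // the aggregated result.
      MPI_Allreduce(localGrad.memptr(), totalGrad.memptr(),
                    (int) localGrad.n_elem, MPI_DOUBLE, MPI_SUM,
                    MPI_COMM_WORLD);

      int worldSize;
      MPI_Comm_size(MPI_COMM_WORLD, &worldSize);

      // Apply the averaged gradient; since every worker applies the
      // same update, the parameters remain identical across workers.
      params -= stepSize * (totalGrad / worldSize);
    }

Parameter averaging works the same way, except the allreduce is done on
the parameters themselves after some number of local updates.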

Thanks,

Ryan

-- 
Ryan Curtin    | "Happy premise #2: There is no giant foot trying
r...@ratml.org | to squash me." - Kit Ramsey