[ 
https://issues.apache.org/jira/browse/SYSTEMML-2083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16460129#comment-16460129
 ] 

Janardhan commented on SYSTEMML-2083:
-------------------------------------

Hi GSoC applicants, unfortunately the project quota for each Apache project is 
limited, so only a handful of proposals have been selected. I hope this doesn't 
discourage you. This community is always willing to help you out as best it can. 
If you don't know where to get started, just say "hi" to us with something about 
yourself at mail: [d...@systemml.apache.org|mailto:d...@systemml.apache.org] . 

Apache SystemML is a project with many areas open for improvement. Its main 
components are:
 # Compiler (folder link: 
[https://github.com/apache/systemml/tree/master/src/main/java/org/apache/sysml] 
)
 # ML algorithms (folder link: 
[https://github.com/apache/systemml/tree/master/scripts/algorithms] )
 # Backends: GPU (folder link: 
[https://github.com/apache/systemml/tree/master/src/main/cpp] )
 # Python API (folder link: 
[https://github.com/apache/systemml/tree/master/src/main/python] )

Depending on your interest, you can dive into any of these parts. To get 
started, you can ask for a very small bug to work on.

Thank you.

> Language and runtime for parameter servers
> ------------------------------------------
>
>                 Key: SYSTEMML-2083
>                 URL: https://issues.apache.org/jira/browse/SYSTEMML-2083
>             Project: SystemML
>          Issue Type: Epic
>            Reporter: Matthias Boehm
>            Priority: Major
>              Labels: gsoc2018
>         Attachments: image-2018-02-14-12-18-48-932.png, 
> image-2018-02-14-12-21-00-932.png, image-2018-02-14-12-31-37-563.png
>
>
> SystemML already provides a rich set of execution strategies ranging from 
> local operations to large-scale computation on MapReduce or Spark. In this 
> context, we support both data-parallel (multi-threaded or distributed 
> operations) as well as task-parallel computation (multi-threaded or 
> distributed parfor loops). This epic aims to complement the existing 
> execution strategies by language and runtime primitives for parameter 
> servers, i.e., model-parallel execution. We use the terminology of 
> model-parallel execution with distributed data and distributed model to 
> differentiate them from the existing data-parallel operations. Target 
> applications are distributed deep learning and mini-batch algorithms in 
> general. These new abstractions will help make SystemML a unified framework 
> for small- and large-scale machine learning that supports all three major 
> execution strategies in a single framework.
>  
> A major challenge is the integration of stateful parameter servers and their 
> common push/pull primitives into an otherwise functional (and thus, 
> stateless) language. We will approach this challenge via a new builtin 
> function {{paramserv}} which internally maintains state but at the same time 
> fits into the runtime framework of stateless operations.
> Furthermore, we are interested in providing (1) different runtime backends 
> (local and distributed), (2) different parameter server modes (synchronous, 
> asynchronous, hogwild!, stale-synchronous), (3) different update frequencies 
> (batch, multi-batch, epoch), as well as (4) different architectures for 
> distributed data (1 parameter server, k workers) and distributed model (k1 
> parameter servers, k2 workers). 
>  
> *Note for GSoC students:* This is a large project which will be broken down 
> into subprojects, so everybody will have their share of the pie.
> *Prerequisites:* Java; machine learning experience is a plus but not required.
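To give applicants a feel for the push/pull pattern described in the epic, here is a minimal sketch of a synchronous parameter server in Java. All names (ParamServer, init, push, pull) and the server-side SGD update are illustrative assumptions for this sketch, not SystemML's actual paramserv API.

```java
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of a synchronous parameter server.
// Workers pull the current model, compute gradients on their
// data partition, and push gradients back; the server holds the
// (mutable) state and applies a simple SGD update.
public class ParamServer {
    private final ConcurrentHashMap<String, double[]> params = new ConcurrentHashMap<>();
    private final double learningRate;

    public ParamServer(double learningRate) {
        this.learningRate = learningRate;
    }

    // Register a named parameter (e.g., a weight matrix, flattened).
    public void init(String key, double[] value) {
        params.put(key, value.clone());
    }

    // Pull: workers fetch a copy of the current parameters.
    public synchronized double[] pull(String key) {
        return params.get(key).clone();
    }

    // Push: workers send gradients; the server updates its state.
    // Synchronized here models the synchronous mode; dropping the
    // lock would approximate a hogwild!-style update.
    public synchronized void push(String key, double[] gradient) {
        double[] w = params.get(key);
        for (int i = 0; i < w.length; i++) {
            w[i] -= learningRate * gradient[i];
        }
    }

    public static void main(String[] args) {
        ParamServer ps = new ParamServer(0.1);
        ps.init("w", new double[]{1.0, 2.0});
        ps.push("w", new double[]{1.0, 1.0}); // one gradient step
        double[] w = ps.pull("w");
        System.out.println(w[0] + " " + w[1]);
    }
}
```

In the distributed-model architecture from the epic (k1 servers, k2 workers), each server instance would own a shard of the parameters rather than the whole model.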



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
