Re: [hpx-users] Information regarding projects

2017-04-02 Thread Aditya
Hello again,

It would be great if some of you could guide me through the project
selection phase so that I can make my proposal as soon as possible and get
it reviewed too.

Regards,
Aditya




On Sun, Apr 2, 2017 at 5:21 AM, Aditya  wrote:

> Hello again,
>
> It would be great if someone could shed some light on the projects listed below too
>
> 1. Applying Machine Learning Techniques on HPX Parallel Algorithms
> 2. Adding Lustre backend to hpxio
> 3. All to All Communications
>
> I believe I will be suitable for projects 2 and 3 (above). As part of my
> undergrad thesis (mentioned in the earlier email) I worked with Lustre
> briefly (we decided Lustre was overkill for our scenario, as we'd have
> to reorganize data among nodes even after the parallel read). I have
> worked with MPI on several projects (my thesis and projects in the parallel
> computing course) and have a basic understanding of how all-to-all
> communications work.
>
> If someone could explain what would be involved in project 1, it'd be
> great.
>
> Also, please let me know what is expected of the student in projects 2 and
> 3.
>
> Thanks again,
> Aditya
>
>
>
>
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] Information regarding projects

2017-04-02 Thread Hartmut Kaiser
Hey Aditya,

> It would be great if some of you could guide me through the project
> selection phase so that I can make my proposal as soon as possible and get
> it reviewed too.

The machine learning project aims at using ML techniques to select runtime
parameters based on information collected at compile time. For instance, in
order to decide whether to parallelize a particular loop, the compiler looks at
the loop body and extracts certain features, such as the number of operations
or the number of conditionals. It conveys this information to the runtime
system through generated code. The runtime adds a couple of dynamic parameters,
such as the number of requested iterations, and feeds all of this into an ML
model to decide whether to run the loop in parallel or not. We would like to
complement this with a way for users to automatically train the ML model on
their own code.
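
To make the shape of that decision concrete, here is a small, purely
illustrative C++ sketch (the struct names, fields, and weights are made-up
assumptions, not the actual hpxML code):

// Illustrative only: static features emitted by the compiler plus dynamic
// parameters known at run time are fed into a small trained model that
// decides whether to run a loop in parallel.
#include <array>
#include <cmath>
#include <iostream>

struct static_features            // extracted from the loop body at compile time
{
    double num_operations;        // operations in the loop body
    double num_conditionals;      // branches in the loop body
};

struct dynamic_parameters         // only known at run time
{
    double num_iterations;        // requested iteration count
    double num_worker_threads;    // available worker threads
};

struct parallelization_model      // e.g. a trained logistic-regression model
{
    std::array<double, 5> weights;    // bias + one weight per feature

    bool run_in_parallel(static_features s, dynamic_parameters d) const
    {
        double z = weights[0]
                 + weights[1] * s.num_operations
                 + weights[2] * s.num_conditionals
                 + weights[3] * d.num_iterations
                 + weights[4] * d.num_worker_threads;
        double p = 1.0 / (1.0 + std::exp(-z));   // probability parallel wins
        return p > 0.5;
    }
};

int main()
{
    // Made-up weights; real ones would come from training on measurements.
    parallelization_model model{{-2.0, 0.01, -0.5, 0.00002, 0.1}};
    static_features loop_info{120.0, 2.0};
    dynamic_parameters run_info{100000.0, 8.0};
    std::cout << (model.run_in_parallel(loop_info, run_info)
                      ? "parallel" : "sequential")
              << '\n';
}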

I can't say anything about the Lustre backend, except that Lustre is a
high-performance parallel file system which we would like to be able to talk to
directly from HPX. If you don't know what Lustre is, this project is not for you.

All-to-all communications is a nice project, actually. In HPX we sorely need to
implement a set of global communication patterns such as broadcast, allgather,
and alltoall. All of these are well known (see MPI), except that we would like
to adapt them to the asynchronous nature of HPX.
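
Purely to illustrate the asynchronous flavor we are after, here is a toy,
local-only sketch in which std::future stands in for hpx::future and the
distributed machinery (the all_to_all name and signature are assumptions, not
the eventual HPX API):

// Toy model of N participants: send[i][j] is what participant i sends to
// participant j. Each participant gets back a future to the data it receives
// and can keep working until it actually needs that data.
#include <cstddef>
#include <future>
#include <iostream>
#include <vector>

std::vector<std::future<std::vector<int>>>
all_to_all(std::vector<std::vector<int>> const& send)
{
    std::size_t n = send.size();
    std::vector<std::future<std::vector<int>>> results;
    results.reserve(n);
    for (std::size_t j = 0; j < n; ++j)
    {
        results.push_back(std::async(std::launch::async, [&send, j, n]() {
            std::vector<int> recv(n);
            for (std::size_t i = 0; i < n; ++i)
                recv[i] = send[i][j];    // element participant i sends to j
            return recv;
        }));
    }
    return results;
}

int main()
{
    // Three participants, each sending one value to every participant.
    std::vector<std::vector<int>> send = {
        {11, 12, 13}, {21, 22, 23}, {31, 32, 33}};

    auto futures = all_to_all(send);

    // Callers only block when the received data is actually needed -- the
    // asynchronous twist on the MPI-style collective.
    for (std::size_t j = 0; j < futures.size(); ++j)
    {
        std::vector<int> recv = futures[j].get();
        std::cout << "participant " << j << " received:";
        for (int v : recv)
            std::cout << ' ' << v;
        std::cout << '\n';
    }
}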

HTH
Regards Hartmut
---
http://boost-spirit.com
http://stellar.cct.lsu.edu


> 
> Regards,
> Aditya
> 
> 
> 
> On Sun, Apr 2, 2017 at 5:21 AM, Aditya  wrote:
> Hello again,
> 
> It would be great if someone shed light on the below listed projects too
> 
> 1. Applying Machine Learning Techniques on HPX Parallel Algorithms
> 2. Adding Lustre backend to hpxio
> 3. All to All Communications
> 
> I believe I will be suitable for projects 2 and 3 (above). As part of my
> undergrad thesis (mentioned in the earlier email) I worked with Lustre
> briefly (we decided, lustre was an overkill for our scenario as we'd have
> to re organize data among nodes even after the parallel read). I have
> worked with MPI on several projects (my thesis and projects in the
> parallel computing course) and have a basic understanding of all to all
> communications work.
> 
> If someone could explain what would be involved in project 1, it'd be
> great.
> 
> Also, please let me know what is expected of the student in projects 2 and
> 3.
> 
> Thanks again,
> Aditya
> 
> 


___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] Information regarding projects

2017-04-02 Thread Zahra Khatami
Hi Aditya,

Thank you for your interest in the machine learning project. As Dr. Kaiser
explained, the compiler gathers static information for the ML model, which
then selects parameters, such as chunk sizes, for HPX constructs such as
parallel loops.
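
For illustration, here is a minimal sketch of the kind of parameter such a
model would set, written with HPX 1.0-era names (header and namespace
spellings have moved between releases, so treat the exact names as
assumptions):

// Minimal sketch only; the hard-coded chunk size is what the learned model
// would pick automatically in the hpxML setting.
#include <hpx/hpx_main.hpp>
#include <hpx/include/parallel_for_each.hpp>
#include <hpx/include/parallel_executor_parameters.hpp>  // header names vary by release

#include <cstddef>
#include <vector>

int main()
{
    std::vector<double> v(1000000, 1.0);

    std::size_t chunk_size = 128;    // would come from the ML model

    hpx::parallel::for_each(
        hpx::parallel::execution::par.with(
            hpx::parallel::execution::static_chunk_size(chunk_size)),
        v.begin(), v.end(), [](double& x) { x *= 2.0; });

    return 0;
}
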
We have been working on this project for a couple of months, and so far we
have obtained interesting results from our implementation.
Our focus for the summer is to apply our technique to distributed
applications.
So a background in ML and distributed computing would be enough to work on
this topic.
I am pretty sure that this phase will result in a conference paper, as it's
new and super interesting ;)
So if you are interested in this project, go ahead and write your proposal
before the deadline.



Best Regards,

*Zahra Khatami* | PhD Student
Center for Computation & Technology (CCT)
School of Electrical Engineering & Computer Science
Louisiana State University
2027 Digital Media Center (DMC)
Baton Rouge, LA 70803


On Sun, Apr 2, 2017 at 7:04 AM, Hartmut Kaiser 
wrote:

> Hey Aditya,
>
> > It would be great if some of you could guide me through the project
> > selection phase so that I can make my proposal as soon as possible and
> get
> > it reviewed too.
>
> The machine learning project aims at using ML techniques to select runtime
> parameters based on information collected at compile time. For instance in
> order to decide whether to parallelize a particular loop the compiler looks
> at the loop body and extracts certain features, like the number of
> operations or the number of conditionals etc. It conveys this information
> to the runtime system through generated code. The runtime adds a couple of
> dynamic parameters like number of requested iterations and feeds this into
> a ML model to decide whether to run the loop in parallel or not. We would
> like to support this with a way for the user to be able to automatically
> train the ML model on his own code.
>
> I can't say anything about the Lustre backend, except that Lustre is a
> high-performance file system which we would like to be able to directly
> talk to from HPX. If you don't know what Lustre is this is not for you.
>
> All to All communications is a nice project, actually. In HPX we sorely
> need to implement a set of global communication patterns like broadcast,
> allgather, alltoall etc. All of this is well known (see MPI) except that we
> would like to adapt those to the asynchronous nature of HPX.
>
> HTH
> Regards Hartmut
> ---
> http://boost-spirit.com
> http://stellar.cct.lsu.edu
>
>
> >
> > Regards,
> > Aditya
> >
> >
> >
> > On Sun, Apr 2, 2017 at 5:21 AM, Aditya  wrote:
> > Hello again,
> >
> > It would be great if someone shed light on the below listed projects too
> >
> > 1. Applying Machine Learning Techniques on HPX Parallel Algorithms
> > 2. Adding Lustre backend to hpxio
> > 3. All to All Communications
> >
> > I believe I will be suitable for projects 2 and 3 (above). As part of my
> > undergrad thesis (mentioned in the earlier email) I worked with Lustre
> > briefly (we decided, lustre was an overkill for our scenario as we'd have
> > to re organize data among nodes even after the parallel read). I have
> > worked with MPI on several projects (my thesis and projects in the
> > parallel computing course) and have a basic understanding of all to all
> > communications work.
> >
> > If someone could explain what would be involved in project 1, it'd be
> > great.
> >
> > Also, please let me know what is expected of the student in projects 2
> and
> > 3.
> >
> > Thanks again,
> > Aditya
> >
> >
>
>
>
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] FW: Applying machine learning techniques on HPX algorithms

2017-04-02 Thread Zahra Khatami
Hi Madhavan,

Our focus for the summer is to apply our technique to distributed
applications. The ML algorithms involved are basic ones; so far we have
implemented a logistic regression model.
We have introduced a new ClangTool that enables runtime constructs, such as
for_each, to use ML algorithms for selecting their parameters, such as the
chunk size.

Madhavan, as I remember you told me that you have already submitted a proposal
for another project, am I right?
And as far as I know, a student cannot work on more than one project and will
not be paid for more than one.
So I am not sure if you can work on two projects at the same time.



Best Regards,

*Zahra Khatami* | PhD Student
Center for Computation & Technology (CCT)
School of Electrical Engineering & Computer Science
Louisiana State University
2027 Digital Media Center (DMC)
Baton Rouge, LA 70803


On Sun, Apr 2, 2017 at 1:10 AM, #SESHADRI MADHAVAN# <
madhavan...@e.ntu.edu.sg> wrote:

> Hi Zahra,
>
>
>
> I had a brief look at the code. Can I understand the direction in which
> you want to proceed with the project?
>
>
>
> Currently, I see that a few basic algorithms have been implemented over
> the hpx framework. So, am I right to assume that more basic algorithms are
> to be implemented on top of the hpx framework?
>
>
>
> Best Regards,
>
> Madhavan
>
>
>
> *From:* #SESHADRI MADHAVAN#
> *Sent:* Friday, March 31, 2017 12:34 AM
> *To:* 'Zahra Khatami' ;
> hpx-users@stellar.cct.lsu.edu
> *Subject:* RE: [hpx-users] Applying machine learning techniques on HPX
> algorithms
>
>
>
> Hi Zara,
>
>
>
> I have already submitted a proposal for HPX (HPXCL), so I won’t be
> submitting for this one.
>
>
>
> But I shall chip in contribution for this one as well as I find this to be
> an interesting area. My summer is currently free, so I shouldn’t have a
> problem in contributing to this in addition to HPXCL. I will begin by
> taking a look at the code base [1] and shall discuss further with you.
>
>
>
> Best Regards,
>
> Madhavan
>
>
>
> [1] https://github.com/STEllAR-GROUP/hpxML
>
>
>
> *From:* Zahra Khatami [mailto:z.khatam...@gmail.com
> ]
> *Sent:* Friday, March 31, 2017 12:25 AM
> *To:* hpx-users@stellar.cct.lsu.edu; #SESHADRI MADHAVAN# <
> madhavan...@e.ntu.edu.sg>
> *Subject:* Re: [hpx-users] Applying machine learning techniques on HPX
> algorithms
>
>
>
> Hi Madhavan,
>
>
>
> Thank you for your interest. I would be happy to work with you on this
> project.
>
> This project is mainly about combining machine learning techniques,
> compiler optimizations and runtime methods, which is a new and super
> interesting idea for our group at least ;)
>
> We have implemented a major part in connecting these three areas together.
> However, we have tested it on a single node, not on a distributed version.
>
> As you have worked with Hadoop, so I believe that you have a good
> background in a distributed computing area.
>
> For the next step of this project, focused on a Summer, we plan to
> implement our proposed techniques on a distributed applications. The first
> phase of this would be implementing distributed machine learning
> techniques, such NN or SVM.
>
> Then we can analyze big data and design a learning model for our
> algorithms.
>
>
>
> So please start writing your proposal, emphasize on extending ideas about
> implementing distributed machine learning techniques with HPX and targeting
> them for tying compiler and runtime techniques.
>
> The proposal should be submitted before deadline (April 3rd). So I would
> suggest you to give me a first draft earlier, so we can work together for
> its final submission.
>
>
>
>
> Best Regards,
>
> * Zahra Khatami* | PhD Student
>
> Center for Computation & Technology (CCT)
> School of Electrical Engineering & Computer Science
> Louisiana State University
>
> 2027 Digital Media Center (DMC)
>
> Baton Rouge, LA 70803
>
>
>
>
>
> On Thu, Mar 30, 2017 at 11:13 AM, #SESHADRI MADHAVAN# <
> madhavan...@e.ntu.edu.sg> wrote:
>
> Hi Zahra,
>
>
>
> Sorry, for high jacking the previous email thread, changed the subject in
> this one.
>
>
>
> I have proposed the idea for working on HPXCL with Patrick, hence I shall
> not be proposing this as my GSoC project, but I would love to jump into 
> "Applying
> machine learning techniques on HPX algorithms". The project seems
> interesting and I have had some background implementing Machine Learning
> algorithms on Hadoop, predominantly in Java. But I have been through the
> process of designing and optimizing algorithms for execution in parallel
> which I believe will be useful for this. Let me know how I can get started.
>
>
>
> Best Regards,
>
> Madhavan
>
>
>
> *From:* hpx-users-boun...@stellar.cct.lsu.edu [mailto:hpx-users-bounces@
> stellar.cct.lsu.edu ] *On Behalf
> Of *Zahra Khatami
> *Sent:* Thursday, March 30, 2017 11:56 PM
> *To:* denis.bl...@outlook.com
> *Cc:* hpx-users@stellar.cct.lsu.edu
> *Subject:* Re: [hpx-users] [GSoC 2017] Proposal for re-implementing
> hpx:

Re: [hpx-users] FW: Applying machine learning techniques on HPX algorithms

2017-04-02 Thread Hartmut Kaiser

> Our focus in the Summer is to implement our technique on a distributed
> applications. This ML algorithms are a basic ML algorithms, which we have
> implemented logistic regression model so far.
> We have introduces a new ClangTool which make runtime technique, such as
> for_each, to implement ML algorithms for selecting its parameters, such as
> chunk size.
> 
> Madhavan, as I remember you told me that you have already submit a
> proposal for another project, am I right?
> And far as I know, a student cannot work on more than one project, and
> s/he will not be paid for more than one.
> So, I am not sure if you can work on two projects at the same time.

An alternative possibility would be to look at the existing solutions (as
described above) and to devise a method for users to train the models in use
on their own applications.
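
As a rough, purely illustrative sketch of what that could look like (plain
C++, not the actual hpxML tooling; all names and numbers are made up): the
user records, for runs of their own code, the extracted features together
with which variant turned out faster, and fits a small model offline:

// Offline training sketch: fit a logistic-regression model on measurements
// collected from the user's own application runs.
#include <array>
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

struct sample
{
    // Features assumed pre-scaled to comparable ranges (e.g. [0, 1]):
    // operations, conditionals, iterations, worker threads.
    std::array<double, 4> features;
    double label;    // 1.0 if the parallel variant was faster, else 0.0
};

// Batch gradient descent for logistic regression (bias stored in w[0]).
void train(std::vector<sample> const& data, std::array<double, 5>& w,
    double learning_rate, int epochs)
{
    for (int e = 0; e < epochs; ++e)
    {
        std::array<double, 5> grad{};
        for (sample const& s : data)
        {
            double z = w[0];
            for (std::size_t i = 0; i < 4; ++i)
                z += w[i + 1] * s.features[i];
            double p = 1.0 / (1.0 + std::exp(-z));
            double err = p - s.label;
            grad[0] += err;
            for (std::size_t i = 0; i < 4; ++i)
                grad[i + 1] += err * s.features[i];
        }
        for (std::size_t i = 0; i < 5; ++i)
            w[i] -= learning_rate * grad[i] / data.size();
    }
}

int main()
{
    // Made-up measurements the user would collect from instrumented runs.
    std::vector<sample> data = {
        {{0.80, 0.10, 0.90, 0.80}, 1.0},
        {{0.10, 0.00, 0.05, 0.80}, 0.0},
        {{0.70, 0.30, 0.60, 0.40}, 1.0},
        {{0.05, 0.20, 0.02, 0.40}, 0.0}};

    std::array<double, 5> weights{};    // start from zero
    train(data, weights, 0.5, 5000);

    std::cout << "trained weights:";
    for (double w : weights)
        std::cout << ' ' << w;
    std::cout << '\n';
}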

Regards Hartmut
---
http://boost-spirit.com
http://stellar.cct.lsu.edu

> 
> 
> 
> 
> Best Regards,
> 
> Zahra Khatami | PhD Student
> Center for Computation & Technology (CCT)
> School of Electrical Engineering & Computer Science
> Louisiana State University
> 2027 Digital Media Center (DMC)
> Baton Rouge, LA 70803
> 
> 
> On Sun, Apr 2, 2017 at 1:10 AM, #SESHADRI MADHAVAN#
>  wrote:
> Hi Zahra,
> 
> I had a brief look at the code. Can I understand the direction in which
> you want to proceed with the project?
> 
> Currently, I see that a few basic algorithms have been implemented over
> the hpx framework. So, am I right to assume that more basic algorithms are
> to be implemented on top of the hpx framework?
> 
> Best Regards,
> Madhavan
> 
> From: #SESHADRI MADHAVAN#
> Sent: Friday, March 31, 2017 12:34 AM
> To: 'Zahra Khatami' ; hpx-users@stellar.cct.lsu.edu
> Subject: RE: [hpx-users] Applying machine learning techniques on HPX
> algorithms
> 
> Hi Zara,
> 
> I have already submitted a proposal for HPX (HPXCL), so I won’t be
> submitting for this one.
> 
> But I shall chip in contribution for this one as well as I find this to be
> an interesting area. My summer is currently free, so I shouldn’t have a
> problem in contributing to this in addition to HPXCL. I will begin by
> taking a look at the code base [1] and shall discuss further with you.
> 
> Best Regards,
> Madhavan
> 
> [1] https://github.com/STEllAR-GROUP/hpxML
> 
> From: Zahra Khatami [mailto:z.khatam...@gmail.com]
> Sent: Friday, March 31, 2017 12:25 AM
> To: hpx-users@stellar.cct.lsu.edu; #SESHADRI MADHAVAN#
> 
> Subject: Re: [hpx-users] Applying machine learning techniques on HPX
> algorithms
> 
> Hi Madhavan,
> 
> Thank you for your interest. I would be happy to work with you on this
> project.
> This project is mainly about combining machine learning techniques,
> compiler optimizations and runtime methods, which is a new and super
> interesting idea for our group at least ;)
> We have implemented a major part in connecting these three areas together.
> However, we have tested it on a single node, not on a distributed
> version.
> As you have worked with Hadoop, so I believe that you have a good
> background in a distributed computing area.
> For the next step of this project, focused on a Summer, we plan to
> implement our proposed techniques on a distributed applications. The first
> phase of this would be implementing distributed machine learning
> techniques, such NN or SVM.
> Then we can analyze big data and design a learning model for our
> algorithms.
> 
> So please start writing your proposal, emphasize on extending ideas about
> implementing distributed machine learning techniques with HPX and
> targeting them for tying compiler and runtime techniques.
> The proposal should be submitted before deadline (April 3rd). So I would
> suggest you to give me a first draft earlier, so we can work together for
> its final submission.
> 
> 
> 
> Best Regards,
> 
> Zahra Khatami | PhD Student
> Center for Computation & Technology (CCT)
> School of Electrical Engineering & Computer Science
> Louisiana State University
> 2027 Digital Media Center (DMC)
> Baton Rouge, LA 70803
> 
> 
> On Thu, Mar 30, 2017 at 11:13 AM, #SESHADRI MADHAVAN#
>  wrote:
> Hi Zahra,
> 
> Sorry, for high jacking the previous email thread, changed the subject in
> this one.
> 
> I have proposed the idea for working on HPXCL with Patrick, hence I shall
> not be proposing this as my GSoC project, but I would love to jump into
> "Applying machine learning techniques on HPX algorithms". The project
> seems interesting and I have had some background implementing Machine
> Learning algorithms on Hadoop, predominantly in Java. But I have been
> through the process of designing and optimizing algorithms for execution
> in parallel which I believe will be useful for this. Let me know how I can
> get started.
> 
> Best Regards,
> Madhavan
> 
> From: hpx-users-boun...@stellar.cct.lsu.edu [mailto:hpx-users-
> boun...@stellar.cct.lsu.edu] On Behalf Of Zahra Khatami
> Sent: Thursday, March 30, 2017 11:56 PM
> To: denis.bl

[hpx-users] GSOC 2017

2017-04-02 Thread Praveen Velliengiri
I have uploaded my final proposal in the GSoC channels. Please view it and
make some suggestions.
Thank you
Praveenv
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] GSoC 2017 - Proposal Submission and further queries

2017-04-02 Thread Zahra Khatami
Aditya,

I have gone through your proposal. As this application process is competitive,
I suggest you include more details about your previous ML or data mining
course projects.

Regarding the timeline, you can include the following items:
  -- studying AI (artificial intelligence) concepts
  -- proposing methods for implementing AI learning
  -- applying AI methods and ML techniques to distributed HPX applications
for predicting their parameters

It is not necessary for you to go through our code implementation, as we can
discuss it later. However, if you are interested, you can find some of our
work so far at this link:

https://github.com/zkhatami88/runtime-optimizations-using-compiler-and-machine-learning-algorithms

However, I suggest you start working with HPX and run some of its examples
just to warm up ;)
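
For instance, a good first build is something along the lines of the classic
HPX hello-world example (exact header names can differ slightly between HPX
releases):

// Minimal warm-up example, based on the classic HPX hello-world.
#include <hpx/hpx_main.hpp>
#include <hpx/include/iostreams.hpp>

int main()
{
    // Printed from an HPX thread once the runtime is up.
    hpx::cout << "Hello World from HPX!\n" << hpx::flush;
    return 0;
}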


Best Regards,

*Zahra Khatami* | PhD Student
Center for Computation & Technology (CCT)
School of Electrical Engineering & Computer Science
Louisiana State University
2027 Digital Media Center (DMC)
Baton Rouge, LA 70803


On Sun, Apr 2, 2017 at 6:49 PM, Aditya  wrote:

> Hi Zahra,
>
> I am interested in contributing to the project "Applying Machine Learning
> Techniques on HPX Parallel Algorithms". I was wondering if you could tell
> me the nitty gritty details about the project. Possibly, provide me links
> to the code that has already been written as part of this project so that I
> get up to speed.
>
> I do realize that I'm a bit late in making first contact, but I believe I
> can make a meaningful contribution in the next few weeks to improve my
> competence to contribute to this project. I have a good working knowledge
> of Machine Learning Algorithms as well as experience of working with
> distributed systems.
>
> This
> 
>  is
> the link to my proposal. It is incomplete at the moment. I still have to
> provide the tentative timeline. I guessed I'd be able to do that after
> speaking with you.
>
> Hoping to hear back from you soon.
>
> Thanks,
> Aditya
>
>
>
>
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] GSoC 2017 - Proposal Submission and further queries

2017-04-02 Thread Aditya
Hi Zahra,

Thank you, I'll complete the proposal in an hour and let you know.

I have set up HPX and was able to run some examples on my PC, thanks to the
people on the IRC channel.

Regards,
Aditya




On Mon, Apr 3, 2017 at 8:02 AM, Zahra Khatami  wrote:

> Aditya,
>
> I have went through your proposal. As this applying process is
> competitive, so I suggest you to include more details about your previous
> ML or data mining course projects.
>
> About the time line, you can include following subjects:
>   -- studying about AI (artificial intelligent) concepts
>   -- proposing methods for implementing AI learning
>   -- applying AI methods and ML techniques for the distributed HPX
> applications for predicting their parameters
>
> It is not necessary for you to go through our code implementation, as we
> can discuss about it later. However if you are interested, you can find
> some of our work so far in this link:
>
> https://github.com/zkhatami88/runtime-optimizations-using-
> compiler-and-machine-learning-algorithms
>
> However, I suggest you to start working with HPX and running some of its
> example just for warming up ;)
>
>
> Best Regards,
>
> *Zahra Khatami* | PhD Student
> Center for Computation & Technology (CCT)
> School of Electrical Engineering & Computer Science
> Louisiana State University
> 2027 Digital Media Center (DMC)
> Baton Rouge, LA 70803
>
>
> On Sun, Apr 2, 2017 at 6:49 PM, Aditya  wrote:
>
>> Hi Zahra,
>>
>> I am interested in contributing to the project "Applying Machine
>> Learning Techniques on HPX Parallel Algorithms". I was wondering if you
>> could tell me the nitty gritty details about the project. Possibly,
>> provide me links to the code that has already been written as part of this
>> project so that I get up to speed.
>>
>> I do realize that I'm a bit late in making first contact, but I believe I
>> can make a meaningful contribution in the next few weeks to improve my
>> competence to contribute to this project. I have a good working knowledge
>> of Machine Learning Algorithms as well as experience of working with
>> distributed systems.
>>
>> This
>> 
>>  is
>> the link to my proposal. It is incomplete at the moment. I still have to
>> provide the tentative timeline. I guessed I'd be able to do that after
>> speaking with you.
>>
>> Hoping to hear back from you soon.
>>
>> Thanks,
>> Aditya
>>
>>
>>
>>
>
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users