Re: [hpx-users] [stellar-internals] C++ Lecture Series

2016-09-28 Thread Zahra Khatami
Cool! I am following you on YouTube :)

Best Regards,

*Zahra Khatami* | PhD Student
Center for Computation & Technology (CCT)
School of Electrical Engineering & Computer Science
Louisiana State University
2027 Digital Media Center (DMC)
Baton Rouge, LA 70803


On Wed, Sep 28, 2016 at 8:29 AM, Adrian Serio  wrote:

> All,
>
> Tomorrow, Thursday September 29th, at 1:30pm the STE||AR Group will
> present a public lecture in our C++ Lecture series. This presentation
> will focus on threading in HPX. The lecture will be given in room
> 1014 DMC and will be recorded. Previous lectures can be viewed
> here: https://www.youtube.com/channel/UCymrreFOGmolvjYCL67Sx5w
>
> All are welcome to attend.
>
> --
> Adrian Serio
> Scientific Program Coordinator
> 2118 Digital Media Center
> 225.578.8506
>
> ___
> stellar-internals mailing list
> stellar-intern...@stellar.cct.lsu.edu
> https://mail.cct.lsu.edu/mailman/listinfo/stellar-internals
>
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] [GSoC 2017] Proposal for re-implementing hpx::util::unwrapped

2017-03-30 Thread Zahra Khatami
Hi Denis,

I am so glad that you are interested in HPX GSoC.
I have looked at your GitHub, and your projects seem very interesting to
me. Feel free to write your proposal and submit it before April 3rd. I
would be happy to be your mentor, as I have found that your background
matches my current projects as well. If you go through

https://github.com/STEllAR-GROUP/hpx/wiki/GSoC-2017-Project-Ideas#re-implement-hpxutilunwrapped

You will find a project, "Applying machine learning techniques on HPX
algorithms", which I think could be a good fit for you too. Our team has
been working on it for the past 2-3 months, and so far we have obtained
interesting results, which are being prepared for a conference paper. In
this project we are using LLVM and Clang LibTooling to apply machine
learning techniques to HPX parallel algorithms, and we have applied and
tested them on an HPX loop.
So, as another option, you could look at this GSoC project idea and write a
brief proposal about how you would implement it.



Best Regards,

*Zahra Khatami* | PhD Student
Center for Computation & Technology (CCT)
School of Electrical Engineering & Computer Science
Louisiana State University
2027 Digital Media Center (DMC)
Baton Rouge, LA 70803


On Thu, Mar 30, 2017 at 10:29 AM, Denis Blank 
wrote:

> Hello HPX Developers,
>
> I'm Denis, an Informatics Student at the Technical University of Munich,
> Germany.
>
> In the summer semester, I'm transitioning to my master's program, and
> thus I will finally have enough time to participate in
> GSoC.
>
> I'm very keen on Open-Source because you always learn something new about
> various topics
> not covered in studies, and you get connected to other developers around
> the world.
>
> Thus I'm highly active on GitHub (https://github.com/Naios); I also
> recently started
> attending various conferences such as MeetingC++ or EuroLLVM.
>
> In my spare time, I'm working on side projects closely related to the field
> HPX is covering,
> like a library for compile-time-known continuation chaining, continuable
> [0].
>
> I'm also a member of the TrinityCore [1] Open-Source project, where I have
> been contributing
> for 6 years now (besides other projects like fmtlib or ANTLR).
>
> HPX is very attractive to me as a potential GSoC project
> because of its high-quality codebase as well as its impact on
> today's important infrastructure for parallel computing.
>
> During my work on previous side projects and contributions, I gathered
> significant knowledge of C++ template meta-programming as well as
> designing good APIs.
> My bachelor's thesis was also about improving meta-programming in static
> languages.
>
> Thus I want to work on improving the API of HPX, especially the
> `hpx::util::unwrapped` function.
> While browsing the issue tracker I spotted other related issues,
> not mentioned in the existing proposal, such as the requirement
> of a unified waiter API for arbitrary objects (#1132 [2]).
>
> My plans for a potential GSoC stipend include a complete rewrite of the
> `hpx::util::unwrapped` function, in order to use a newly designed waiter
> and unwrapping internal API which picks up the ideas mentioned in #1132,
> to fully support the requirements of issues #1404, #1400 and #1126.
> The API should also replace the existing internal solutions of:
>
>   - `dataflow`
>   - `wait_all`
>   - `when_all`
>
> in order to remove a lot of duplicated code (`when_all_frame` and
> `wait_all_frame`),
> as well as to make the API consistent across these functions.
> We could also make the following mapping of parameter types
> available
> to all of the functions I mentioned above:
>
>   - Args...  -> Args... (Ready types)
>   - Tuple  -> Args... (Ready fixed-range)
>   - hpx::future<Tuple>  -> Args... (Waitable fixed-range
> futures)
> > Where Tuple is an object that is unwrappable through a sequenced call
> > of std::get<N>(tuple)..., which includes `std::pair`, `std::tuple`,
> > `hpx::tuple` and potentially `std::array`.
>   - Container<hpx::future<T>>  -> Container<T>
> > Where Container is an object satisfying the range requirements
> > (`begin()` and `end()`), which makes it possible to use
> > any arbitrary standard or user-given container.
>
> The new internal API could use function overloading instead of heavy
> SFINAE,
> so we can also slightly improve build performance there (as raised in
> #950 [3]).
>
> Given my current knowledge, I'm confident I can complete these features,
> as well as appropriate unit tests, in 2 months.
> Also since I've implemented similar ca

Re: [hpx-users] Applying machine learning techniques on HPX algorithms

2017-03-30 Thread Zahra Khatami
Hi Madhavan,

Thank you for your interest. I would be happy to work with you on this
project.
This project is mainly about combining machine learning techniques,
compiler optimizations and runtime methods, which is a new and super
interesting idea for our group at least ;)
We have implemented a major part of connecting these three areas.
However, we have only tested it on a single node, not in a distributed
setting.
As you have worked with Hadoop, I believe that you have a good background
in distributed computing.
For the next step of this project, the focus of the summer, we plan to
implement our proposed techniques on distributed applications. The first
phase of this would be implementing distributed machine learning
techniques, such as NN or SVM.
Then we can analyze big data and design a learning model for our
algorithms.

So please start writing your proposal, emphasizing ideas for implementing
distributed machine learning techniques with HPX and targeting them at
tying compiler and runtime techniques together.
The proposal must be submitted before the deadline (April 3rd), so I would
suggest giving me a first draft earlier, so that we can work together on
its final submission.


Best Regards,

*Zahra Khatami* | PhD Student
Center for Computation & Technology (CCT)
School of Electrical Engineering & Computer Science
Louisiana State University
2027 Digital Media Center (DMC)
Baton Rouge, LA 70803


On Thu, Mar 30, 2017 at 11:13 AM, #SESHADRI MADHAVAN# <
madhavan...@e.ntu.edu.sg> wrote:

> Hi Zahra,
>
>
>
> Sorry for hijacking the previous email thread; I changed the subject in
> this one.
>
>
>
> I have proposed the idea of working on HPXCL with Patrick, hence I shall
> not be proposing this as my GSoC project, but I would love to jump into
> "Applying machine learning techniques on HPX algorithms". The project seems
> interesting, and I have some background implementing machine learning
> algorithms on Hadoop, predominantly in Java. I have also been through the
> process of designing and optimizing algorithms for execution in parallel,
> which I believe will be useful for this. Let me know how I can get started.
>
>
>
> Best Regards,
>
> Madhavan
>
>
>
> *From:* hpx-users-boun...@stellar.cct.lsu.edu [mailto:hpx-users-bounces@
> stellar.cct.lsu.edu ] *On Behalf
> Of *Zahra Khatami
> *Sent:* Thursday, March 30, 2017 11:56 PM
> *To:* denis.bl...@outlook.com
> *Cc:* hpx-users@stellar.cct.lsu.edu
> *Subject:* Re: [hpx-users] [GSoC 2017] Proposal for re-implementing
> hpx::util::unwrapped
>
>
>
> Hi Denis,
>
>
>
> I am so glad that you are interested in HPX GSOC.
>
> I have looked at your GitHub, and your projects seem very interesting to
> me. Feel free to write your proposal and submit it before April 3rd. I
> would be happy to be your mentor, as I have found that your background
> matches my current projects as well. If you go through
>
>
>
> https://github.com/STEllAR-GROUP/hpx/wiki/GSoC-2017-Project-Ideas#re-implement-hpxutilunwrapped
>
>
>
> You will find a project, "Applying machine learning techniques on HPX
> algorithms", which I think could be a good fit for you too. Our team has
> been working on it for the past 2-3 months, and so far we have obtained
> interesting results, which are being prepared for a conference paper. In
> this project we are using LLVM and Clang LibTooling to apply machine
> learning techniques to HPX parallel algorithms, and we have applied and
> tested them on an HPX loop.
>
> So, as another option, you could look at this GSoC project idea and write a
> brief proposal about how you would implement it.
>
>
> Best Regards,
>
> * Zahra Khatami* | PhD Student
>
> Center for Computation & Technology (CCT)
> School of Electrical Engineering & Computer Science
> Louisiana State University
>
> 2027 Digital Media Center (DMC)
>
> Baton Rouge, LA 70803
>
>
>
>
>
> On Thu, Mar 30, 2017 at 10:55 AM, Patrick Diehl 
> wrote:
>
> Hi Denis,
>
> the idea sounds good. For GSoC, you should submit your proposal on their
> official website. You can use this template [0] and our guidelines [1]
> to prepare your proposal. The deadline for the submission is
>
> > April 3 16:00 UTC Student application deadline
>
> We are looking forward to reviewing your proposal.
>
> Best,
>
> Patrick
>
> [0] https://github.com/STEllAR-GROUP/hpx/wiki/GSoC-Submission-Template
>
> [1] https://github.com/STEllAR-GROUP/hpx/wiki/Hints-for-Successful-Proposals
>
>
> On 30/03/17 11:29 AM, Denis Blank wrote:
> > Hello HPX Developers,
> >
> > I'm Denis, an Informatics Student at the Technical University of Munic

Re: [hpx-users] Information regarding projects

2017-04-02 Thread Zahra Khatami
Hi Aditya,

Thank you for your interest in the machine learning project. As Dr. Kaiser
explained, a compiler gathers static information for ML, then ML selects
the parameters, such as chunk sizes, for HPX's techniques, such as loops.
We have worked on this project for the past couple of months, and so far we
have obtained interesting results from our implementation.
Our focus for the summer is to implement our technique on distributed
applications.
So if you have a background in ML and distributed computing, that would be
enough to work on this topic.
I am pretty sure that this phase will result in a conference paper, as it
is new and super interesting ;)
So if you are interested in this project, go ahead and write your proposal
before the deadline.



Best Regards,

*Zahra Khatami* | PhD Student
Center for Computation & Technology (CCT)
School of Electrical Engineering & Computer Science
Louisiana State University
2027 Digital Media Center (DMC)
Baton Rouge, LA 70803


On Sun, Apr 2, 2017 at 7:04 AM, Hartmut Kaiser 
wrote:

> Hey Aditya,
>
> > It would be great if some of you could guide me through the project
> > selection phase so that I can make my proposal as soon as possible and
> get
> > it reviewed too.
>
> The machine learning project aims at using ML techniques to select runtime
> parameters based on information collected at compile time. For instance in
> order to decide whether to parallelize a particular loop the compiler looks
> at the loop body and extracts certain features, like the number of
> operations or the number of conditionals etc. It conveys this information
> to the runtime system through generated code. The runtime adds a couple of
> dynamic parameters like number of requested iterations and feeds this into
> a ML model to decide whether to run the loop in parallel or not. We would
> like to support this with a way for the user to be able to automatically
> train the ML model on his own code.
>
> I can't say anything about the Lustre backend, except that Lustre is a
> high-performance file system which we would like to be able to directly
> talk to from HPX. If you don't know what Lustre is this is not for you.
>
> All to All communications is a nice project, actually. In HPX we sorely
> need to implement a set of global communication patterns like broadcast,
> allgather, alltoall etc. All of this is well known (see MPI) except that we
> would like to adapt those to the asynchronous nature of HPX.
>
> HTH
> Regards Hartmut
> ---
> http://boost-spirit.com
> http://stellar.cct.lsu.edu
>
>
> >
> > Regards,
> > Aditya
> >
> >
> >
> > On Sun, Apr 2, 2017 at 5:21 AM, Aditya  wrote:
> > Hello again,
> >
> > It would be great if someone shed light on the below listed projects too
> >
> > 1. Applying Machine Learning Techniques on HPX Parallel Algorithms
> > 2. Adding Lustre backend to hpxio
> > 3. All to All Communications
> >
> > I believe I will be suitable for projects 2 and 3 (above). As part of my
> > undergrad thesis (mentioned in the earlier email) I worked with Lustre
> > briefly (we decided, lustre was an overkill for our scenario as we'd have
> > to re organize data among nodes even after the parallel read). I have
> > worked with MPI on several projects (my thesis and projects in the
> > parallel computing course) and have a basic understanding of all to all
> > communications work.
> >
> > If someone could explain what would be involved in project 1, it'd be
> > great.
> >
> > Also, please let me know what is expected of the student in projects 2
> and
> > 3.
> >
> > Thanks again,
> > Aditya
> >
> >
>
>
>


Re: [hpx-users] FW: Applying machine learning techniques on HPX algorithms

2017-04-02 Thread Zahra Khatami
Hi Madhavan,

Our focus for the summer is to implement our technique on distributed
applications. The ML algorithms involved are basic ones; so far we have
implemented a logistic regression model.
We have introduced a new ClangTool which enables runtime techniques, such
as for_each, to use ML algorithms for selecting their parameters, such as
the chunk size.

Madhavan, as I remember, you told me that you have already submitted a
proposal for another project, am I right?
As far as I know, a student cannot work on more than one project, and will
not be paid for more than one.
So, I am not sure whether you can work on two projects at the same time.



Best Regards,

*Zahra Khatami* | PhD Student
Center for Computation & Technology (CCT)
School of Electrical Engineering & Computer Science
Louisiana State University
2027 Digital Media Center (DMC)
Baton Rouge, LA 70803


On Sun, Apr 2, 2017 at 1:10 AM, #SESHADRI MADHAVAN# <
madhavan...@e.ntu.edu.sg> wrote:

> Hi Zahra,
>
>
>
> I had a brief look at the code. Could you clarify the direction in which
> you want to proceed with the project?
>
>
>
> Currently, I see that a few basic algorithms have been implemented over
> the HPX framework. So, am I right to assume that more basic algorithms are
> to be implemented on top of the HPX framework?
>
>
>
> Best Regards,
>
> Madhavan
>
>
>
> *From:* #SESHADRI MADHAVAN#
> *Sent:* Friday, March 31, 2017 12:34 AM
> *To:* 'Zahra Khatami' ;
> hpx-users@stellar.cct.lsu.edu
> *Subject:* RE: [hpx-users] Applying machine learning techniques on HPX
> algorithms
>
>
>
> Hi Zahra,
>
>
>
> I have already submitted a proposal for HPX (HPXCL), so I won’t be
> submitting for this one.
>
>
>
> But I shall chip in on this one as well, as I find it to be
> an interesting area. My summer is currently free, so I shouldn’t have a
> problem contributing to this in addition to HPXCL. I will begin by
> taking a look at the code base [1] and shall discuss further with you.
>
>
>
> Best Regards,
>
> Madhavan
>
>
>
> [1] https://github.com/STEllAR-GROUP/hpxML
>
>
>
> *From:* Zahra Khatami [mailto:z.khatam...@gmail.com
> ]
> *Sent:* Friday, March 31, 2017 12:25 AM
> *To:* hpx-users@stellar.cct.lsu.edu; #SESHADRI MADHAVAN# <
> madhavan...@e.ntu.edu.sg>
> *Subject:* Re: [hpx-users] Applying machine learning techniques on HPX
> algorithms
>
>
>
> Hi Madhavan,
>
>
>
> Thank you for your interest. I would be happy to work with you on this
> project.
>
> This project is mainly about combining machine learning techniques,
> compiler optimizations and runtime methods, which is a new and super
> interesting idea for our group at least ;)
>
> We have implemented a major part of connecting these three areas.
> However, we have only tested it on a single node, not in a distributed setting.
>
> As you have worked with Hadoop, I believe that you have a good
> background in distributed computing.
>
> For the next step of this project, the focus of the summer, we plan to
> implement our proposed techniques on distributed applications. The first
> phase of this would be implementing distributed machine learning
> techniques, such as NN or SVM.
>
> Then we can analyze big data and design a learning model for our
> algorithms.
>
>
>
> So please start writing your proposal, emphasizing ideas for implementing
> distributed machine learning techniques with HPX and targeting them at
> tying compiler and runtime techniques together.
>
> The proposal must be submitted before the deadline (April 3rd), so I would
> suggest giving me a first draft earlier, so that we can work together on
> its final submission.
>
>
>
>
> Best Regards,
>
> * Zahra Khatami* | PhD Student
>
> Center for Computation & Technology (CCT)
> School of Electrical Engineering & Computer Science
> Louisiana State University
>
> 2027 Digital Media Center (DMC)
>
> Baton Rouge, LA 70803
>
>
>
>
>
> On Thu, Mar 30, 2017 at 11:13 AM, #SESHADRI MADHAVAN# <
> madhavan...@e.ntu.edu.sg> wrote:
>
> Hi Zahra,
>
>
>
> Sorry for hijacking the previous email thread; I changed the subject in
> this one.
>
>
>
> I have proposed the idea of working on HPXCL with Patrick, hence I shall
> not be proposing this as my GSoC project, but I would love to jump into
> "Applying machine learning techniques on HPX algorithms". The project seems
> interesting, and I have some background implementing machine learning
> algorithms on Hadoop, predominantly in Java. I have also been through the
> process of designing and optimizing algo

Re: [hpx-users] GSoC 2017 - Proposal Submission and further queries

2017-04-02 Thread Zahra Khatami
Aditya,

I have gone through your proposal. As this application process is
competitive, I suggest you include more details about your previous ML or
data mining course projects.

Regarding the timeline, you can include the following subjects:
  -- studying AI (artificial intelligence) concepts
  -- proposing methods for implementing AI learning
  -- applying AI methods and ML techniques to distributed HPX
applications for predicting their parameters

It is not necessary for you to go through our code implementation, as we
can discuss it later. However, if you are interested, you can find some of
our work so far at this link:

https://github.com/zkhatami88/runtime-optimizations-using-compiler-and-machine-learning-algorithms

Meanwhile, I suggest you start working with HPX and run some of its
examples, just to warm up ;)


Best Regards,

*Zahra Khatami* | PhD Student
Center for Computation & Technology (CCT)
School of Electrical Engineering & Computer Science
Louisiana State University
2027 Digital Media Center (DMC)
Baton Rouge, LA 70803


On Sun, Apr 2, 2017 at 6:49 PM, Aditya  wrote:

> Hi Zahra,
>
> I am interested in contributing to the project "Applying Machine Learning
> Techniques on HPX Parallel Algorithms". I was wondering if you could tell
> me the nitty-gritty details about the project and possibly provide links
> to the code that has already been written as part of this project, so that I
> can get up to speed.
>
> I do realize that I'm a bit late in making first contact, but I believe I
> can make a meaningful contribution in the next few weeks to improve my
> competence to contribute to this project. I have a good working knowledge
> of Machine Learning Algorithms as well as experience of working with
> distributed systems.
>
> This
> <https://docs.google.com/document/d/1D4LetvgLeTZHZswih7KEd5rgJfstMacE2zGI_KXJr7M/edit?usp=sharing>
>  is
> the link to my proposal. It is incomplete at the moment. I still have to
> provide the tentative timeline; I figured I'd be able to do that after
> speaking with you.
>
> Hoping to hear back from you soon.
>
> Thanks,
> Aditya
>
>
>
>


Re: [hpx-users] GSoC 2017 - Proposal Submission and further queries

2017-04-03 Thread Zahra Khatami
Hi Aditya,

Thank you for your proposal, and good luck ;) We will find out about your
application after its final review.
However, if you are interested, you can start reading publications
regarding implementing AI for online learning.
It would be great if you pick up some ideas about this implementation
(preferably in C++).
Then, if you are selected for this GSoC program, we can start implementing
it on HPX algorithms to improve their performance.

Best Regards,

*Zahra Khatami* | PhD Student
Center for Computation & Technology (CCT)
School of Electrical Engineering & Computer Science
Louisiana State University
2027 Digital Media Center (DMC)
Baton Rouge, LA 70803


On Mon, Apr 3, 2017 at 1:02 PM, Aditya  wrote:

> Hi Zahra,
>
> I have successfully submitted the proposal in the GSoC portal. Now that
> I'm done with the application process, I am looking forward to making some
> contributions to HPX. It would be great if you could give me a starting point,
> such as some reading material I could follow or any issue that I could
> contribute to.
>
> Eagerly awaiting your reply.
>
> Regards,
> Aditya
>
>
>
>
> On Mon, Apr 3, 2017 at 1:57 PM, Aditya  wrote:
>
>> Hi again Zahra,
>>
>> I have shared the final proposal with STE||AR from the GSoC portal. This
>> <https://docs.google.com/document/d/1D4LetvgLeTZHZswih7KEd5rgJfstMacE2zGI_KXJr7M/edit?usp=sharing>
>>  is
>> the Google Drive link for the same. I hope it meets the expectations of the
>> community. I am eagerly looking forward to your feedback so that I can
>> modify the proposal as suggested.
>>
>> Also, This
>> <https://drive.google.com/file/d/0B9FC-UHNNCpZUGlQZ0cyTWNCU1U/view?usp=sharing>
>>  is
>> a link to my CV. I hope my proposal and CV persuade you to choose me for
>> the project.
>>
>> Looking forward to being a part of the STE||AR Community.
>>
>> Best Regards,
>> Aditya
>>
>>
>>
>>
>> On Mon, Apr 3, 2017 at 8:10 AM, Aditya  wrote:
>>
>>> Hi Zahra,
>>>
>>> Thank you, I'll complete the proposal in an hour and let you know.
>>>
>>> I have set up HPX and was able to run some examples on my PC, thanks to
>>> the people on the IRC channel.
>>>
>>> Regards,
>>> Aditya
>>>
>>>
>>>
>>>
>>> On Mon, Apr 3, 2017 at 8:02 AM, Zahra Khatami 
>>> wrote:
>>>
>>>> Aditya,
>>>>
>>>> I have gone through your proposal. As this application process is
>>>> competitive, I suggest you include more details about your previous
>>>> ML or data mining course projects.
>>>>
>>>> Regarding the timeline, you can include the following subjects:
>>>>   -- studying AI (artificial intelligence) concepts
>>>>   -- proposing methods for implementing AI learning
>>>>   -- applying AI methods and ML techniques to distributed HPX
>>>> applications for predicting their parameters
>>>>
>>>> It is not necessary for you to go through our code implementation, as
>>>> we can discuss it later. However, if you are interested, you can find
>>>> some of our work so far at this link:
>>>>
>>>> https://github.com/zkhatami88/runtime-optimizations-using-compiler-and-machine-learning-algorithms
>>>>
>>>> Meanwhile, I suggest you start working with HPX and run some of its
>>>> examples, just to warm up ;)
>>>>
>>>>
>>>> Best Regards,
>>>>
>>>> *Zahra Khatami* | PhD Student
>>>> Center for Computation & Technology (CCT)
>>>> School of Electrical Engineering & Computer Science
>>>> Louisiana State University
>>>> 2027 Digital Media Center (DMC)
>>>> Baton Rouge, LA 70803
>>>>
>>>>
>>>> On Sun, Apr 2, 2017 at 6:49 PM, Aditya 
>>>> wrote:
>>>>
>>>>> Hi Zahra,
>>>>>
>>>>> I am interested in contributing to the project "Applying Machine
>>>>> Learning Techniques on HPX Parallel Algorithms". I was wondering if you
>>>>> could tell me the nitty gritty details about the project. Possibly,
>>>>> provide me links to the code that has already been written as part of this
>>>>> project so that I get up to speed.
>>>>>
>>>>> I do realize that I'm a bit late in making first contact, but I
>>>>> believe I can make a meaningful contribution in the next few weeks to
>>>>> improve my competence to contribute to this project. I have a good working
>>>>> knowledge of Machine Learning Algorithms as well as experience of working
>>>>> with distributed systems.
>>>>>
>>>>> This
>>>>> <https://docs.google.com/document/d/1D4LetvgLeTZHZswih7KEd5rgJfstMacE2zGI_KXJr7M/edit?usp=sharing>
>>>>>  is
>>>>> the link to my proposal. It is incomplete at the moment. I still have to
>>>>> provide the tentative timeline. I guessed I'd be able to do that after
>>>>> speaking with you.
>>>>>
>>>>> Hoping to hear back from you soon.
>>>>>
>>>>> Thanks,
>>>>> Aditya
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>


Re: [hpx-users] GSoC 2018, on "applying machine learning technques ..." project

2018-02-19 Thread Zahra Khatami
Hi Ray,

If you refer to the published paper, you can get more information.
Generally speaking, this project uses the compiler and the runtime system
to gather both static and dynamic information to set HPX algorithm
parameters, such as chunk sizes, efficiently. Static information is
gathered by the compiler; we used Clang and developed a new Clang class
for this purpose. Dynamic information is gathered by new HPX policies that
we developed for this purpose. You can look at the example in HPXML in
the HPX GitHub repository.

Thanks,
Zahra

On Mon, Feb 19, 2018 at 9:04 AM 김규래  wrote:

> Hi Adrian,
>
> Thanks for clarifying.
>
> I think I pretty much get the picture.
>
>
>
> Looking forward to getting in touch with Patrick on IRC this week.
>
>
>
> Thanks everyone.
>
>
>
> msca8h at naver dot com
>
> msca8h at sogang dot ac dot kr
>
> Ray Kim
>
-- 
Best Regards,

*Zahra Khatami* | PhD Student
Center for Computation & Technology (CCT)
School of Electrical Engineering & Computer Science
Louisiana State University
2027 Digital Media Center (DMC)
Baton Rouge, LA 70803


Re: [hpx-users] GSOC 2018, question about "smart executors" paper

2018-02-21 Thread Zahra Khatami
Hi Ray,

In my research, these parameters are also found heuristically. Basically, we
tested our framework on HPX for_each using different selected chunk sizes
each time. These loops had different (static and dynamic) parameters, which
reacted differently to those chunk-size candidates. We then determined which
chunk size resulted in better performance for each of those loops. That is
how we collected our training data, which we then used to train our model.
You can find the training data in HPXML on the HPX GitHub.
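The labeling procedure described here (run the same loop once per chunk-size
candidate and keep the fastest) can be sketched in plain Python. Everything
below is a hypothetical stand-in, not the actual HPXML tooling: the
thread-pool loop mimics a parallel for_each, and the single "iterations"
feature stands in for the real static/dynamic feature set.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_with_chunk_size(data, work, chunk_size):
    """Time one execution of `work` over `data`, split into chunks."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    start = time.perf_counter()
    with ThreadPoolExecutor() as pool:
        # each task processes one chunk, mimicking a parallel for_each
        list(pool.map(lambda c: [work(x) for x in c], chunks))
    return time.perf_counter() - start

def label_loop(data, work, candidates):
    """Return (features, best_chunk_size): one labeled training example."""
    timings = {c: run_with_chunk_size(data, work, c) for c in candidates}
    best = min(timings, key=timings.get)
    features = {"iterations": len(data)}   # hypothetical static feature
    return features, best

if __name__ == "__main__":
    data = list(range(10_000))
    candidates = [10, 100, 1000, 5000]     # candidate chunk sizes
    features, best = label_loop(data, lambda x: x * x, candidates)
    print(features, "-> best chunk size:", best)
```

Repeating this over many loops with different feature values yields the kind
of (features, label) table a classifier can be trained on.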

Thanks,
Zahra,

On Wed, Feb 21, 2018 at 4:21 AM 김규래  wrote:

> Hi Zahra,
> I've been reading your amazing paper for quite a while.
> There's one thing I cannot find an answer to.
>
> What were the label data that the models were trained on?
> I cannot find explanation about how 'optimal chunk size' and 'optimal
> prefetching distance' labels were collected.
>
> Previous work mostly states heuristically found labels.
> In the case of your paper, how was this done?
>
> My respects.
> msca8h at naver dot com
> msca8h at sogang dot ac dot kr
> Ray Kim
>


Re: [hpx-users] Generating Data for HPX smart executors

2018-02-21 Thread Zahra Khatami
Gabriel,

I am not sure if I understand your concern correctly. The optimal parameters
(chunk size, prefetching distance, or policies) are not meant to be found
before training; they are found for each HPX loop at runtime, based on the
loop's static and dynamic parameters. That is the main goal of this
research. The candidate values for these optimal parameters are chosen when
training the model, and the optimal one is then selected from among them at
runtime, which may differ for each loop with different parameters.

Thanks,
Zahra,

On Tue, Feb 20, 2018 at 7:51 AM Gabriel Laberge 
wrote:

> Hi,
> I had a questions on the way data was generated in order to train the
> logistics regressions models talked about in [0]
> https://arxiv.org/pdf/1711.01519.pdf
> For each of the training examples, the optimal execution
> policies,chunk sizes and prefetching distance had to be found before
> the training process in order to have good data. I wonder if the
> optimal parameters for the training examples were found by trial and
> error or if there is another technique.
> Thank you..
>
>
>
> ___
> hpx-users mailing list
> hpx-users@stellar.cct.lsu.edu
> https://mail.cct.lsu.edu/mailman/listinfo/hpx-users
>


Re: [hpx-users] HPX smart executors questions

2018-02-21 Thread Zahra Khatami
Hi Gabriel,

Thanks for your interest in this project.
The logistic regression model was chosen since it had been used in similar
projects before, but this project should easily work with other learning
models too. A binary regression model was chosen for selecting the optimum
policy, since the target was to choose either sequential or parallel as the
policy (0 or 1 -> binary). For chunk sizes and prefetching distances, the
optimum parameter was chosen from more than two candidates, which is why we
used a multinomial regression model.
About your last question: do you mean using one training data set and one
model to choose chunk sizes, prefetching distances, and policies together? I
don't think that is a good idea, since each of them has different candidates
and needs completely different training data.
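The binary policy choice described above can be illustrated with a
dependency-free sketch of logistic regression trained by gradient descent.
The single feature (a normalized iteration count) and the toy labels are
invented for illustration; they are not the features used in the paper.

```python
import math

def train_logistic(samples, labels, lr=0.1, epochs=2000):
    """Plain gradient-descent binary logistic regression (0 = seq, 1 = par)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid probability of "par"
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict_policy(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return "par" if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else "seq"

# Toy training set: feature = [normalized iteration count]; small loops
# are labeled sequential (0), large ones parallel (1).
X = [[0.1], [0.2], [0.3], [0.7], [0.8], [0.9]]
y = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(X, y)
print(predict_policy(w, b, [0.15]), predict_policy(w, b, [0.85]))
```

A multinomial (softmax) version of the same idea, with one output per
candidate, would cover the chunk-size and prefetching-distance cases.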

Thanks,
Zahra

On Mon, Feb 19, 2018 at 3:44 PM Gabriel Laberge 
wrote:

>
> Hi
> I'm Gabriel Laberge and i'm interested in doing the ""Applying Machine
> Learning Techniques on HPX Parallel Algorithms"" project. I'm quite
> new to machine learning but I expect to learn a lot during the
> project.  I had a few questions to ask you about the HPX smart
> executors from reading the article.
>
>
> First off, why was logistic regression chosen over the other methods that
> you cited in the article (NN, SVM and decision tree)? Would it be
> possible to implement those methods in one compilation?
>
> Secondly, I was wondering why you used a binary regression to choose
> between sequential and parallel algorithms and a multinomial
> regression to choose the chunk size and prefetching distance. Would
> there be a possibility to use only one regression to choose all 3
> parameters?
>
> Thank you for your time.
> Gabriel.
>
> --


Re: [hpx-users] Data for HPX machine learning

2018-02-27 Thread Zahra Khatami
Hi Gabriel,

About your cat example: you understand the machine learning concepts
correctly, and you are on the right track!
About your question: when collecting training data, we do know which policy
is the optimum one for each HPX loop. But how do we know that? We actually
run that test loop twice, first with the sequential policy and then with the
parallel policy; by comparing their execution times we can find out which
policy worked better for that specific loop. That is how we collected the
training data.
For testing, we use those training data to predict the policy of a
completely new HPX loop. Imagine that, after executing our learning model,
its policy was chosen to be parallel. To check whether our model was right,
we run that new loop again with its policy set to sequential. That is how we
determine the accuracy of our model. If you look at the comparison results
in our paper, you will see that we compare the results of our learning model
against runs whose policies (or chunk sizes or prefetching distances) were
set manually to the different candidates.
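The evaluation idea just described (label each loop with the policy that
actually ran faster, then score the model against those measured winners)
might look like this in Python; the `time.sleep` calls are hypothetical
stand-ins for real loop executions.

```python
import time

def measure(fn):
    """Wall-clock time of one call."""
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

def measured_best_policy(seq_run, par_run):
    """Label a loop with whichever policy actually ran faster."""
    return "seq" if measure(seq_run) <= measure(par_run) else "par"

def accuracy(predictions, loops):
    """Fraction of loops where the model's choice matches the measured winner."""
    hits = sum(1 for pred, (s, p) in zip(predictions, loops)
               if pred == measured_best_policy(s, p))
    return hits / len(loops)

# Hypothetical stand-ins: sleep durations play the role of execution times,
# one (seq_run, par_run) pair per loop.
seq_wins = (lambda: time.sleep(0.001), lambda: time.sleep(0.01))
par_wins = (lambda: time.sleep(0.01), lambda: time.sleep(0.001))
print(accuracy(["seq", "par"], [seq_wins, par_wins]))  # 1.0 here
```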

Thanks,
Zahra

On Tue, Feb 27, 2018 at 7:39 PM Gabriel Laberge 
wrote:

> Hi,
> I think the last time I asked you this question I wasn't very clear, so
> I will clarify. This could be due to the fact that I'm new to machine
> learning.
>
>  From my understanding, if you want to train a binary logistic
> regression, you need a matrix Xdata which represent training features
> and you need Ydata which is a vector representing the expected output
> (0 or 1) for each of the training examples. For example, if I want to
> train a regression to identify cat pictures, I need Xdata which
> represents a set of 'm' cat pictures and Ydata which is a vector of 0
> and 1 if the corresponding picture is a cat.
>
> I assume the same logic applies with a logistic regression used to
> find the execution policy (seq or par)
> Xdata represents the inputs features for the examples and Ydata
> represent the expected output (seq or par) for each of the training
> examples.
>
>  From our previous discussion, It seemed like you told me that the
> data was only constituted of input features (Xdata) which would mean
> that we don't know the optimal policy on our data set. If this is the
> case, I simply don't understand how you can train a regression.
>
> My analogy would be that you can't train a regression to recognize cat
> pictures if you don't tell it which ones are cats and which aren't.
>
> Could you tell me if I'm wrong?
>
> If I'm right then I wonder how do you find the optimal policy for each
> of the training examples.
>
> Thank you very much.
> Gabriel.
>
>
> --


Re: [hpx-users] Continuous regressions instead of multi-class classification

2018-03-06 Thread Zahra Khatami
Hi Gabriel,

Sorry for my late responses; I am not at the university anymore and am
working at Oracle, so I am kept fairly busy with my ongoing projects here.
But I would like to help you understand the concepts as much as possible,
and you can also count on Patrick and Dr. Kaiser if you find me hard to
reach ;)
About your question: yes, you can choose different candidate values and even
different numbers of candidates, but they should be chosen wisely. You can
run tests on your chosen candidates to see how your learning model behaves
with them; if random candidates are chosen, your model will not perform as
expected!
You are also free to choose a different learning model. However, our main
focus in this project is to use online machine learning methods, so that it
will no longer be required to collect offline data: the system learns as it
receives new data over time. For the first phase of this project, though,
please continue working on and studying different learning models, and maybe
try different candidates.
If possible, and if you have time, you could also start writing your
proposal, so we could finalize it together. That is the best way to clarify
the steps of this project.
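Treating prefetching distance as a continuous target, as raised in the
quoted question, could be prototyped with ordinary least squares; the
size/distance pairs below are invented for illustration, not measured data.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (one feature)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical training pairs: loop size -> best measured prefetching distance.
sizes     = [1_000, 5_000, 10_000, 50_000, 100_000]
distances = [10,    50,    100,    1_000,  5_000]
a, b = fit_line(sizes, distances)

# Unlike a 5-class model restricted to the candidate set, the fitted line
# can suggest any in-between value for an unseen loop size.
predicted = a * 25_000 + b
print(round(predicted))
```

Comparing such a regressor against the multinomial classifier would show
whether the extra precision actually pays off at runtime.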

Thanks,
Zahra

On Fri, Mar 2, 2018 at 4:46 PM Gabriel Laberge 
wrote:

> Hi,
> I ask you many question as of recently but that's because I'm really
> getting invested in the project.
>
>
>So, the multinomial regression used to find the optimal chunk
> size and prefetching distance is in actuality a multi-class
> classification algorithm. But I wanted to ask you whether chunk size and
> prefetching distance could be interpreted as continuous variables. For
> example, your regression can only return chunk sizes of 0.1%, 1%, 10% and
> 50%. But do chunk sizes of 34%, 23% or 56% make sense? I'm not sure.
> Also, for prefetching distance, the values used are 10, 50, 100, 1000 and
> 5000. But would values of 543, 23 or 4851 also make sense?
>
>If yes, I believe those variables could be treated as 'almost'
> continuous variables, so a regression algorithm could be used instead
> of a classification one. This would allow more precise values,
> since the regression could (in the case of prefetching distance)
> output any integer between 0 and 5000. Such a regression could be
> compared to the multinomial regression model in order to find out
> whether such precision is really required.
>
> Thank you very much.
> Gabriel.
>
>
> --
Best Regards,

*Zahra Khatami* | Member of Technical Staff
Virtual OS
Oracle
400 Oracle Parkway
Redwood City, CA 94065


Re: [hpx-users] HPX Smart executors for_each and for_loop

2018-03-13 Thread Zahra Khatami
Hi Gabriel,

They are tested on HPX for loop.

Zahra

On Tue, Mar 13, 2018 at 9:27 AM Gabriel Laberge 
wrote:

> Hi,
> in the article [0] http://stellar.cct.lsu.edu/pubs/khatami_espm2_2017.pdf
> smart executors adaptive_chunk_size and make_prefetcher_distance are
> tested as execution policies on for_each loops. I was wondering if
> they also work as execution policies of HPX's for_loop.
>
> Thank you.
> Gabriel.
>
>
>
> ___
> hpx-users mailing list
> hpx-users@stellar.cct.lsu.edu
> https://mail.cct.lsu.edu/mailman/listinfo/hpx-users
>


Re: [hpx-users] HPX Smart executors for_each and for_loop

2018-03-15 Thread Zahra Khatami
For HPX loops we have hpx::for_each, so with using namespace hpx in your
code you can simply use for_each, right?

Zahra

On Thu, Mar 15, 2018 at 8:39 AM Patrick Diehl 
wrote:

> Sometimes the documentation is not aligned with the current version of
> the code and things are missing there.
>
> On 14/03/18 08:26 PM, Gabriel Laberge wrote:
> > Also I wonder, why the smart executors make_prefetcher_policy and
> > adaptive_chunk_size are not present on the list of execution policies
> > [1]
> https://stellar-group.github.io/hpx/docs/html/hpx/manual/parallel/executor_parameters.html
> >
> > Gabriel Laberge  a écrit :
> >
> >> Ok thank you very much.
> >> But I'm wondering why for_each loops are used in the article?
> >> When code is shown in the article smart executors are used as
> >> execution policies in for_each loops.
> >> Is that something that was previously implemented?
> >> Zahra Khatami  a écrit :
> >>
> >>> Hi Gabriel,
> >>>
> >>> They are tested on HPX for loop.
> >>>
> >>> Zahra
> >>>
> >>> On Tue, Mar 13, 2018 at 9:27 AM Gabriel Laberge <
> gabriel.labe...@polymtl.ca>
> >>> wrote:
> >>>
> >>>> Hi,
> >>>> in the article [0]
> http://stellar.cct.lsu.edu/pubs/khatami_espm2_2017.pdf
> >>>> smart executors adaptive_chunk_size and make_prefetcher_distance are
> >>>> tested as execution policies on for_each loops. I was wondering if
> >>>> they also work as execution policies of HPX's for_loop.
> >>>>
> >>>> Thank you.
> >>>> Gabriel.
> >>>>
> >>>>
> >>>>
> >>>> ___
> >>>> hpx-users mailing list
> >>>> hpx-users@stellar.cct.lsu.edu
> >>>> https://mail.cct.lsu.edu/mailman/listinfo/hpx-users
> >>>>
> >>> --
> >>> Best Regards,
> >>>
> >>> *Zahra Khatami* | Member of Technical Staff
> >>> Virtual OS
> >>> Oracle
> >>> 400 Oracle Parkway
> >>> Redwood City, CA 94065
> >>
> >>
> >>
> >> ___
> >> hpx-users mailing list
> >> hpx-users@stellar.cct.lsu.edu
> >> https://mail.cct.lsu.edu/mailman/listinfo/hpx-users
> >
> >
> >
> > ___
> > hpx-users mailing list
> > hpx-users@stellar.cct.lsu.edu
> > https://mail.cct.lsu.edu/mailman/listinfo/hpx-users
> >
> ___
> hpx-users mailing list
> hpx-users@stellar.cct.lsu.edu
> https://mail.cct.lsu.edu/mailman/listinfo/hpx-users
>


[hpx-users] Full Time Positions at Oracle

2018-11-19 Thread Zahra Khatami
Hello HPX users!

There are several full-time open positions in the database group at Oracle
(San Francisco Bay Area) for students graduating in Summer 2019. If you are
interested, please send me your resume.

Best Regards,

*Zahra Khatami* | Member of Technical Staff
Virtual OS
Oracle
400 Oracle Parkway
Redwood City, CA 94065