Yongkee,

> Thank you for the references. I will take a look at the benchmarks and the
> publications. Hartmut's talks at CppCon brought me to HPX, and I have just
> finished watching the HPX tutorials at CSCS by Thomas and John. I also want
> to thank you for the great lectures and tutorials about HPX.
>
> While I will be trying to learn more from the benchmarks and publications,
> let me ask a few more specific questions. First of all, is the coroutine
> implementation in HPX about the same as the C++ coroutines discussed in the
> TS, which are stackless and rely solely on the compiler for transformations
> and optimizations, or is there anything more to it in HPX?

HPX can do both, stackful and stackless coroutines. The main HPX scheduler 
relies on stackful coroutines; those, however, are not directly visible to the 
user. They are disguised as HPX threads.

You can build stackless coroutines in your code on top of that regardless.

We have tried to adapt co_await to our stackful coroutines.
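
To illustrate what "disguised as HPX threads" means: any plain function 
running on an HPX thread can suspend in the middle of an ordinary call chain 
(e.g. inside future::get()) without any compiler transformation, because the 
scheduler switches out the whole stack. A minimal sketch (the function names 
are made up for the example):

  #include <hpx/hpx_main.hpp>   // runs main() as an HPX thread
  #include <hpx/hpx.hpp>
  #include <iostream>

  int helper()
  {
      // a plain function, no co_await, no special return type: get() suspends
      // the surrounding HPX thread (the stackful coroutine underneath), and
      // the worker OS thread goes on executing other HPX threads meanwhile
      hpx::future<int> f = hpx::async([]() { return 21; });
      return 2 * f.get();
  }

  int main()
  {
      std::cout << hpx::async(&helper).get() << "\n";    // prints 42
      return 0;
  }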

> Also, could any of you point out whether there is any example with
> coroutines and active messages? I found a few with await, but unfortunately
> fibonacci_await failed, as noted in its CMakeLists.txt, with an exception
> (what(): abandoning not ready shared state: HPX(broken_promise)). I also
> found transpose_await but haven't had a chance to run it.

Ok, thanks for letting us know. The co_await support in HPX is experimental at 
best and was tested (if at all) with MSVC only. Also, that was a while back, 
so the spec might have changed and our implementation may no longer be correct 
for today's compilers. Which compiler do you use?

>  More examples are always better, so please let me know if there are any
> more examples for coroutines and active messages.

As said, HPX does not directly expose coroutines; it uses a (stackful) 
coroutine-like mechanism for scheduling its lightweight threads. Everything in 
HPX relies on essentially two scheduling functions:

Fire&forget: create a new thread and continue:

  hpx::apply([exec, ]f, args...);
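
For instance, a minimal self-contained sketch (the function name is made up):

  #include <hpx/hpx_main.hpp>   // runs main() as an HPX thread
  #include <hpx/hpx.hpp>
  #include <iostream>

  void log_message(int i)
  {
      std::cout << "hello from task " << i << "\n";
  }

  int main()
  {
      // schedules log_message(42) on a new HPX thread and returns
      // immediately; there is no handle to wait on or to get a result from
      hpx::apply(&log_message, 42);

      // nothing waits for the task here; the runtime keeps running until all
      // scheduled HPX threads have finished before it shuts down
      return 0;
  }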

Create a new thread, getting a future back for extracting the result:

  hpx::future<T> fut = hpx::async([exec, ]f, args...);
  // ...
  T result = fut.get();
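
In full, again as a small sketch with made-up names:

  #include <hpx/hpx_main.hpp>
  #include <hpx/hpx.hpp>
  #include <iostream>

  int square(int x) { return x * x; }

  int main()
  {
      // run square(7) on a new lightweight HPX thread
      hpx::future<int> fut = hpx::async(&square, 7);

      // get() suspends only the calling HPX thread until the result is
      // ready; the underlying OS worker thread is not blocked
      std::cout << fut.get() << "\n";    // prints 49
      return 0;
  }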

Our co_await implementation wraps our hpx::futures:

  hpx::future<T> fut = hpx::async([exec, ]f, args...);
  // ...
  T result = co_await fut;
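
Roughly, the intent is something like the following untested sketch (it 
assumes a build with the experimental await support enabled and a 
coroutines-TS capable compiler; the function names are made up):

  #include <hpx/hpx_main.hpp>
  #include <hpx/hpx.hpp>

  int square(int x) { return x * x; }

  // hpx::future<int> serves both as the coroutine return type and as the
  // awaitable: each co_await suspends the coroutine until the future is ready
  hpx::future<int> squared_sum(int a, int b)
  {
      int x = co_await hpx::async(&square, a);
      int y = co_await hpx::async(&square, b);
      co_return x + y;
  }

  int main()
  {
      return squared_sum(3, 4).get() == 25 ? 0 : 1;
  }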

Active messages enter the picture if either the apply or the async above 
targets a remote object.
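
For completeness, a sketch of the remote case (again with made-up function 
names; run on a single locality it simply sends the message to itself):

  #include <hpx/hpx_main.hpp>
  #include <hpx/hpx.hpp>
  #include <iostream>

  int square(int x) { return x * x; }

  // turns square() into a remotely invocable action, i.e. the payload of an
  // active message
  HPX_PLAIN_ACTION(square, square_action);

  int main()
  {
      // here we target our own locality; with networking enabled this could
      // just as well be any id returned by hpx::find_remote_localities()
      hpx::naming::id_type target = hpx::find_here();

      // the same async() as before, but with an action and a target id: the
      // arguments are serialized and shipped as an active message, and the
      // result comes back through the future
      hpx::future<int> fut = hpx::async(square_action(), target, 6);
      std::cout << fut.get() << "\n";    // prints 36
      return 0;
  }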

HTH
Regards Hartmut
---------------
http://stellar.cct.lsu.edu
https://github.com/STEllAR-GROUP/hpx


>
> Thanks,
> Yongkee
>
> On Mon, Jan 28, 2019 at 12:26 AM Thomas Heller
> <thom.hel...@gmail.com> wrote:
> Hi Yongkee,
>
> In addition to the performance tests, we published a wide range of
> papers looking at the performance. Please have a look here:
> http://stellar-group.org/publications/
>
> On Sun, Jan 27, 2019 at 6:16 PM Hartmut Kaiser
> <hartmut.kai...@gmail.com> wrote:
> >
> > Hey Yongkee,
> >
> > Thanks for your interest in HPX!
> >
> > > While I was looking for a programming model and runtime system which
> > > supports both active messages and coroutines, I got to know HPX and now
> > > I am trying to learn it with the nice tutorials.
> > >
> > > I haven't (and can't) decided yet whether I am going with HPX for my
> > > research, since I am not so sure whether HPX is competitive in terms of
> > > its runtime performance (or overhead) as compared to others, if any.
> > >
> > > For example, I am wondering what the differences are between HPX
> > > coroutines and the LLVM implementation in libc++, which is also getting
> > > to a pretty stable stage, I believe. For active messages I am not much
> > > aware of others, but I remember UPC or UPC++ is designed as a PGAS
> > > language.
> > >
> > > HPX is still the best candidate for my research because it supports all
> > > the fun features within a single framework. But before going further, it
> > > would be great for me to see any study about how lightweight and
> > > scalable the runtime system is, especially in terms of both features:
> > > active messages and coroutines.
> > >
> > > Please let me know if there is any prior study for me. Also, any comment
> > > with regard to my concerns above would be greatly appreciated!
> >
> > We have a couple of benchmarks here:
> > https://github.com/STEllAR-GROUP/hpx/tree/master/tests/performance/local
> > That's where you might be interested in starting your investigations.
> >
> > HTH
> > Regards Hartmut
> > ---------------
> > http://stellar.cct.lsu.edu
> > https://github.com/STEllAR-GROUP/hpx
> >
> >
> >
> >


