Re: [hpx-users] Strong scalability of hpx dataflow and async

2017-10-11 Thread Steve Petruzza
Thanks Hartmut, it all makes sense now.
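
For reference, a minimal sketch of the two cases discussed in the quoted reply
below (assuming std::vector<char> as the payload behind the vec_char alias, and
the standard HPX headers of that release):

    #include <hpx/hpx_main.hpp>
    #include <hpx/include/lcos.hpp>
    #include <hpx/include/local_lcos.hpp>
    #include <vector>

    // Remote-capable channel: the macro argument must be a plain identifier,
    // so the template instance needs an alias first.
    typedef std::vector<char> vec_char;
    HPX_REGISTER_CHANNEL(vec_char);
    // HPX_REGISTER_CHANNEL(std::vector<char>);  // fails: '<', '>', ':' are
    //                                           // not valid in a symbol name

    int main()
    {
        // Local channel: intra-locality only, no registration macro needed.
        hpx::lcos::local::channel<vec_char> lc;
        lc.set(vec_char{'h', 'p', 'x'});
        vec_char v = lc.get().get();   // get() returns a future<vec_char>
        return v.size() == 3 ? 0 : 1;
    }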


> On 11 Oct 2017, at 14:51, Hartmut Kaiser  wrote:
> 
> 
>>> I think I’ve found a workaround.
>>> 
>>> If I use a typedef as following:
>>> 
>>> typedef std::vector<char> vec_char;
>>> 
>>> HPX_REGISTER_CHANNEL(vec_char);
>>> 
>>> It works, but if I try to use directly:
>>> HPX_REGISTER_CHANNEL(std::vector<char>)
>>> 
>>> this gives me the error I reported before.
>>> The issue might be in the expansion of the macro HPX_REGISTER_CHANNEL.
>> 
>> Yes, that confirms my suspicion. I will have a look at what's going on.
> 
> Doh! The problem is that the (literal) parameter you give to the macro has to 
> conform to the rules of a valid symbol name, i.e. no special characters are 
> allowed (no '<', '>', etc.). Sorry, this has to be documented properly 
> somewhere, and I forgot to mention it in the first place.
> 
> The 'workaround' you propose is the only way to circumvent the problem. There
> is nothing we can do about it.
> 
> Also, wrt your comment that everything works if you use
> hpx::lcos::local::channel instead - this is not surprising. The local channel
> type represents a channel which can be used only inside a given locality
> (no remote operation, just inter-thread/inter-task communication), hence its
> name. Those channels don't require the use of the ugly macros, thus there is
> no problem.
> 
> HTH
> Regards Hartmut
> ---
> http://boost-spirit.com
> http://stellar.cct.lsu.edu
> 
> 
>> 
>> Thanks!
>> Regards Hartmut
>> ---
>> http://boost-spirit.com
>> http://stellar.cct.lsu.edu
>> 
>> 
>>> 
>>> Steve
>>> 
>>> 
>>> 
>>> 
>>> On 10 Oct 2017, at 18:38, Steve Petruzza  wrote:
>>> 
>>> Sorry, regarding the version that I am using: it is the commit of your
>>> split_future for std::vector:
>>> 
>>>Adding split_future for std::vector
>>> 
>>>- this fixes #2940
>>> 
>>> commit 8ecf8197f9fc9d1cd45a7f9ee61a7be07ba26f46
>>> 
>>> Steve
>>> 
>>> 
>>> 
>>> On 10 Oct 2017, at 18:33, Steve Petruzza  wrote:
>>> 
>>> hpx::find_here()
>>> 
> 
> 

___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] Strong scalability of hpx dataflow and async

2017-10-11 Thread Hartmut Kaiser
Kilian,

> Following your discussion about remote dependencies
> between futures, I am left with two questions:
> 
> 1.) If we pass a future as an argument to a remote
> function call, is the actual call of the function delayed
> until the future becomes ready?
> 
> So, does:
> 
> hpx::future<T> arg = hpx::async(...);
> hpx::async(REMOTE_LOC, arg);
> 
> equal:
> 
> hpx::future<T> f = hpx::async(...);
> hpx::async(REMOTE_LOC, f.get());

Yes, but only for remote targets. For local targets this is not true. I 
wouldn't rely on it, though. This is an implementation detail which may change 
at any time without notice.
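
A minimal sketch of that behavior (produce, consume, and consume_action are
made up for illustration; REMOTE_LOC stands for whatever remote locality id
you would actually use):

    #include <hpx/hpx_main.hpp>
    #include <hpx/include/actions.hpp>
    #include <hpx/include/async.hpp>
    #include <hpx/include/runtime.hpp>
    #include <utility>

    int produce() { return 42; }

    // The action parameter is itself a future.
    int consume(hpx::future<int> f) { return f.get() + 1; }
    HPX_PLAIN_ACTION(consume, consume_action);

    int main()
    {
        hpx::future<int> arg = hpx::async(produce);

        // Stand-in for a remote locality id (e.g. one of
        // hpx::find_remote_localities()); find_here() keeps the sketch
        // runnable on a single locality.
        hpx::id_type REMOTE_LOC = hpx::find_here();

        // For a remote target the argument future is awaited before it is
        // serialized and sent - per the answer above, an implementation
        // detail that should not be relied upon.
        hpx::future<int> r =
            hpx::async(consume_action(), REMOTE_LOC, std::move(arg));
        return r.get() == 43 ? 0 : 1;
    }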

> 2.) When we encountered a similar problem, our solution
> was to aggregate the futures (of tasks A) inside AGAS-
> addressed components, distribute those addresses, and
> reference the remote (and local) futures our dataflows
> depended on as member variables of these components.
> How would such a solution compare to the channel-based
> approach with respect to performance and scalability?

Not sure, only a test case and some measurements can tell. Please let us know 
if you find out something interesting.

Regards Hartmut
---
http://boost-spirit.com
http://stellar.cct.lsu.edu


> 
> Thanks,
> 
> Kilian
> 
> On Mon, 9 Oct 2017 17:41:54 -0600
>   Steve Petruzza  wrote:
> > Thank you Hartmut,
> >
> > Your suggestions are already very useful. This channel
> > mechanism looks awesome, I will give it a try.
> >
> > One other thing, where I can actually give you a code
> > example, is the following:
> > - an async function returns a future of a vector
> > - I need to dispatch the single elements of this vector
> >   as separate futures, because those will be used
> >   (separately) by other async functions
> >
> > Here is what I am doing right now:
> >
> > hpx::future<std::vector<Something>> out_v =
> >    hpx::dataflow(exe_act, locality, inputs);
> >
> > std::vector<hpx::future<Something>>
> >    outputs_fut(out_count);
> >
> > for (int i = 0; i < out_count; i++) {
> >   outputs_fut[i] = hpx::dataflow(
> >     [i, &out_v]() -> Something
> >     {
> >       return out_v.get()[i];
> >     }
> >   );
> > }
> >
> > This solution works, but I think that the loop is just
> > creating a bunch of useless async calls just to take out
> > one of the elements as a single future.
> >
> > Is there a better way of doing this? Basically, to go
> > from a future of a vector to a vector of futures in HPX?
> >
> > Thank you,
> > Steve
> >
> > p.s.: I also tried to use an action which runs on the
> >same locality for the second dataflow.
> >
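The split_future overload for std::vector mentioned at the top of this thread
(commit 8ecf819, fixing #2940) targets exactly this pattern. A minimal sketch,
assuming that overload is available and using a stand-in produce() in place of
the exe_act dataflow above:

    #include <hpx/hpx_main.hpp>
    #include <hpx/include/async.hpp>
    #include <hpx/include/lcos.hpp>
    #include <cstddef>
    #include <utility>
    #include <vector>

    std::vector<double> produce(std::size_t n)   // stand-in for the exe_act call
    {
        return std::vector<double>(n, 3.14);
    }

    int main()
    {
        std::size_t const out_count = 8;

        hpx::future<std::vector<double>> out_v = hpx::async(produce, out_count);

        // One future per element, without spawning an extra task per element;
        // out_count must match the expected size of the result vector.
        std::vector<hpx::future<double>> outputs_fut =
            hpx::split_future(std::move(out_v), out_count);

        return outputs_fut[0].get() == 3.14 ? 0 : 1;
    }
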
> >> On 9 Oct 2017, at 16:56, Hartmut Kaiser
> >> wrote:
> >>
> >> Steve,
> >>
> >>> The number of cores per node is 32, so the 8 threads * 4
> >>>cores should be
> >>> fine (I tried many variants anyway).
> >>>
> >>> The SPMD implementation seems like the way to go, but
> >>>after I split my
> >>> tasks into different localities how can I express data
> >>>dependencies
> >>> between them?
> >>>
> >>> Let’s say that I have tasks 0-10 in locality A and tasks
> >>>11-21 in locality
> >>> B. Now, the task 15 (in locality B) requires some data
> >>>produced by task 7
> >>> (in locality A).
> >>>
> >>> Should I encode these data dependencies in terms of
> >>>futures when I split
> >>> the tasks into the two localities?
> >>
> >> Yes, either send the future over the wire (which might
> >> have surprising effects, as we wait for the future to
> >> become ready before we actually send it) or use any other
> >> means of synchronizing between the two localities;
> >> usually a channel is a nice way of accomplishing this.
> >> You can either send the channel over to the other
> >> locality or use the register_as()/connect_to()
> >> functionality exposed by it:
> >>
> >> // locality 1
> >> hpx::lcos::channel<T> c(hpx::find_here());
> >> c.register_as("some-unique-name");  // careful: returns a future
> >> c.set(T{});                         // returns a future too
> >>
> >> // locality 2
> >> hpx::lcos::channel<T> c;
> >> c.connect_to("some-unique-name");   // careful: returns a future
> >>
> >> // this might wait for c to become valid before calling get()
> >> hpx::future<T> f = c.get();
> >>
> >> On locality 2, 'f' becomes ready as soon as c.set() has
> >> been called on locality 1. While it does not really matter
> >> on what locality you create the channel (here defined by
> >> hpx::find_here()), I'd advise creating it on the
> >> receiving end of the pipe.
> >>
> >> If you gave us some example code, we would be able to
> >> advise more concretely.
> >>
> >> Regards Hartmut
> >> ---
> >> http://boost-spirit.com
> >> http://stellar.cct.lsu.edu
> >>
> >>
> >>>
> >>> Steve
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>>
> >>> On 9 Oct 2017, at 15:37, Hartmut Kaiser
> >>> wrote:
> >>>
> >>> SPMD
> >>
> >>
> >
> 