Steve,

> That implementation worked well for me.
> 
> But just a little detail:
> If I want to use a vector of shared_future, the only way I got it working is
> as follows:
> 
> auto temp_out = hpx::split_future(hpx::dataflow(act, locality, inputs),
>                                   out_count);
> for (int o = 0; o < out_count; o++)
>     outputs[o] = std::move(temp_out[o]);
> 
> Which I guess is fine…
> 
> But to make it more compact I also tried to force the use of
> shared_future by doing:
> hpx::split_future(hpx::shared_future<std::vector<Something>>(
>     std::move(hpx::dataflow(act, locality, inputs))), out_count)
> 
> And I still get a vector of future, not a vector of shared_future. Also the
> std::move on the entire vector does not help. But I don't think it's a big
> deal.

That design is intentional. No matter what future you give to split_future 
(shared or not), it gives you a container of (unique) futures. This shouldn't 
be a problem, though, as you can turn any unique future into a shared one by 
simply calling .share():

    hpx::future<T> f = ...;
    hpx::shared_future<T> sf = f.share();  // note: invalidates 'f'

Alternatively, a shared_future<T> is implicitly constructible from a future<T>:

    hpx::shared_future<T> sf{std::move(f)};

I don't think there is any need for the acrobatics you tried above.

> About channels
> The only example I've found of usage of "remote" channels is
> this: https://stellar-group.github.io/hpx/docs/html/hpx/manual/lcos.html
> 
> In the last example of the Channel paragraph the creator of the channel is
> “sending” a series of values from a vector.
> I guess the receiver is supposed to know how many elements it is going to
> receive...
> 
> But what if I want to send a structure or an std::vector over a channel?
> (since I don't think it is efficient to send bytes one by one)
> I tried to register the same datatype (Payload) that I use in my
> HPX_PLAIN_ACTIONS (successfully) but I get an error like:
> 
> —————————
> use of undeclared
>       identifier '__channel_Payload'
> HPX_REGISTER_CHANNEL(Payload);
> 
> hpx_install/include/hpx/lcos/server/channel.hpp:150:5: note:
>       expanded from macro 'HPX_REGISTER_CHANNEL'
>     HPX_REGISTER_CHANNEL_(__VA_ARGS__)
> 
> hpx_install/include/hpx/lcos/server/channel.hpp:153:19: note:
>       expanded from macro 'HPX_REGISTER_CHANNEL_'
>     HPX_PP_EXPAND(HPX_PP_CAT(
>     \
>                   ^
> hpx_install/include/hpx/util/detail/pp/cat.hpp:21:30: note:
>       expanded from macro 'HPX_PP_CAT'
> #    define HPX_PP_CAT(a, b) HPX_PP_CAT_I(a, b)
>                              ^
> note: (skipping 4 expansions in backtrace; use -fmacro-backtrace-limit=0
> to see all)
> hpx_install/include/hpx/util/detail/pp/cat.hpp:21:30: note:
>       expanded from macro 'HPX_PP_CAT'
> hpx_install/include/hpx/util/detail/pp/cat.hpp:29:32: note:
>       expanded from macro 'HPX_PP_CAT_I'
> #    define HPX_PP_CAT_I(a, b) a ## b
>                                ^
> <scratch space>:9:1: note: expanded from here
> __channel_DataFlow

Could you give us a small reproducing example, please? I think this is a bug; 
however, I believe we have fixed it recently (see #2870). What version of HPX 
is this happening with?

> —————————
> 
> The class Payload simply implements serialize and contains a
> std::vector<char>, as follows:
> 
> ——
> std::vector<char> mBuffer;
> 
> friend class hpx::serialization::access;
> 
> template <typename Archive>
> void serialize(Archive& ar, const unsigned int version)
> {
>     ar & mBuffer;
> }
> 
> ——
> 
> This datatype is fine for asynchronous remote actions.
> So I wonder: is it allowed to send std::vectors over channels?

Yes, definitely. Just create a channel<vector<T>> and you will be able to send 
vectors of T's (channel<Payload> should work as well).

Regards Hartmut
---------------
http://boost-spirit.com
http://stellar.cct.lsu.edu


> Thanks,
> Steve
> 
> 
> 
> On 10 Oct 2017, at 10:51, Hartmut Kaiser <hartmut.kai...@gmail.com> wrote:
> 
> Steve,
> 
> Please see #2942 for a first implementation of split_future for
> std::vector. Please let us know on the ticket if this solves your problem.
> We will merge things to master as soon as you're fine with it.
> 
> Thanks!
> Regards Hartmut
> ---------------
> http://boost-spirit.com
> http://stellar.cct.lsu.edu
> 
> 
> 
> -----Original Message-----
> From: Steve Petruzza [mailto:spetru...@sci.utah.edu]
> Sent: Tuesday, October 10, 2017 8:46 AM
> To: hartmut.kai...@gmail.com
> Cc: hpx-users@stellar.cct.lsu.edu
> Subject: Re: [hpx-users] Strong scalability of hpx dataflow and async
> 
> Yes that would be very useful.
> And yes I know upfront the size.
> 
> Thank you!
> Steve
> 
> 
> On Oct 10, 2017, at 7:40 AM, Hartmut Kaiser <hartmut.kai...@gmail.com>
> wrote:
> 
> 
> Steve,
> 
> 
> Your suggestions are already very useful. This channels mechanism
> looks
> 
> awesome, I will give it a try.
> 
> One other thing, where I can actually give you a code example, is the
> following:
> - an async function returns a future of a vector
> - I need to dispatch the single elements of this vector as separate
> futures, because those will be used (separately) by other async
> functions
> 
> 
> Here is what I am doing right now:
> 
> hpx::future<std::vector<Something>> out_v = hpx::dataflow(exe_act,
>     locality, inputs);
> 
> std::vector<hpx::future<Something>> outputs_fut(out_count);
> 
> for (int i = 0; i < out_count; i++) {
>     outputs_fut[i] = hpx::dataflow(
>         [i, &out_v]() -> Something
>         {
>             return out_v.get()[i];
>         });
> }
> 
> This solution works but I think that the loop is just creating a bunch of
> useless async calls just to take out one of the elements as a single
> future.
> 
> 
> Is there a better way of doing this? Basically to pass from a
> future<vector> to a vector<future> in HPX?
> 
> We do have the split_future facility doing exactly that, but just for
> containers with a size known at compile time (pair, tuple, array), see
> here: https://github.com/STEllAR-GROUP/hpx/blob/master/hpx/lcos/split_future.hpp.
> Frankly, I'm not sure anymore why we have not added the same for
> std::vector as well. From looking at the code it should just work to do
> something similar to what we've implemented for std::array. I opened a
> new ticket to remind me to implement split_future for std::vector
> (https://github.com/STEllAR-GROUP/hpx/issues/2940).
> 
> After looking into this a bit more I now understand why we have not
> implemented split_future for std::vector. Please consider:
> 
> 
>   std::vector<future<T>>
>       split_future(future<std::vector<T>> && f);
> 
> in order for this to work efficiently we need to know how many elements
> are stored in the input vector without waiting for the future to become
> ready (as waiting for the future to become ready just for this would
> defeat the purpose). But we have no way of knowing how many elements will
> be held by the vector before that.
> 
> 
> What I could do is to implement:
> 
>   std::vector<future<T>>
>       split_future(future<std::vector<T>> && f, std::size_t size);
> 
> (with 'size' specifying the number of elements the vector is expected to
> hold) as in some circumstances you know upfront how many elements to
> expect.
> 
> 
> Would that be of use to you?
> 
> Thanks!
> Regards Hartmut
> ---------------
> http://boost-spirit.com
> http://stellar.cct.lsu.edu
> 
> 
> 
> 
> Regards Hartmut
> ---------------
> http://boost-spirit.com
> http://stellar.cct.lsu.edu
> 
> 
> 
> Thank you,
> Steve
> 
> p.s.: I also tried to use an action which runs on the same locality for
> the second dataflow.
> 
> On 9 Oct 2017, at 16:56, Hartmut Kaiser <hartmut.kai...@gmail.com>
> wrote:
> 
> 
> Steve,
> 
> 
> The number of cores per node is 32, so the 8 threads * 4 cores should be
> fine (I tried many variants anyway).
> 
> The SPMD implementation seems like the way to go, but after I split my
> tasks into different localities how can I express data dependencies
> between them?
> 
> Let's say that I have tasks 0-10 in locality A and tasks 11-21 in
> locality B. Now, the task 15 (in locality B) requires some data produced
> by task 7 (in locality A).
> 
> Should I encode these data dependencies in terms of futures when I split
> the tasks into the two localities?
> 
> Yes, either send the future over the wire (which might have surprising
> effects as we wait for the future to become ready before we actually
> send it) or use any other means of synchronizing between the two
> localities; usually a channel is a nice way of accomplishing this. You
> can either send the channel over to the other locality or use the
> register_as()/connect_to() functionality exposed by it:
> 
>  // locality 1
>  hpx::lcos::channel<T> c(hpx::find_here());
>  c.register_as("some-unique-name");  // careful: returns a future<void>
>  c.set(T{});                         // returns a future too
> 
>  // locality 2
>  hpx::lcos::channel<T> c;
>  c.connect_to("some-unique-name");   // careful: returns a future<void>
> 
>  // this might wait for c to become valid before calling get()
>  hpx::future<T> f = c.get();
> 
> On locality 2, 'f' becomes ready as soon as c.set() was called on
> locality 1. While it does not really matter on what locality you create
> the channel (here defined by hpx::find_here()), I'd advise creating it
> on the receiving end of the pipe.
> 
> If you gave us some example code we would be able to advise more
> concretely.
> 
> 
> Regards Hartmut
> ---------------
> http://boost-spirit.com
> http://stellar.cct.lsu.edu
> 
> 
> 
> 
> Steve
> 
> 
> 
> 
> 
> 
> 
> On 9 Oct 2017, at 15:37, Hartmut Kaiser <hartmut.kai...@gmail.com>
> wrote:
> 
> 
> SPMD
> 
> 


_______________________________________________
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users
