Re: [hpx-users] primary_namespace::resolve_grid

2017-05-23 Thread ct clmsn
Hartmut,

The code related to this error was derived from the sequential, recursive
dataflow code from last week. It sounds like a reasonable generalization of
these issues is that variables are going out of scope (that's a great answer
to the question!). I'll modify the test and send over the modified version
if there's still an issue.

Chris

On Sun, May 21, 2017 at 6:49 AM, Hartmut Kaiser 
wrote:

> Chris,
>
> Sorry for the late reply.
>
> > I've an application that is running a dataflow of asynchronous functions
> > over a linear sequence of doubles stored in a partitioned_vector.
> >
> > The application processes a fair number of elements before causing a
> > segmentation fault. Upon inspection of the core file, the segmentation
> > fault is happening when
> > hpx::agas::server::primary_namespace::resolve_gid(hpx::naming::gid_type)
> > () from libhpx.so.1 is called by the runtime.
> > I've compiled the HPX runtime using the MPI ParcelPort. The application
> is
> > running on a single remote node (using slurm for scheduling).
> > Any suggestions or recommendations of how to further debug the
> application
> > or any runtime flags to help further diagnose implementation errors would
> > be appreciated.
>
> That is very difficult to diagnose from a distance. It looks like
> something went out of scope too early and an attempt to access it went
> haywire. Can you send us the code so we can take a closer look?
>
> Regards Hartmut
> ---
> http://boost-spirit.com
> http://stellar.cct.lsu.edu
>
>
___
hpx-users mailing list
hpx-users@stellar.cct.lsu.edu
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users


Re: [hpx-users] primary_namespace::resolve_grid

2017-05-21 Thread Hartmut Kaiser
Chris,

Sorry for the late reply.

> I've an application that is running a dataflow of asynchronous functions
> over a linear sequence of doubles stored in a partitioned_vector.
> 
> The application processes a fair number of elements before causing a
> segmentation fault. Upon inspection of the core file, the segmentation
> fault is happening when
> hpx::agas::server::primary_namespace::resolve_gid(hpx::naming::gid_type)
> () from libhpx.so.1 is called by the runtime.
> I've compiled the HPX runtime using the MPI ParcelPort. The application is
> running on a single remote node (using slurm for scheduling).
> Any suggestions or recommendations of how to further debug the application
> or any runtime flags to help further diagnose implementation errors would
> be appreciated.

That is very difficult to diagnose from a distance. It looks like
something went out of scope too early and an attempt to access it went
haywire. Can you send us the code so we can take a closer look?

Regards Hartmut
---
http://boost-spirit.com
http://stellar.cct.lsu.edu

