Re: [OMPI users] Do idle MPI threads consume clock cycles?

2019-02-25 Thread Howard Pritchard
Hello Mark,

You may want to checkout this package:

https://github.com/lanl/libquo

Another option would be to use an MPI_Ibarrier in the application: all MPI
processes except rank 0 would post the barrier and then loop, testing for
its completion and sleeping between tests.  Once rank 0 had completed the
OpenMP work, it would enter the barrier itself and wait for completion.
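A minimal sketch of that pattern (untested; assumes a communicator "comm"
covering all ranks, and a 1 ms sleep chosen arbitrarily):

#include <mpi.h>
#include <unistd.h>   /* usleep */

void quiet_phase(MPI_Comm comm)
{
    int rank, done = 0;
    MPI_Request req;

    MPI_Comm_rank(comm, &rank);

    if (rank == 0) {
        /* ... the OpenMP-parallel work runs here, on rank 0 only ... */
        MPI_Ibarrier(comm, &req);          /* rank 0 enters the barrier last */
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    } else {
        MPI_Ibarrier(comm, &req);          /* other ranks post the barrier early */
        while (!done) {
            MPI_Test(&req, &done, MPI_STATUS_IGNORE);
            if (!done)
                usleep(1000);              /* sleep instead of spinning */
        }
    }
}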

This type of problem may be helped in a future MPI that supports the notion
of MPI Sessions.  With this approach, you would initialize one MPI session
for normal messaging behavior, using polling for fast processing of
messages; your MPI library would use this for its existing messaging.  You
could initialize a second MPI session that uses blocking methods for message
receipt, and use a communicator derived from that second session for the
Ibarrier-with-sleep loop described above.
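A rough sketch of what the two-session setup might look like, assuming an
MPI library that implements Sessions (the API later standardized in MPI
4.0); the info key requesting blocking rather than polling progress is
purely hypothetical, since no standard key exists for that:

#include <mpi.h>

/* Sketch only: one session for normal (polling) messaging, a second session
 * intended for blocking message receipt, and a communicator derived from
 * the second session for the Ibarrier-with-sleep loop. */
MPI_Comm make_blocking_comm(MPI_Session *normal, MPI_Session *blocking)
{
    MPI_Group group;
    MPI_Comm comm;
    MPI_Info info;

    /* Session 1: default behavior, used for the existing messaging. */
    MPI_Session_init(MPI_INFO_NULL, MPI_ERRORS_RETURN, normal);

    /* Session 2: hypothetical, implementation-specific info key asking for
     * blocking rather than polling message progress. */
    MPI_Info_create(&info);
    MPI_Info_set(info, "mpi_blocking_progress", "true");   /* hypothetical key */
    MPI_Session_init(info, MPI_ERRORS_RETURN, blocking);
    MPI_Info_free(&info);

    /* Communicator over all processes, derived from the second session. */
    MPI_Group_from_session_pset(*blocking, "mpi://WORLD", &group);
    MPI_Comm_create_from_group(group, "example:blocking-comm",
                               MPI_INFO_NULL, MPI_ERRORS_RETURN, &comm);
    MPI_Group_free(&group);
    return comm;
}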

Good luck,

Howard


On Thu, Feb 21, 2019 at 11:25, Mark McClure <mark.w.m...@gmail.com> wrote:

> I have the following, rather unusual, scenario...
>
> I have a program running with OpenMP on a multicore computer. At one point
> in the program, I want to use an external package that is written to
> exploit MPI, not OpenMP, parallelism. So a (rather awkward) solution could
> be to launch the program under MPI, but do most of the work in a single MPI
> process that uses OpenMP (i.e., run my current program in a single MPI
> process). Then, when I get to the part where I need the external package,
> distribute the information out to all the MPI processes, run the package
> across all of them, and then pull the results back to the master process.
> This is awkward, but probably better than my current approach, which is
> running the external package on a single processor (i.e., not exploiting
> parallelism in this time-consuming part of the code).
>
> If I use this strategy, I fear that the idle MPI processes may be
> consuming clock cycles while I am running the rest of the program on the
> master process with OpenMP. Thus, they may compete with the OpenMP threads.
> OpenMP does not shut down its threads between every pragma, but
> OMP_WAIT_POLICY can be set to put idle threads to sleep (actually, this is
> the default behavior). I
> have not been able to find any equivalent documentation regarding the
> behavior of idle threads in MPI.
>
> Best regards,
> Mark
>

Re: [hwloc-users] Build warnings with hwloc-2.0.3

2019-02-25 Thread Balaji, Pavan via hwloc-users
Hi Brice,

> On Feb 25, 2019, at 2:27 AM, Brice Goglin  wrote:
> Are you sure you're not passing -Wstack-usage? My Ubuntu 18.04 with
> latest gcc-7 (7.3.0-27ubuntu1~18.04) doesn't show any of those warnings.

Yes, you are right: -Wstack-usage was explicitly added as well.  Sorry, I missed
the fact that it is not enabled by default by -Wall.

> It looks like all these warnings are caused by C99 variable-length
> arrays (except 2 that I don't understand). I know the kernel devs
> stopped using VLA recently, and it looks like C11 made them optional.
> But are we really supposed to stop using VLA already?

They are optional, which means we cannot assume them, for portability reasons.
FWIW, we have made the rest of MPICH -Wstack-usage clean.
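As an illustration only (a hypothetical function, not taken from the hwloc
sources), the kind of rewrite involved is replacing the VLA with an explicit
allocation so the compiler can bound the frame size:

#include <stdlib.h>
#include <string.h>

/* VLA version: the frame size depends on n, so gcc's -Wstack-usage reports
 * "stack usage might be unbounded".
 *
 *   void fill(unsigned n) { char buf[n]; memset(buf, 0, n); ... }
 */

/* Heap version: bounded stack usage, and works even where C11 VLAs are
 * not supported. */
static int fill(unsigned n)
{
    char *buf = malloc(n);
    if (!buf)
        return -1;
    memset(buf, 0, n);
    /* ... use buf ... */
    free(buf);
    return 0;
}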

  -- Pavan



Re: [OMPI users] HDF5 1.10.4 "make check" problems w/OpenMPI 3.1.3

2019-02-25 Thread Peter Kjellström
FYI, just noticed this post from the HDF Group:

https://forum.hdfgroup.org/t/hdf5-and-openmpi/5437

/Peter K



Re: [hwloc-users] Build warnings with hwloc-2.0.3

2019-02-25 Thread Brice Goglin
Hello Pavan,

Are you sure you're not passing -Wstack-usage? My Ubuntu 18.04 with
latest gcc-7 (7.3.0-27ubuntu1~18.04) doesn't show any of those warnings.

It looks like all these warnings are caused by C99 variable-length
arrays (except 2 that I don't understand). I know the kernel devs
stopped using VLA recently, and it looks like C11 made them optional.
But are we really supposed to stop using VLA already?

Brice



On 25/02/2019 at 02:07, Balaji, Pavan via hwloc-users wrote:
> Folks,
>
> I'm getting the below build warnings with hwloc-2.0.3, gcc-7.3 on Ubuntu 
> (with -Wall -O2):
>
> 8<
> ../../../../../../../../../mpich/src/pm/hydra/tools/topo/hwloc/hwloc/hwloc/distances.c: In function 'hwloc__groups_by_distances':
> ../../../../../../../../../mpich/src/pm/hydra/tools/topo/hwloc/hwloc/hwloc/distances.c:817:1: warning: stack usage might be unbounded [-Wstack-usage=]
>  hwloc__groups_by_distances(struct hwloc_topology *topology,
>  ^~
> ../../../../../../../../../mpich/src/pm/hydra/tools/topo/hwloc/hwloc/hwloc/topology.c: In function 'hwloc_propagate_symmetric_subtree':
> ../../../../../../../../../mpich/src/pm/hydra/tools/topo/hwloc/hwloc/hwloc/topology.c:2388:1: warning: stack usage might be unbounded [-Wstack-usage=]
>  hwloc_propagate_symmetric_subtree(hwloc_topology_t topology, hwloc_obj_t root)
>  ^
> ../../../../../../../../../mpich/src/pm/hydra/tools/topo/hwloc/hwloc/hwloc/topology-synthetic.c: In function 'hwloc_synthetic_process_indexes':
> ../../../../../../../../../mpich/src/pm/hydra/tools/topo/hwloc/hwloc/hwloc/topology-synthetic.c:71:1: warning: stack usage might be unbounded [-Wstack-usage=]
>  hwloc_synthetic_process_indexes(struct hwloc_synthetic_backend_data_s *data,
>  ^~~
> ../../../../../../../../../mpich/src/pm/hydra/tools/topo/hwloc/hwloc/hwloc/topology-xml.c: In function 'hwloc__xml_export_object_contents':
> ../../../../../../../../../mpich/src/pm/hydra/tools/topo/hwloc/hwloc/hwloc/topology-xml.c:1920:1: warning: stack usage might be unbounded [-Wstack-usage=]
>  hwloc__xml_export_object_contents (hwloc__xml_export_state_t state, hwloc_topology_t topology, hwloc_obj_t obj, unsigned long flags)
>  ^
> ../../../../../../../../../mpich/src/pm/hydra/tools/topo/hwloc/hwloc/hwloc/topology-linux.c: In function 'hwloc_linux_get_area_membind':
> ../../../../../../../../../mpich/src/pm/hydra/tools/topo/hwloc/hwloc/hwloc/topology-linux.c:1883:1: warning: stack usage might be unbounded [-Wstack-usage=]
>  hwloc_linux_get_area_membind(hwloc_topology_t topology, const void *addr, size_t len, hwloc_nodeset_t nodeset, hwloc_membind_policy_t *policy, int flags __hwloc_attribute_unused)
>  ^~~~
> ../../../../../../../../../mpich/src/pm/hydra/tools/topo/hwloc/hwloc/hwloc/topology-linux.c: In function 'hwloc_linux_set_thisthread_membind':
> ../../../../../../../../../mpich/src/pm/hydra/tools/topo/hwloc/hwloc/hwloc/topology-linux.c:1737:1: warning: stack usage might be unbounded [-Wstack-usage=]
>  hwloc_linux_set_thisthread_membind(hwloc_topology_t topology, hwloc_const_nodeset_t nodeset, hwloc_membind_policy_t policy, int flags)
>  ^~
> ../../../../../../../../../mpich/src/pm/hydra/tools/topo/hwloc/hwloc/hwloc/topology-linux.c: In function 'hwloc_linux_get_thisthread_membind':
> ../../../../../../../../../mpich/src/pm/hydra/tools/topo/hwloc/hwloc/hwloc/topology-linux.c:1848:1: warning: stack usage might be unbounded [-Wstack-usage=]
>  hwloc_linux_get_thisthread_membind(hwloc_topology_t topology, hwloc_nodeset_t nodeset, hwloc_membind_policy_t *policy, int flags __hwloc_attribute_unused)
>  ^~
> ../../../../../../../../../mpich/src/pm/hydra/tools/topo/hwloc/hwloc/hwloc/topology-linux.c: In function 'hwloc_linux__get_allowed_resources':
> ../../../../../../../../../mpich/src/pm/hydra/tools/topo/hwloc/hwloc/hwloc/topology-linux.c:4426:13: warning: stack usage might be unbounded [-Wstack-usage=]
>  static void hwloc_linux__get_allowed_resources(hwloc_topology_t topology, const char *root_path, int root_fd, char **cpuset_namep)
>  ^~
> ../../../../../../../../../mpich/src/pm/hydra/tools/topo/hwloc/hwloc/hwloc/topology-x86.c: In function 'cpuiddump_read':
> ../../../../../../../../../mpich/src/pm/hydra/tools/topo/hwloc/hwloc/hwloc/topology-x86.c:69:1: warning: stack usage might be unbounded [-Wstack-usage=]
>  cpuiddump_read(const char *dirpath, unsigned idx)
>  ^~
> ../../../../../../../../../mpich/src/pm/hydra/tools/topo/hwloc/hwloc/hwloc/topology-x86.c: In function 'hwloc_x86_component_instantiate':
> ../../../../../../../../../mpich/src/pm/hydra/tools/topo/hwloc/hwloc/hwloc/topology-x86.c:1456:1: warning: stack usage might be unbounded