"I can't upgrade Open MPI on the computing nodes of this system" is false.
Open MPI can be installed entirely in userspace in your home directory.
If you read the MPI_Comm_create_group paper, there should be instructions
on how to implement this using MPI-2 features. Jim Dinan wrote a working
implementation.
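For what it's worth, here is a minimal sketch of such a fallback (my own
illustration, not Dinan's algorithm): it leans on MPI_Comm_create, which has
been there since MPI-1, but unlike the real MPI_Comm_create_group it must be
called by every rank of the parent communicator, not just the group members.

#include <mpi.h>

/* Sketch of a stand-in for MPI_Comm_create_group on pre-MPI-3 libraries.
 * Unlike the real MPI-3 routine (collective only over 'group'),
 * MPI_Comm_create is collective over all of 'comm', so every rank of
 * 'comm' must call this; ranks outside 'group' receive MPI_COMM_NULL. */
static int fallback_comm_create_group(MPI_Comm comm, MPI_Group group,
                                      int tag, MPI_Comm *newcomm)
{
    (void) tag;  /* the tag argument has no counterpart in MPI_Comm_create */
    return MPI_Comm_create(comm, group, newcomm);
}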
On 08/06/2017 16:58, Samuel Thibault wrote:
> Hello,
>
> Maureen Chew, on Thu, 08 Jun 2017 10:51:56 -0400, wrote:
>> Should finding cache & pci info work?
> AFAWK, there is no user-available way to get cache information on
> Solaris, so it's not implemented in hwloc.
And even if prtpicl
MPI_Comm_create_group is an MPI-3.0+ function. 1.6.x is MPI-2.1. You can use
the macros MPI_VERSION and MPI_SUBVERSION to check the MPI version.
You will have to modify your code if you want it to work with older versions of
Open MPI.
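Something along these lines (just a sketch) lets the same source build against
both MPI-2.1 and MPI-3 libraries:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int version, subversion;

    MPI_Init(&argc, &argv);
    MPI_Get_version(&version, &subversion);        /* runtime check */
    printf("MPI standard version: %d.%d\n", version, subversion);

#if MPI_VERSION >= 3
    /* MPI-3 calls such as MPI_Comm_create_group can go here */
#else
    /* MPI-2.x (e.g. Open MPI 1.6.x): use a fallback or skip the feature */
#endif

    MPI_Finalize();
    return 0;
}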
-Nathan
On Jun 08, 2017, at 03:59 AM, Arham Amouie via users wrote:
Hello,
Maureen Chew, on Thu, 08 Jun 2017 10:51:56 -0400, wrote:
> Should finding cache & pci info work?
AFAWK, there is no user-available way to get cache information on
Solaris, so it's not implemented in hwloc.
Concerning pci, you need libpciaccess to get PCI information.
Samuel
Just built hwloc-1.11.7 on Solaris 11.3 & SPARC T7-2. Should finding
cache & pci info work? Not sure if I missed some build flags as I just did a
vanilla build.
Poked around in the mailing list user and devel archives but didn't find anything.
Build script
#!/bin/sh
export
Hi!
So I know from searching the archive that this is a repeated topic of
discussion here, and apologies for that, but since it's been a year or
so I thought I'd double-check whether anything has changed before
really starting to tear my hair out too much.
Is there a combination of MCA
MPI_Comm_create_group was not available in Open MPI v1.6, so unless you are
willing to create your own subroutine in your application, you would be better
off upgrading to Open MPI v2.
I recommend you configure Open MPI with
--disable-dlopen --prefix=
unless you plan to scale on thousands of nodes, you should
Hello. Open MPI 1.6.2 is installed on the cluster I'm using. At the moment I
can't upgrade Open MPI on the computing nodes of this system. My C code
contains many calls to MPI functions. When I try to 'make' this code on the
cluster, the only error that I get is "undefined reference to MPI_Comm_create_group".
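A stripped-down sketch of the kind of call involved (not my actual code, but it
shows the same failure: it links fine against an MPI-3 library and gives this
undefined reference with Open MPI 1.6.x):

#include <mpi.h>

/* minimal sketch: MPI_Comm_create_group is MPI-3 only, so linking this
 * against Open MPI 1.6.x fails with the undefined reference above */
int main(int argc, char **argv)
{
    MPI_Group world_group;
    MPI_Comm  newcomm;

    MPI_Init(&argc, &argv);
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);
    MPI_Comm_create_group(MPI_COMM_WORLD, world_group, 0, &newcomm);
    if (newcomm != MPI_COMM_NULL)
        MPI_Comm_free(&newcomm);
    MPI_Group_free(&world_group);
    MPI_Finalize();
    return 0;
}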