On 09/02/2011 16:53, Hendryk Bockelmann wrote:
> Since I am new to hwloc there might be a misunderstanding from my
> side, but I have a problem getting the cpuset of MPI tasks. I just
> want to run a simple MPI program to see on which cores (or CPUs in
> case of hyperthreading or SMT) the tasks run, so that I can arrange my
> MPI communicators.
>
> For the program below I get the following output:
>
> Process 0 of 2 on tide
> Process 1 of 2 on tide
> --> cpuset of process 0 is 0x0000000f
> --> cpuset of process 0 after singlify is 0x00000001
> --> cpuset of process 1 is 0x0000000f
> --> cpuset of process 1 after singlify is 0x00000001
>
> So why do both MPI tasks report the same cpuset?

Hello Hendryk,

Your processes are not bound, so they may run anywhere they want.
hwloc_get_cpubind() tells you where they are bound. That's why both ranks
report the cpuset 0x0000000f (all the existing logical processors in the
machine).
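
If your goal is to have each rank report its own cpuset, you need to bind
the ranks first. Here is a minimal sketch reusing the topology, cpuset and
myid variables of your program; it assumes one rank per core and that the
ranks are numbered within a single node:

/* Bind this process to the myid-th core, then re-query the binding. */
int ncores = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_CORE);
if (ncores > 0) {
  hwloc_obj_t core = hwloc_get_obj_by_type(topology, HWLOC_OBJ_CORE, myid % ncores);
  hwloc_set_cpubind(topology, core->cpuset, 0); /* bind the whole process to that core */
  hwloc_get_cpubind(topology, cpuset, 0);       /* now reports a different cpuset per rank */
}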

You want to know where they actually run, which is different from where
they are bound. The former is included in the latter: at any instant a
process runs on a single processor, while its binding may be any
combination of processors (here 0x0000000f, i.e. all four PUs).

hwloc cannot tell you where a task runs yet, but I am looking at
implementing it. I actually sent a patch to hwloc-devel about it yesterday
[1]. You would just have to replace get_cpubind with get_cpuexec (or
whatever the final function name turns out to be).

Note that such a function could not guarantee that its answer is still
valid when you use it, since the process may have migrated to another
processor in the meantime.
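
As an aside (Linux-specific, and not hwloc), glibc already provides
sched_getcpu(), which returns the OS index of the processor the calling
thread was running on when it was called. The same caveat applies, as in
this small sketch:

#define _GNU_SOURCE   /* must come before any #include to expose sched_getcpu() */
#include <sched.h>
#include <stdio.h>

/* Print the OS processor the calling thread was last seen running on.
 * The answer may already be stale by the time it is printed. */
void print_current_cpu(int myid) {
  int cpu = sched_getcpu();
  if (cpu >= 0)
    printf("process %d last ran on OS processor #%d\n", myid, cpu);
}

The returned OS index can be matched against the os_index field of hwloc
PU objects if needed.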

Also note that hwloc_bitmap_singlify() is usually used to "simplify" a
cpuset (to avoid migration between multiple SMT threads of a core, for
instance) before binding a task with set_cpubind(). It is useless in your
code above, where the cpuset is only printed.
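
For illustration, the usual pattern looks like this sketch (binding to the
first core, just as an example):

/* Restrict a core's cpuset to a single PU before binding, so the OS
 * cannot move the process between the core's SMT siblings. */
hwloc_obj_t core = hwloc_get_obj_by_type(topology, HWLOC_OBJ_CORE, 0);
if (core) {
  hwloc_bitmap_t set = hwloc_bitmap_dup(core->cpuset); /* e.g. 0x00000003 with 2 SMT threads */
  hwloc_bitmap_singlify(set);                          /* keep only the first PU: 0x00000001 */
  hwloc_set_cpubind(topology, set, 0);
  hwloc_bitmap_free(set);
}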

Brice

[1] http://www.open-mpi.org/community/lists/hwloc-devel/2011/02/1915.php



> Here is the program (attached you find the output of
> hwloc-gather-topology.sh):
>
> #include <stdio.h>
> #include <string.h>
> #include "hwloc.h"
> #include "mpi.h"
>
> int main(int argc, char* argv[]) {
>
>    hwloc_topology_t topology;
>    hwloc_bitmap_t cpuset;
>    char *str = NULL;
>    int myid, numprocs, namelen;
>    char procname[MPI_MAX_PROCESSOR_NAME];
>
>    MPI_Init(&argc,&argv);
>    MPI_Comm_size(MPI_COMM_WORLD,&numprocs);
>    MPI_Comm_rank(MPI_COMM_WORLD,&myid);
>    MPI_Get_processor_name(procname,&namelen);
>
>    printf("Process %d of %d on %s\n", myid, numprocs, procname);
>
>    hwloc_topology_init(&topology);
>    hwloc_topology_load(topology);
>
>    /* get native cpuset of this process */
>    cpuset = hwloc_bitmap_alloc();
>    hwloc_get_cpubind(topology, cpuset, 0);
>    hwloc_bitmap_asprintf(&str, cpuset);
>    printf("--> cpuset of process %d is %s\n", myid, str);
>    free(str);
>    hwloc_bitmap_singlify(cpuset);
>    hwloc_bitmap_asprintf(&str, cpuset);
>    printf("--> cpuset of process %d after singlify is %s\n", myid, str);
>    free(str);
>
>    hwloc_bitmap_free(cpuset);
>    hwloc_topology_destroy(topology);
>
>    MPI_Finalize();
>    return 0;
> }
>
>
