very likely to have an undefined behavior per
> the standard.
>
> Even if this seems to work with MPICH, it is not portable anyway, and
> will very likely cause a crash with OpenMPI
>
>
> Cheers,
>
> Gilles
>
> On Fri, Jun 2, 2017 at 10:41 PM, Dahai Guo wrote:
> > s
well defined size, extent and alignment).
>>
>> There is no construct in C able to tell you if a random number is a valid
>> C "object".
>>
>> George.
>>
>>
>> On Thu, Jun 1, 2017 at 5:42 PM, Dahai Guo > dahai@gmail.com>>
Hi,
If I insert the following lines somewhere in Open MPI, such as ompi/mpi/c/iscatter.c:

printf(" --- in MPI_Iscatter\n");
//MPI_Datatype dt00 = (MPI_Datatype) MPI_INT;
MPI_Datatype dt00 = (MPI_Datatype) -1;
if( !ompi_datatype_is_valid(dt00) ) {
    printf(" --- dt00 is NOT valid \n");
}
The attached
today.
>
> George.
>
>
> On Fri, May 5, 2017 at 11:49 AM, Dahai Guo wrote:
>
>> The following code causes a memory fault. The initial check shows
>> that it seemed to be caused by *ompi_comm_peer_lookup* with MPI_ANY_SOURCE,
>> which somehow messed up the allocated temporary buffer used in SendRecv.
Hi,
The attached test code passes with MPICH, but has problems with Open MPI.
There are three tests in the code: the first passes, the second one hangs,
and the third results in a seg fault and core dump.
The hang seemed to be caused by the handle in the function
ompi_coll_libnbc_ialltoallw in
The following code causes a memory fault. The initial check shows
that it seemed to be caused by *ompi_comm_peer_lookup* with MPI_ANY_SOURCE, which
somehow messed up the allocated temporary buffer used in SendRecv.
Any idea?
Dahai
#include
#include
#include
#include
#include
#include
#inc
static void backend_fatal_aggregate(char *type,
> }
>
> free(prefix);
> -    free(err_msg);
> +    if (generated) {
> +        free(err_msg);
> +    }
> }
>
> /*
>
> George.
>
>
>
> On Thu, May 4, 2017 at 10:03 PM, Jeff Squyres (jsquyres) <
I can't replicate this on my
> Mac. On what architecture are you seeing this issue? How was your OMPI
> compiled?
>
> Please post the output of ompi_info.
>
> Thanks,
> George.
>
>
>
> On Thu, May 4, 2017 at 5:42 PM, Dahai Guo wrote:
>
>> Those me
You can check the error code or you can create your own
> function. See MPI 3.1 Chapter 8.
>
> -Nathan
>
> On May 04, 2017, at 02:58 PM, Dahai Guo wrote:
>
> Hi,
>
> Using Open MPI 2.1, the following code resulted in a core dump, although
> only a simple error msg was expected.
Hi,
Using Open MPI 2.1, the following code resulted in a core dump, although
only a simple error msg was expected. Any idea what is wrong? It seemed
related to the errhandler somewhere.
D.G.
*** An error occurred in MPI_Reduce
*** reported by process [3645440001,0]
*** on communicator MPI_CO
I installed Intel PSM2 and then configured Open MPI as follows:
./configure \
--prefix=$HOME/ompi_install \
--with-psm2=$HOME/PSM2_install/usr \
--with-psm2-libdir=$HOME/PSM2_install/usr/lib64
However, when I ran a Hello program, it said:
mpirun -n 2 hi0
mca_base_component_reposit
ierr = MPI_Comm_rank(MPI_COMM_WORLD, &rank);
ierr = MPI_Comm_size(MPI_COMM_WORLD, &Size);
char *value = getenv("OMPI_MCA_apath");
printf(" --- Hi from rank = %d, path0 = %s \n", rank, value);
ierr = MPI_Barrier(MPI_COMM_WORLD);
ierr = MPI_Finalize();
return 0;
}
From: Jeff Squyre
However, you
checked for the wrong envar. Anything you provide is going to have an
“OMPI_MCA_” attached to the front of it. So for your “apath” example, the envar
will be
OMPI_MCA_apath
HTH
Ralph
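Ralph's naming rule can be demonstrated from a plain shell, independent of mpirun: the MCA parameter name simply gets "OMPI_MCA_" prepended to form the environment variable each rank inherits (the path value here is made up):

```shell
param="apath"                      # MCA parameter name, as in: mpirun -mca apath ...
envvar="OMPI_MCA_${param}"         # the corresponding environment variable
export "${envvar}=/tmp/sth"        # roughly what mpirun does for each launched rank
printenv "$envvar"                 # a rank's getenv("OMPI_MCA_apath") sees this value
```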
> On Feb 24, 2017, at 7:25 AM, Jeff Squyres (jsquyres)
> wrote:
>
> On Feb 24, 201
I mean getenv("OMPI_MCA_apath") in setup_fork.
From: Dahai Guo via devel
To: Jeff Squyres (jsquyres) ; Dahai Guo
; Open MPI Developers List
Cc: Dahai Guo
Sent: Friday, February 24, 2017 9:11 AM
Subject: Re: [OMPI devel] define a new ENV variable in
etc/openmpi-mca-p
However, if I defined it on the command line, mpirun -mca apath=sth .., then I
could get it in setup_fork.
Did I miss something?
Dahai
From: Jeff Squyres (jsquyres)
To: Dahai Guo ; Open MPI Developers List
Cc: Dahai Guo
Sent: Friday, February 24, 2017 8:57 AM
Subject: Re: [OMPI dev
Hi,
If I define a new ENV variable in etc/openmpi-mca-params.conf, what OMPI code
should I modify in order for this parameter to be delivered to each rank?
Thx,
D. G.
devel mailing list
devel@lists.open-mpi.org
What technical materials will be covered in the meeting?
On Thursday, October 8, 2015 2:47 PM, Jeff Squyres (jsquyres)
wrote:
Developers --
It's time to schedule our next face-to-face meeting. IBM has graciously
offered the use of their facilities in Dallas, TX. Apparently hote
5 PM, Jeff Squyres (jsquyres)
wrote:
On Oct 6, 2015, at 10:19 AM, Dahai Guo wrote:
>
> Thanks, Gilles. Some more questions:
>
> 1. how does Open MPI define the priorities of the different collective
> > components? What criteria is it based on?
The priorities are in the range of [0,
l_tuned_decision_fixed.c
this is how the tuned collective module selects algorithms based on
communicator size and message size.
Cheers,
Gilles
On Sun, Oct 4, 2015 at 11:12 AM, Dahai Guo wrote:
> Thanks, Jeff. I am trying to understand in detail how Open MPI works in the
> run time. What main functions are involved and used in the process?
What main functions are involved and used
in the process?
Dahai
On Friday, October 2, 2015 7:50 PM, Jeff Squyres (jsquyres)
wrote:
On Oct 2, 2015, at 2:21 PM, Dahai Guo wrote:
>
> Is there any way to trace open mpi internal function calls in a MPI user
> program?
Unfortunately, not easily
Hi,
Is there any way to trace open mpi internal function calls in a MPI user
program? If so, can anyone explain it with an example, such as helloworld? I
built Open MPI with the VampirTrace options, and compiled the following program
with mpicc-vt, but I didn't get any tracing info.
Thanks
D.
Hi,
Are there any technical reports or papers summarizing the collective algorithms
used in Open MPI, such as MPI_Barrier, MPI_Bcast, and MPI_Alltoall?
Dahai