Update:
> I have yet to run a full regression test on our actual code to ensure that
> there are no other side effects. I don’t expect any, though.
Indeed. This fixes all regressions that I had observed.
Best wishes
Volker
> On Jul 27, 2017, at 11:47 AM, Volker Blum wrote:
>
Dear Gilles,
Thank you! Indeed, this appears to address the issue in my test.
I have yet to run a full regression test on our actual code to ensure that
there are no other side effects. I don’t expect any, though.
Interestingly, removing '-lmpi_usempif08 -lmpi_usempi_ignore_tkr' actually
Volker,
since you are only using
include 'mpif.h'
a workaround is to edit your /.../share/openmpi/mpifort-wrapper-data.txt
and simply remove '-lmpi_usempif08 -lmpi_usempi_ignore_tkr'
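A minimal sketch of the edit Gilles describes. The sample 'libs' line below is illustrative, not the exact contents of any particular build; on a real system the file sits under your install prefix as shown in the comment.

```shell
# Demonstrate the substitution on a sample wrapper 'libs' line. On a real
# install you would run the same sed against
# <prefix>/share/openmpi/mpifort-wrapper-data.txt (back it up first).
sample="libs=-lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi"
echo "$sample" | sed 's/-lmpi_usempif08 -lmpi_usempi_ignore_tkr //'
# prints: libs=-lmpi_mpifh -lmpi
```

After the edit, mpifort no longer links the mpi_f08 support libraries, which is safe only because the code in question uses include 'mpif.h' exclusively.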
Cheers,
Gilles
On 7/27/2017 3:28 PM, Volker Blum wrote:
Thanks!
If you wish, please also keep me posted.
Best wishes
Volker
> On Jul 27, 2017, at 7:50 AM, Gilles Gouaillardet wrote:
Thanks Jeff for your offer, i will contact you off-list later
i tried a gcc+gfortran and gcc+ifort on both linux and OS X
so far, only gcc+ifort on OS X is failing
i will try icc+ifort on OS X from now
short story, MPI_IN_PLACE is not recognized as such by the ompi
fortran wrapper, and i do not
Thanks! That’s great. Sounds like the exact combination I have here.
Thanks also to George. Sorry that the test did not trigger on a more standard
platform - that would have simplified things.
Best wishes
Volker
> On Jul 27, 2017, at 3:56 AM, Gilles Gouaillardet wrote:
Does this happen with ifort but not other Fortran compilers? If so, write
me off-list if there's a need to report a compiler issue.
Jeff
On Wed, Jul 26, 2017 at 6:59 PM Gilles Gouaillardet wrote:
Folks,
I am able to reproduce the issue on OS X (Sierra) with stock gcc (aka
clang) and ifort 17.0.4
i will investigate this from now
Cheers,
Gilles
On 7/27/2017 9:28 AM, George Bosilca wrote:
Volker,
Unfortunately, I can't replicate with icc. I tried on a x86_64 box with
Intel compiler chain 17.0.4 20170411 to no avail. I also tested the
3.0.0-rc1 tarball and the current master, and your test completes without
errors in all cases.
Once you figure out an environment where you can
Thanks! Yes, trying with Intel 2017 would be very nice.
> On Jul 26, 2017, at 6:12 PM, George Bosilca wrote:
No, I don't have (or used where they were available) the Intel compiler. I
used clang and gfortran. I can try on a Linux box with the Intel 2017
compilers.
George.
On Wed, Jul 26, 2017 at 11:59 AM, Volker Blum wrote:
Did you use Intel Fortran 2017 as well?
(I’m asking because I did see the same issue with a combination of an earlier
Intel Fortran 2017 version and OpenMPI on an Intel/Infiniband Linux HPC machine
… but not Intel Fortran 2016 on the same machine. Perhaps I can revive my
access to that
Thanks!
I tried ‘use mpi’, which compiles fine.
Same result as with 'include mpif.h', in that the output is
* MPI_IN_PLACE does not appear to work as intended.
* Checking whether MPI_ALLREDUCE works at all.
* Without MPI_IN_PLACE, MPI_ALLREDUCE appears to work.
Hm. Any other thoughts?
Volker,
With mpi_f08, you have to declare
Type(MPI_Comm) :: mpi_comm_global
(I am afk and not 100% sure of the syntax)
A simpler option is to
use mpi
Cheers,
Gilles
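An editorial sketch (not compiled here) of the declaration Gilles is recalling: with the mpi_f08 module a communicator is a derived type, not an INTEGER as with mpif.h or use mpi. The variable name mpi_comm_global is taken from the thread.

```fortran
program f08_comm_demo
  use mpi_f08                        ! Fortran 2008 bindings
  implicit none
  type(MPI_Comm) :: mpi_comm_global  ! NOT integer, as with 'include mpif.h'
  integer :: ierr
  call MPI_Init(ierr)
  mpi_comm_global = MPI_COMM_WORLD
  call MPI_Finalize(ierr)
end program f08_comm_demo
```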
Volker Blum wrote:
Hi Gilles,
Thank you very much for the response!
Unfortunately, I don’t have access to a different system with the issue right
now. As I said, it’s not new; it just keeps creeping up unexpectedly again on
different platforms. What puzzles me is that I’ve encountered the same problem
with low
Volker,
thanks, i will have a look at it
meanwhile, if you can reproduce this issue on a more mainstream
platform (e.g. linux + gfortran) please let me know.
since you are using ifort, Open MPI was built with Fortran 2008
bindings, so you can replace
include 'mpif.h'
with
use mpi_f08
and who
Dear Gilles,
Thank you very much for the fast answer.
Darn. I feared it might not occur on all platforms, since my former MacBook
(with an older OpenMPI version) no longer exhibited the problem, while a
different Linux/Intel machine did last December, etc.
On this specific machine, the configure
Volker,
i was unable to reproduce this issue on linux
can you please post your full configure command line, your gnu
compiler version and the full test program ?
also, how many mpi tasks are you running ?
Cheers,
Gilles
On Wed, Jul 26, 2017 at 4:25 PM, Volker Blum wrote:
Hi,
I tried openmpi-3.0.0rc1.tar.gz using Intel Fortran 2017 and gcc on a current
MacOS system. For this version, it seems to me that MPI_IN_PLACE returns
incorrect results (while other MPI implementations, including some past OpenMPI
versions, work fine).
This can be seen with a simple
Subject: Re: [OMPI users] MPI_IN_PLACE with GATHERV, AGATHERV, and SCATERV
Ok, I think we have this resolved in trunk and the fix will go into 1.7.4. The
check for MPI_IN_PLACE was wrong in the mpif-h bindings. The fix was tested
with your reproducer. Both MPI_SCATTER and MPI_SCATTERV had
,MPI_DOUBLE_PRECISION,0,MPI_COMM_WORLD,IERR)
> ENDIF
>
> OPEN(71+MYPN,FORM='FORMATTED',POSITION='APPEND')
> WRITE(71+MYPN,'(3E15.7)') RARR1(1:300)
> CLOSE(71+MYPN)
>
> CALL MPI_FINALIZE(IERR)
>
> END PROGRAM MAIN
>
>
From: users [users-boun...@open-mpi.org] on behalf of Nathan Hjelm
[hje...@lanl.gov]
Sent: Wednesday, October 09, 2013 12:37 PM
To: Open MPI Users
Subject: Re: [OMPI users] MPI_IN_PLACE with GATHERV, AGATHERV, and SCATERV
These functions are tested nightly and there has been no indication any of these
functions fail with MPI_IN_PLACE. Can you provide a reproducer?
-Nathan
HPC-3, LANL
On Tue, Oct 08, 2013 at 07:40:50PM, Gerlach, Charles A. wrote:
"I have made a test case..." means there is little reason not to
attach said test case to the email for verification :-)
The following is in mpi.h.in in the OpenMPI trunk.
=
/*
* Just in case you need it. :-)
*/
#define OPEN_MPI 1
/*
* MPI version
*/
#define
I have an MPI code that was developed using MPICH1 and OpenMPI before the MPI2
standards became commonplace (before MPI_IN_PLACE was an option).
So, my code has many examples of GATHERV, AGATHERV and SCATTERV, where I pass
the same array in as the SEND_BUF and the RECV_BUF, and this has worked
On Wed, Sep 11, 2013, at 13:24, Jeff Squyres (jsquyres) wrote:
On Sep 11, 2013, at 7:22 PM, Hugo Gagnon wrote:
>> This is definitely a puzzle, because I just installed gcc 4.8.1 on my
>> 10.8.4 OS X MBP,
>
> I also just recompiled gcc 4.8.1_3 from MacPorts, and will recompile
> openmpi 1.6.5 myself rather than using
On Wed, Sep 11, 2013, at 12:26, Jeff Squyres (jsquyres) wrote:
On Sep 10, 2013, at 2:33 PM, Hugo Gagnon wrote:
> I only get the correct output when I use the more "conventional" syntax:
>
> ...
> call MPI_Allreduce(a_loc,a,2,MPI_INTEGER,MPI_SUM,MPI_COMM_WORLD,ierr)
> ...
What is a_loc? I'm assuming you know it can't
I only get the correct output when I use the more "conventional" syntax:
...
call MPI_Allreduce(a_loc,a,2,MPI_INTEGER,MPI_SUM,MPI_COMM_WORLD,ierr)
...
However, I get the wrong output when I use MPI_IN_PLACE:
...
call MPI_Allreduce(MPI_IN_PLACE,a,2,MPI_INTEGER,MPI_SUM,MPI_COMM_WORLD,ierr)
...
hence
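An editorial aside for comparison: a minimal self-contained version of the in-place call as MPI-2 specifies it for MPI_Allreduce. Every rank passes MPI_IN_PLACE as the send buffer and supplies its input in the receive buffer. This is an untested sketch, not code from the thread.

```fortran
program inplace_allreduce
  use mpi
  implicit none
  integer :: ierr, myrank, a(2)
  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, myrank, ierr)
  a = (/ 2*myrank + 1, 2*myrank + 2 /)   ! input starts in the receive buffer
  call MPI_Allreduce(MPI_IN_PLACE, a, 2, MPI_INTEGER, MPI_SUM, &
                     MPI_COMM_WORLD, ierr)
  print *, myrank, a                     ! reduced result overwrites a
  call MPI_Finalize(ierr)
end program inplace_allreduce
```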
On Sep 7, 2013, at 5:14 AM, Hugo Gagnon wrote:
> $ openmpif90 test.f90
> $ openmpirun -np 2 a.out
> 0 4 6
> 1 4 6
>
> Now I'd be curious to know why your OpenMPI implementation handles
>
What Fortran compiler was your OpenMPI built with? Some Fortran compilers
don't understand MPI_IN_PLACE. Do a 'fortran MPI_IN_PLACE' search to see
several instances.
T. Rosmond
On Sat, 2013-09-07 at 10:16 -0400, Hugo Gagnon wrote:
Nope, no luck. My environment is:
OpenMPI 1.6.5
gcc 4.8.1
Mac OS 10.8
I found a ticket reporting a similar problem on OS X:
https://svn.open-mpi.org/trac/ompi/ticket/1982
It said to make sure $prefix/share/ompi/mpif90-wrapper-data.txt had the
following line:
Just as an experiment, try replacing
use mpi
with
include 'mpif.h'
If that fixes the problem, you can confront the OpenMPI experts
T. Rosmond
On Fri, 2013-09-06 at 23:14 -0400, Hugo Gagnon wrote:
I'm afraid I can't answer that. Here's my environment:
OpenMPI 1.6.1
IFORT 12.0.3.174
Scientific Linux 6.4
What fortran compiler are you using?
T. Rosmond
On Fri, 2013-09-06 at 23:14 -0400, Hugo Gagnon wrote:
Thanks for the input but it still doesn't work for me... Here's the
version without MPI_IN_PLACE that does work:
program test
use mpi
integer :: ierr, myrank, a(2), a_loc(2) = 0
call MPI_Init(ierr)
call MPI_Comm_rank(MPI_COMM_WORLD,myrank,ierr)
if (myrank == 0) then
a_loc(1) = 1
a_loc(2) = 2
Hello,
Your syntax defining 'a' is not correct. This code works correctly.
program test
use mpi
integer :: ierr, myrank, a(2) = 0
call MPI_Init(ierr)
call MPI_Comm_rank(MPI_COMM_WORLD,myrank,ierr)
if (myrank == 0) then
a(1) = 1
a(2) = 2
else
a(1) = 3
a(2) = 4
endif
call
Hello,
I'm trying to run this bit of code:
program test
use mpi
integer :: ierr, myrank, a(2) = 0
call MPI_Init(ierr)
call MPI_Comm_rank(MPI_COMM_WORLD,myrank,ierr)
if (myrank == 0) a(1) = 1; a(2) = 2
if (myrank == 1) a(1) = 3; a(2) = 4
call
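An editorial note on the snippet above (an observation, not from the thread itself): independent of MPI_IN_PLACE, the one-line IF with a semicolon guards only the first assignment, so a(2) = 2 and a(2) = 4 execute on every rank. A sketch of the difference:

```fortran
! 'if (cond) stmt1; stmt2' is a one-line IF followed by an unconditional
! statement: only stmt1 is guarded.
if (myrank == 0) a(1) = 1; a(2) = 2   ! a(2) = 2 runs on ALL ranks
! To guard both assignments, use a block IF:
if (myrank == 0) then
   a(1) = 1
   a(2) = 2
end if
```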
On Jan 4, 2013, at 2:55 AM CST, Torbjörn Björkman wrote:
It seems that a very old bug (svn.open-mpi.org/trac/ompi/ticket/1982) is
playing up when linking fortran code with mpicc on Mac OS X 10.6 and the
Macports distribution openmpi @1.6.3_0+gcc44. I got it working by reading
up on this discussion thread:
Dear all:
I ran this simple fortran code and got unexpected result:
!
program reduce
implicit none
include 'mpif.h'
integer :: ierr, rank
real*8 :: send(5)
call mpi_init(ierr)
call mpi_comm_rank(mpi_comm_world,rank,ierr)
send = real(rank)
print *,
Hi,
I am confused about the syntax of the "in place" variant of
MPI_Reduce, in particular about the significance of the recvbuf
for the non-root processes. I.e., is the following legal?
buf = (double *)malloc(l*sizeof(double));
random(buf, l); /* fill buf with something */
if (myid == 0) {
users-requ...@open-mpi.org wrote:
Message: 2
Date: Sat, 1 Aug 2009 07:44:47 -0400
From: Jeff Squyres <jsquy...@cisco.com>
Subject: Re: [OMPI users] MPI_IN_PLACE in Fortran
with MPI_REDUCE / MPI_ALLREDUCE
To: Open MPI Users <us...@open-mpi.org>
…the Fortran constant is slightly different from the F77 binding
(i.e. instead of 0x50920 I get 0x508e0).
Thanks for your help,
Ricardo
On Jul 29, 2009, at 17:00 , users-requ...@open-mpi.org wrote:
Message: 2
Date: Wed, 29 Jul 2009 07:54:38 -0500
From: Jeff Squyres <jsquy...@cisco.com>
Subject: Re: [OMPI users] MPI_IN_PLACE in Fortran
with MPI_REDUCE / MPI_ALLREDUCE
To: "Open MPI Users" <us...@open-mpi.org>
From: George Bosilca <bosi...@eecs.utk.edu>
Subject: Re: [OMPI users] MPI_IN_PLACE in Fortran with
MPI_REDUCE / MPI_ALLREDUCE
To: Open MPI Users <us...@open-mpi.org>
On Jul 28, 2009, at 17:00 , users-requ...@open-mpi.org wrote:
Message: 1
Date: Tue, 28 Jul 2009 11:16:34 -0400
From: George Bosilca <bosi...@eecs.utk.edu>
Subject: Re: [OMPI users] MPI_IN_PLACE in Fortran with
MPI_REDUCE / MPI_ALLREDUCE
To: Open MPI Users
Hi George
I don't think this is a library mismatch. I just followed your
instructions and got:
$ otool -L a.out
a.out:
/opt/openmpi/1.3.3-g95-32/lib/libmpi_f77.0.dylib (compatibility
version 1.0.0, current version 1.0.0)
/opt/openmpi/1.3.3-g95-32/lib/libmpi.0.dylib (compatibility version
On Jul 28, 2009, at 4:24 , users-requ...@open-mpi.org wrote:
Message: 4
Date: Mon, 27 Jul 2009 17:13:23 -0400
From: George Bosilca <bosi...@eecs.utk.edu>
Subject: Re: [OMPI users] MPI_IN_PLACE in Fortran with MPI_REDUCE /
MPI_ALLREDUCE
To: Open MPI Users <us...@open-mpi.org>
Hi guys
I'm having a little trouble using MPI_IN_PLACE with MPI_REDUCE /
MPI_ALLREDUCE in Fortran. If I try MPI_IN_PLACE with the C bindings it
works fine running on 2 nodes:
Result:
3.00 3.00 3.00 3.00
Regardless of using MPI_Reduce or MPI_Allreduce. However, this fails
…MPI_IN_PLACE only eliminates data movement on root, right?
David
* Correspondence *
From: Jeff Squyres <jsquy...@open-mpi.org>
Reply-To: Open MPI Users <us...@open-mpi.org>
Date: Fri, 3 Mar 2006 19:18:52 -0500
To: Open MPI Users <us...@open-mpi.org>
Subject: Re: [OMPI users] MPI_
On Mar 6, 2006, at 3:38 PM, Xiaoning (David) Yang wrote:
I'm not quite sure how collective computation calls work. For
example, for an MPI_REDUCE with MPI_SUM, do all the processes
collect values from all the processes and calculate the sum and put
result in recvbuf on root? Sounds
> From: Jeff Squyres <jsquy...@open-mpi.org>
> Reply-To: Open MPI Users <us...@open-mpi.org>
> Date: Mon, 6 Mar 2006 13:22:23 -0500
> To: Open MPI Users <us...@open-mpi.org>
> Subject: Re: [OMPI users] MPI_IN_PLACE
>
> Generally, yes. There are some corner cases where we have to
On Mar 3, 2006, at 6:42 PM, Xiaoning (David) Yang wrote:
call MPI_REDUCE(mypi,pi,1,MPI_DOUBLE_PRECISION,MPI_SUM,0,
& MPI_COMM_WORLD,ierr)
Can I use MPI_IN_PLACE in the MPI_REDUCE call? If I can, how?
Thanks for any help!
MPI_IN_PLACE is an MPI-2 construct, and
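Jeff's answer is cut off above; as a hedged editorial sketch of the standard MPI-2 usage: on the root, MPI_IN_PLACE replaces the send buffer, and the root's own contribution must already sit in the receive buffer; non-root ranks call MPI_REDUCE normally. Variable names follow the quoted code; this fragment is not compiled here.

```fortran
if (myrank == 0) then
   pi = mypi   ! root's contribution must already be in the receive buffer
   call MPI_REDUCE(MPI_IN_PLACE, pi, 1, MPI_DOUBLE_PRECISION, MPI_SUM, 0, &
                   MPI_COMM_WORLD, ierr)
else
   call MPI_REDUCE(mypi, pi, 1, MPI_DOUBLE_PRECISION, MPI_SUM, 0, &
                   MPI_COMM_WORLD, ierr)
end if
```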
On Mar 3, 2006, at 4:40 PM, Xiaoning (David) Yang wrote:
Does Open MPI supports MPI_IN_PLACE? Thanks.
Yes.
--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/