Re: [OMPI users] How to justify the use MPI codes on multicore systems/PCs?

2011-12-14 Thread Rayson Ho
There is a project called "MVAPICH2-GPU", which is developed by D. K.
Panda's research group at Ohio State University. You will find lots of
references on Google... and I just briefly went through the slides of
"MVAPICH2-GPU: Optimized GPU to GPU Communication for InfiniBand
Clusters":

http://nowlab.cse.ohio-state.edu/publications/conf-presentations/2011/hao-isc11-slides.pdf

It takes advantage of CUDA 4.0's Unified Virtual Addressing (UVA) to
pipeline & optimize cudaMemcpyAsync() & RDMA transfers. (MVAPICH
1.8a1p1 also supports Device-Device, Device-Host, Host-Device
transfers.)
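
To make the pipelining idea concrete, here is a rough sketch (in C, using the
CUDA runtime API) of staging a device buffer through pinned host memory in
chunks, so that the MPI send of one chunk overlaps the device-to-host copy of
the next. This only illustrates the general technique -- it is not the
MVAPICH2-GPU or Open MPI implementation, and the function and buffer names are
made up:

/* Illustrative sketch only -- not MVAPICH2-GPU or Open MPI internals. */
#include <mpi.h>
#include <cuda_runtime.h>

#define CHUNK (1 << 20)                     /* stage 1 MiB at a time */

/* Send a GPU buffer by staging it through pinned host memory in chunks, so
 * that the network send of chunk i overlaps the D2H copy of chunk i+1. */
static void send_device_buffer(const char *d_buf, size_t bytes,
                               int dest, MPI_Comm comm)
{
    char *stage[2];
    MPI_Request req[2] = { MPI_REQUEST_NULL, MPI_REQUEST_NULL };
    cudaStream_t stream;
    size_t off = 0;
    int slot = 0;

    cudaStreamCreate(&stream);
    cudaMallocHost((void **)&stage[0], CHUNK);  /* pinned staging buffers */
    cudaMallocHost((void **)&stage[1], CHUNK);

    while (off < bytes) {
        size_t len = (bytes - off < CHUNK) ? bytes - off : CHUNK;

        MPI_Wait(&req[slot], MPI_STATUS_IGNORE);        /* slot free again? */
        cudaMemcpyAsync(stage[slot], d_buf + off, len,  /* device -> host   */
                        cudaMemcpyDeviceToHost, stream);
        cudaStreamSynchronize(stream);                  /* chunk is on host */
        MPI_Isend(stage[slot], (int)len, MPI_BYTE, dest, 0, comm, &req[slot]);

        off += len;
        slot ^= 1;                                      /* alternate buffers */
    }
    MPI_Waitall(2, req, MPI_STATUSES_IGNORE);

    cudaFreeHost(stage[0]);
    cudaFreeHost(stage[1]);
    cudaStreamDestroy(stream);
}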

Open MPI also supports similar functionality, but as Open MPI is not an
academic project, there are fewer academic papers documenting the
internals of the latest developments (not saying that it's bad - many
products are not academic in nature and thus have fewer published
papers...)

Rayson

=
Grid Engine / Open Grid Scheduler
http://gridscheduler.sourceforge.net/

Scalable Grid Engine Support Program
http://www.scalablelogic.com/


On Mon, Dec 12, 2011 at 11:40 AM, Durga Choudhury  wrote:
> I think this is a *great* topic for discussion, so let me throw some
> fuel to the fire: the mechanism described in the blog (that makes
> perfect sense) is fine for (N)UMA shared memory architectures. But
> will it work for asymmetric architectures such as the Cell BE or
> discrete GPUs where the data between the compute nodes have to be
> explicitly DMA'd in? Is there a middleware layer that makes it
> transparent to the upper layer software?
>
> Best regards
> Durga
>
> On Mon, Dec 12, 2011 at 11:00 AM, Rayson Ho  wrote:
>> On Sat, Dec 10, 2011 at 3:21 PM, amjad ali  wrote:
>>> (2) The latest MPI implementations are intelligent enough that they use some
>>> efficient mechanism while executing MPI based codes on shared memory
>>> (multicore) machines.  (please tell me any reference to quote this fact).
>>
>> Not an academic paper, but from a real MPI library developer/architect:
>>
>> http://blogs.cisco.com/performance/shared-memory-as-an-mpi-transport/
>> http://blogs.cisco.com/performance/shared-memory-as-an-mpi-transport-part-2/
>>
>> Open MPI is used by Japan's K computer (current #1 TOP 500 computer)
>> and LANL's RoadRunner (#1 Jun 08 – Nov 09), and "10^16 Flops Can't Be
>> Wrong" and "10^15 Flops Can't Be Wrong":
>>
>> http://www.open-mpi.org/papers/sc-2008/jsquyres-cisco-booth-talk-2up.pdf
>>
>> Rayson
>>
>> =
>> Grid Engine / Open Grid Scheduler
>> http://gridscheduler.sourceforge.net/
>>
>> Scalable Grid Engine Support Program
>> http://www.scalablelogic.com/
>>
>>
>>>
>>>
>>> Please help me in formally justifying this and comment on/modify the above
>>> two justifications. It would be better if you could suggest some reference
>>> from a suitable publication that I can quote in this regard.
>>>
>>> best regards,
>>> Amjad Ali
>>>
>>> ___
>>> Beowulf mailing list, beow...@beowulf.org sponsored by Penguin Computing
>>> To change your subscription (digest mode or unsubscribe) visit
>>> http://www.beowulf.org/mailman/listinfo/beowulf
>>>
>>
>>
>>
>> --
>> Rayson
>>
>> ==
>> Open Grid Scheduler - The Official Open Source Grid Engine
>> http://gridscheduler.sourceforge.net/
>>
>> ___
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users



-- 
Rayson

==
Open Grid Scheduler - The Official Open Source Grid Engine
http://gridscheduler.sourceforge.net/



Re: [OMPI users] MPI 2 support in sm btl

2011-12-14 Thread Ralph Castain

On Dec 14, 2011, at 1:26 PM, Sabela Ramos Garea wrote:

> Hello,
> 
> As far as I know, there is no support for some MPI-2 features in the shared 
> memory BTL, such as dynamic process creation or port connection. Are you 
> planning to include this support?

It depends on what exactly you mean. Dynamically spawned processes do use the 
shared memory BTL between themselves, but not with the processes in their 
parent job. We plan to support that as well at some future time.
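
For reference, the dynamic process creation being discussed is MPI_Comm_spawn.
A minimal parent-side sketch in C is below; the "./worker" child executable
(and its matching MPI_Recv) is hypothetical and not shown:

/* Hedged sketch of MPI-2 dynamic process creation from the parent side. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm children;
    int errcodes[4];
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Collectively spawn 4 copies of a hypothetical ./worker program.  The
     * children get their own MPI_COMM_WORLD and are reachable from the parent
     * only through the 'children' intercommunicator. */
    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                   0, MPI_COMM_WORLD, &children, errcodes);

    if (rank == 0) {
        int token = 42;
        /* Parent <-> child traffic goes over the intercommunicator; per the
         * answer above, it does not currently use the sm BTL even when parent
         * and child share a node.  The matching receive lives in ./worker. */
        MPI_Send(&token, 1, MPI_INT, 0, 0, children);
    }

    MPI_Comm_disconnect(&children);
    MPI_Finalize();
    return 0;
}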

> 
> Thank you.
> 
> Sabela Ramos.
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] openmpi - gfortran and ifort conflict

2011-12-14 Thread Jeff Squyres
On Dec 14, 2011, at 3:48 PM, Prentice Bisbal wrote:

> I realized this after I wrote that and clarified it in a subsequent e-mail. 
> Which you probably just read. ;-)

After I sent the mail, I saw it.  Oops.  :-)

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] openmpi - gfortran and ifort conflict

2011-12-14 Thread Prentice Bisbal

On 12/14/2011 03:39 PM, Jeff Squyres wrote:
> On Dec 14, 2011, at 3:21 PM, Prentice Bisbal wrote:
>
>> For example, your configure command,
>>
>> ./configure --prefix=/opt/openmpi/intel CC=gcc CXX=g++ F77=ifort FC=ifort
>>
>> Doesn't tell Open MPI to use ifort for mpif90 and mpif77.
> Actually, that's not correct.
>
> For Open MPI, our wrapper compilers will default to using the same compilers 
> that were used to build Open MPI.  So in the above case:
>
> mpicc will use gcc
> mpicxx will use g++
> mpif77 will use ifort
> mpif90 will use ifort
>
>

Jeff,

I realized this after I wrote that and clarified it in a subsequent
e-mail. Which you probably just read. ;-)

Prentice


Re: [OMPI users] openmpi - gfortran and ifort conflict

2011-12-14 Thread Prentice Bisbal
On 12/14/2011 03:29 PM, Micah Sklut wrote:
> Okay thanks Prentice.
>
> I understand what you are saying about specifying the compilers during
> configure.
> Perhaps, that alone would have solved the problem, but removing the
> 1.4.2 ompi installation worked as well.
>
> Micah
>

Well, to clarify my earlier statement, the compilers used during
installation are used to set the defaults in the wrapper data files
(mpif90-wrapper-data.txt, etc.), but those
can easily be changed, either by editing those files or by defining
environment variables.

Anyhow, we're all glad you were finally able to solve your problem.

--
Prentice




Re: [OMPI users] openmpi - gfortran and ifort conflict

2011-12-14 Thread Micah Sklut
Okay thanks Prentice.

I understand what you are saying about specifying the compilers during
configure.
Perhaps, that alone would have solved the problem, but removing the 1.4.2
ompi installation worked as well.

Micah

On Wed, Dec 14, 2011 at 3:24 PM, Prentice Bisbal  wrote:

>
> On 12/14/2011 01:20 PM, Fernanda Oliveira wrote:
> > Hi Micah,
> >
> > I do not know if it is exactly what you need but I know that there are
> > environment variables to use with intel mpi. They are: I_MPI_CC,
> > I_MPI_CXX, I_MPI_F77, I_MPI_F90. So, you can set this using 'export'
> > for bash, for instance or directly when you run.
> >
> > I use in my bashrc:
> >
> > export I_MPI_CC=icc
> > export I_MPI_CXX=icpc
> > export I_MPI_F77=ifort
> > export I_MPI_F90=ifort
>
> Those environment variables are for Intel MPI.  For OpenMPI, the
> equivalent variables would be OMPI_CC, OMPI_CXX, OMPI_F77, and OMPI_FC,
> respectively.
>
> --
> Prentice
> ___
> users mailing list
> us...@open-mpi.org
>
>


[OMPI users] MPI 2 support in sm btl

2011-12-14 Thread Sabela Ramos Garea
Hello,

As far as I know, there is no support for some MPI-2 features in the shared
memory BTL, such as dynamic process creation or port connection. Are you
planning to include this support?

Thank you.

Sabela Ramos.


Re: [OMPI users] openmpi - gfortran and ifort conflict

2011-12-14 Thread Prentice Bisbal

On 12/14/2011 01:20 PM, Fernanda Oliveira wrote:
> Hi Micah,
>
> I do not know if it is exactly what you need but I know that there are
> environment variables to use with intel mpi. They are: I_MPI_CC,
> I_MPI_CXX, I_MPI_F77, I_MPI_F90. So, you can set this using 'export'
> for bash, for instance or directly when you run.
>
> I use in my bashrc:
>
> export I_MPI_CC=icc
> export I_MPI_CXX=icpc
> export I_MPI_F77=ifort
> export I_MPI_F90=ifort

Those environment variables are for Intel MPI.  For OpenMPI, the
equivalent variables would be OMPI_CC, OMPI_CXX, OMPI_F77, and OMPI_FC,
respectively.

--
Prentice


Re: [OMPI users] openmpi - gfortran and ifort conflict

2011-12-14 Thread Prentice Bisbal

On 12/14/2011 12:21 PM, Micah Sklut wrote:
> Hi Gustavo,
>
> I did read Prince's email:
>
> When I do "which mpif90", I get:
> /opt/openmpi/intel/bin/mpif90
> which is the desired directory/binary
>
> As I mentioned, the config log file indicated it was using ifort, and
> had no mention of gfortran.
> Below is the output from ompi_info. It shows reference to the correct
> ifort compiler. But the mpif90 wrapper still yields a gfortran
> compiler.

Micah,

You are confusing the compilers used to build Open MPI itself with the
compilers used by Open MPI's wrappers to compile other codes.

For example, your configure command,

./configure --prefix=/opt/openmpi/intel CC=gcc CXX=g++ F77=ifort FC=ifort

Doesn't tell Open MPI to use ifort for mpif90 and mpif77. It tells the
build process to use ifort to compile the Fortran sections of the Open
MPI source code. To tell mpif90 and mpif77 which compilers you'd like to
use to compile Fortran programs that use Open MPI, you must set the
environment variables OMPI_F77 and OMPI_FC. To illustrate, when I want
to use the GNU compilers, I set the following in my .bashrc:

export OMPI_CC=gcc
export OMPI_CXX=g++
export OMPI_F77=gfortran
export OMPI_FC=gfortran

If I wanted to use the PGI compilers instead, I'd swap the above 4 lines for these:

export OMPI_CC=pgcc
export OMPI_CXX=pgCC
export OMPI_F77=pgf77
export OMPI_FC=pgf95

You can verify which compiler is set using the --showme switch to mpif90:

$ mpif90 --showme
pgf95 -I/usr/local/openmpi-1.2.8/pgi-8.0/x86_64/include
-I/usr/local/openmpi-1.2.8/pgi-8.0/x86_64/lib -L/usr/lib64
-L/usr/local/openmpi-1.2.8/pgi/x86_64/lib
-L/usr/local/openmpi-1.2.8/pgi-8.0/x86_64/lib -lmpi_f90 -lmpi_f77 -lmpi
-lopen-rte -lopen-pal -libverbs -lrt -lnuma -ldl -Wl,--export-dynamic
-lnsl -lutil -lpthread -ldl

I suspect if you run the command 'env | grep OMPI_FC', you'll see that
you have it set to gfortran. I can verify that mine is set to pgf95 this
way:

$ env | grep OMPI_FC
OMPI_FC=pgf95

Of course, a simple echo would work, too:

$ echo $OMPI_FC
pgf95

You can also change these settings by editing the file
mpif90-wrapper-data.txt in your Open MPI installation directory.

Full details on setting these variables (and others) can be found in the
FAQ:

http://www.open-mpi.org/faq/?category=mpi-apps#override-wrappers-after-v1.0

--
Prentice



> -->
> barells@ip-10-17-153-123:~> ompi_info
>  Package: Open MPI barells@ip-10-17-148-204 Distribution
> Open MPI: 1.4.4
>Open MPI SVN revision: r25188
>Open MPI release date: Sep 27, 2011
> Open RTE: 1.4.4
>Open RTE SVN revision: r25188
>Open RTE release date: Sep 27, 2011
> OPAL: 1.4.4
>OPAL SVN revision: r25188
>OPAL release date: Sep 27, 2011
> Ident string: 1.4.4
>   Prefix: /usr/lib64/mpi/gcc/openmpi
>  Configured architecture: x86_64-unknown-linux-gnu
>   Configure host: ip-10-17-148-204
>Configured by: barells
>Configured on: Wed Dec 14 14:22:43 UTC 2011
>   Configure host: ip-10-17-148-204
> Built by: barells
> Built on: Wed Dec 14 14:27:56 UTC 2011
>   Built host: ip-10-17-148-204
>   C bindings: yes
> C++ bindings: yes
>   Fortran77 bindings: yes (all)
>   Fortran90 bindings: yes
>  Fortran90 bindings size: small
>   C compiler: gcc
>  C compiler absolute: /usr/bin/gcc
> C++ compiler: g++
>C++ compiler absolute: /usr/bin/g++
>   Fortran77 compiler: ifort
>   Fortran77 compiler abs: /opt/intel/fce/9.1.040/bin/ifort
>   Fortran90 compiler: ifort
>   Fortran90 compiler abs: /opt/intel/fce/9.1.040/bin/ifort
>  C profiling: yes
>C++ profiling: yes
>  Fortran77 profiling: yes
>  Fortran90 profiling: yes
>   C++ exceptions: no
>   Thread support: posix (mpi: no, progress: no)
>Sparse Groups: no
>   Internal debug support: no
>  MPI parameter check: runtime
> Memory profiling support: no
> Memory debugging support: no
>  libltdl support: yes
>Heterogeneous support: no
>  mpirun default --prefix: no
>  MPI I/O support: yes
>MPI_WTIME support: gettimeofday
> Symbol visibility support: yes
>FT Checkpoint support: no  (checkpoint thread: no)
>MCA backtrace: execinfo (MCA v2.0, API v2.0, Component v1.4.2)
>   MCA memory: ptmalloc2 (MCA v2.0, API v2.0, Component v1.4.2)
>MCA paffinity: linux (MCA v2.0, API v2.0, Component v1.4.2)
>MCA carto: auto_detect (MCA v2.0, API v2.0, Component
> v1.4.2)
>MCA carto: file (MCA v2.0, API v2.0, Component v1.4.2)
>MCA maffinity: first_use (MCA v2.0, API v2.0, Component v1.4.2)
>MCA timer: linux (MCA v2.0, API v2.0, Component v1.4.2)
>  MCA installdirs: env (MCA v2.0, API v2.0, 

Re: [OMPI users] openmpi - gfortran and ifort conflict

2011-12-14 Thread Micah Sklut
I uninstalled 1.4.2 with rpm -e ompi, and now my existing mpi binaries are
working.

Thanks so much for everyone's help.

On Wed, Dec 14, 2011 at 3:12 PM, Tim Prince  wrote:

> On 12/14/2011 12:52 PM, Micah Sklut wrote:
>
>> Hi Gustavo,
>>
>> Here is the output of :
>> barells@ip-10-17-153-123:~> /opt/openmpi/intel/bin/mpif90 -showme
>> gfortran -I/usr/lib64/mpi/gcc/openmpi/include -pthread
>> -I/usr/lib64/mpi/gcc/openmpi/lib64 -L/usr/lib64/mpi/gcc/openmpi/lib64
>> -lmpi_f90 -lmpi_f77 -lmpi -lopen-rte -lopen-pal -ldl
>> -Wl,--export-dynamic -lnsl -lutil -lm -ldl
>>
>> This points to gfortran.
>>
>> I do see what you are saying about the 1.4.2 and 1.4.4 components.
>> I'm not sure why that is, but there seems to be some conflict between the
>> existing OpenMPI installation and the recently installed 1.4.4 that I am
>> trying to build with ifort.
>>
>>  This is one of the reasons for recommending complete removal (rpm -e if
> need be) of any MPI which is on a default path (and setting a clean path)
> before building a new one, as well as choosing a unique install path for
> the new one.
>
> --
> Tim Prince
>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>



-- 
Micah Sklut


Re: [OMPI users] openmpi - gfortran and ifort conflict

2011-12-14 Thread Tim Prince

On 12/14/2011 12:52 PM, Micah Sklut wrote:

Hi Gustavo,

Here is the output of :
barells@ip-10-17-153-123:~> /opt/openmpi/intel/bin/mpif90 -showme
gfortran -I/usr/lib64/mpi/gcc/openmpi/include -pthread
-I/usr/lib64/mpi/gcc/openmpi/lib64 -L/usr/lib64/mpi/gcc/openmpi/lib64
-lmpi_f90 -lmpi_f77 -lmpi -lopen-rte -lopen-pal -ldl
-Wl,--export-dynamic -lnsl -lutil -lm -ldl

This points to gfortran.

I do see what you are saying about the 1.4.2 and 1.4.4 components.
I'm not sure why that is, but there seems to be some conflict between the
existing OpenMPI installation and the recently installed 1.4.4 that I am
trying to build with ifort.

This is one of the reasons for recommending complete removal (rpm -e if 
need be) of any MPI which is on a default path (and setting a clean 
path) before building a new one, as well as choosing a unique install 
path for the new one.


--
Tim Prince


Re: [OMPI users] openmpi - gfortran and ifort conflict

2011-12-14 Thread Tim Prince

On 12/14/2011 1:20 PM, Fernanda Oliveira wrote:

Hi Micah,

I do not know if it is exactly what you need but I know that there are
environment variables to use with intel mpi. They are: I_MPI_CC,
I_MPI_CXX, I_MPI_F77, I_MPI_F90. So, you can set this using 'export'
for bash, for instance or directly when you run.

I use in my bashrc:

export I_MPI_CC=icc
export I_MPI_CXX=icpc
export I_MPI_F77=ifort
export I_MPI_F90=ifort


Let me know if it helps.
Fernanda Oliveira




I didn't see any indication that Intel MPI was in play here.  Of course, 
that's one of the first thoughts, as under Intel MPI,

mpif90 uses gfortran
mpiifort uses ifort
mpicc uses gcc
mpiCC uses g++
mpiicc uses icc
mpiicpc uses icpc
and all the Intel compilers use g++ to find headers and libraries.
The advice to try 'which mpif90' would show whether you fell into this 
bunker.
If you use Intel cluster checker, you will see noncompliance if anyone's 
MPI is on the default paths.  You must set paths explicitly according to 
the MPI you want.  Admittedly, that tool didn't gain a high level of 
adoption.


--
Tim Prince


Re: [OMPI users] Error launching w/ 1.5.3 on IB mthca nodes

2011-12-14 Thread V. Ram
Open MPI InfiniBand gurus and/or Mellanox: could I please get some
assistance with this?  Any suggestions on tunables or debugging
parameters to try?

Thank you very much.

On Mon, Dec 12, 2011, at 10:42 AM, V. Ram wrote:
> Hello,
> 
> We are running a cluster that has a good number of older nodes with
> Mellanox IB HCAs that have the "mthca" device name ("ib_mthca" kernel
> module).
> 
> These adapters are all at firmware level 4.8.917 .
> 
> The Open MPI in use is 1.5.3 , kernel 2.6.39 , x86-64.  Jobs are
> launched/managed using Slurm 2.2.7.  The IB software and drivers
> correspond to OFED 1.5.3.2 , and I've verified that the kernel modules
> in use are all from this OFED version.
> 
> On nodes with the mthca hardware *only*, we get frequent, but
> intermittent job startup failures, with messages like:
> 
> /
> 
> [[19373,1],54][btl_openib_component.c:3320:handle_wc] from compute-c3-07
> to: compute-c3-01 error polling LP CQ with status RECEIVER NOT READY
> RETRY EXCEEDED ERROR status
> number 13 for wr_id 2a25c200 opcode 128 vendor error 135 qp_idx 0
> 
> --
> The OpenFabrics "receiver not ready" retry count on a per-peer
> connection between two MPI processes has been exceeded.  In general,
> this should not happen because Open MPI uses flow control on per-peer
> connections to ensure that receivers are always ready when data is
> sent.
> 
> [further standard error text snipped...]
> 
> Below is some information about the host that raised the error and the
> peer to which it was connected:
> 
>   Local host:   compute-c3-07
>   Local device: mthca0
>   Peer host:compute-c3-01
> 
> You may need to consult with your system administrator to get this
> problem fixed.
> --
> 
> /
> 
> During these job runs, I have monitored the InfiniBand performance
> counters on the endpoints and switch.  No telltale counters for any of
> these ports change during these failed job initiations.
> 
> ibdiagnet works fine and properly enumerates the fabric and related
> performance counters, both from the affected nodes, as well as other
> nodes attached to the IB switch.  The IB connectivity itself seems fine
> from these nodes.
> 
> Other nodes with different HCAs use the same InfiniBand fabric
> continuously without any issue, so I don't think it's the fabric/switch.
> 
> I'm at a loss for what to do next to try and find the root cause of the
> issue.  I suspect something perhaps having to do with the mthca
> support/drivers, but how can I track this down further?
> 
> Thank you,
> 
> V. Ram.

-- 
http://www.fastmail.fm - Choose from over 50 domains or use your own



Re: [OMPI users] openmpi - gfortran and ifort conflict

2011-12-14 Thread Jeff Squyres
On Dec 14, 2011, at 12:52 PM, Micah Sklut wrote:

> I do see what you are saying about the 1.4.2 and 1.4.4 components. 
> I'm not sure why that is, but there seems to be some conflict between the 
> existing OpenMPI installation and the recently installed 1.4.4 that I am 
> trying to build with ifort. 

Did you install 1.4.4 with ifort over a prior 1.4.2 installation that used 
gfortran?

Can you send the output from "make install"?  (please compress)

That should show exactly where the wrapper data file (that specifies things 
like gfortran vs. ifort) was installed.

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] MPI_BCAST and fortran subarrays

2011-12-14 Thread Gustavo Correa
When it comes to intrinsic Fortran-90 functions, or to libraries provided by 
the compiler vendor
[e.g. MKL in the case of Intel], I do agree that they *should* be able to parse 
the array-section
notation and use the correct memory layout.

However, for libraries that are not part of Fortran-90, such as MPI, Lapack, 
FFTW, etc, etc,
which are designed, programmed, and built independently of Fortran-90,
I don't see how the F90 compiler can enforce them to comply with the 
array-section notation
and memory layout.
These libraries may or may not offer a fully Fortran-90 compliant API.
I think MPI doesn't.  
MPI provides an alternative mechanism through user-defined types, 
which is not specific to Fortran-90 programs either.
FFTW has yet another mechanism, not as broad, and focused on arrays/vectors.
Of course the compiler may provide a workaround 
with copy  in/out to/from memory-contiguous temporary arrays, as the Intel 
compiler does.
I wouldn't call it a compiler bug when the compiler doesn't provide this 
workaround.
However, of course the metaphysical nature of what is and is not a bug is 
certainly debatable. :)

Anyway, we seem to agree that this is not an MPI problem.
MPI provides its solution.
Some compilers provide an alternative solution.
One can choose whichever solution is convenient to solve the problem at hand.

Gus Correa

On Dec 14, 2011, at 1:04 PM, David Warren wrote:

> Actually, sub array passing is part of the F90 standard (at least according 
> to every document I can find), and not an Intel extension. So if it doesn't 
> work you should complain to the compiler company. One of the reasons for 
> using it is that the compiler should be optimized for whatever method they 
> chose to use. As there are multiple options in the F90 standard for how 
> arrays get passed, it is not really a good idea to circumvent the official 
> method. Using user defined data types is great as long as the compiler 
> chooses to do a simple pointer pass, however if they use the copy in/out 
> option you will be making much larger temporary arrays than if you just pass 
> the correct subarray. Anyway, this is not really an MPI issue as much as an 
> F90 bug in your compiler.
> 
> On 12/14/11 08:57, Gustavo Correa wrote:
>> Hi Patrick
>> 
>> From my mere MPI and Fortran-90 user point of view,
>> I think that the solution offered by the MPI standard [at least up to MPI-2]
>> to address the problem of non-contiguous memory layouts is to use MPI 
>> user-defined types,
>> as I pointed out in my previous email.
>> I like this solution because it is portable and doesn't require the 
>> allocation of
>> temporary arrays, and the additional programming effort is not that big.
>> 
>> As far as I know, MPI doesn't parse or comply with the Fortran-90
>> array-section notation and syntax.  All buffers in the MPI calls are 
>> pointers/addresses to the
>> first element on the buffer, which will  be tracked according to the number 
>> of elements passed
>> to the MPI call, and according to the MPI type passed to the MPI routine 
>> [which should be
>> a user-defined type, if you need to implement a fancy memory layout].
>> 
>> That MPI doesn't understand Fortran-90 array-sections doesn't surprise me so 
>> much.
>> I think Lapack doesn't do it either, and many other legitimate Fortran 
>> libraries don't
>> 'understand' array-sections either.
>> FFTW, for instance, goes a long way to define its own mechanism to
>> specify fancy memory layouts independently of the Fortran-90 array-section 
>> notation.
>> Amongst the libraries with Fortran interfaces that I've used, MPI probably 
>> provides the most
>> flexible and complete mechanism to describe memory layout, through 
>> user-defined types.
>> In your case I think the work required to declare a MPI_TYPE_VECTOR to 
>> handle your
>> table 'tab' is not really big or complicated.
>> 
>> As two other list subscribers mentioned, and you already tried,
>> the Intel compiler seems to offer an extension
>> to deal with this, and shortcut the use of MPI user-defined types.
>> This Intel compiler extension apparently uses under the hood the same idea 
>> of a
>> temporary array that you used programatically in one of the 'bide' program 
>> versions
>> that you sent in your original message.
>> The temporary array is used to ship data to/from contiguous/non-contiguous 
>> memory before/after the MPI call is invoked.
>> I presume this Intel compiler extension would work with libraries other than 
>> MPI,
>> whenever the library doesn't understand the Fortran-90 array-section 
>> notation.
>> I never used this extension, though.
>> For one thing, this solution may not be portable to other compilers.
>> Another aspect to consider is how much 'under the hood memory allocation' 
>> this solution
>> would require if the array you pass to MPI_BCAST is really big,
>> and how much this may impact performance.
>> 
>> I hope this helps,
>> Gus Correa
>> 
>> On Dec 14, 2011, at 11:03 AM, 

Re: [OMPI users] openmpi - gfortran and ifort conflict

2011-12-14 Thread Fernanda Oliveira
Hi Micah,

I do not know if it is exactly what you need but I know that there are
environment variables to use with intel mpi. They are: I_MPI_CC,
I_MPI_CXX, I_MPI_F77, I_MPI_F90. So, you can set this using 'export'
for bash, for instance or directly when you run.

I use in my bashrc:

export I_MPI_CC=icc
export I_MPI_CXX=icpc
export I_MPI_F77=ifort
export I_MPI_F90=ifort


Let me know if it helps.
Fernanda Oliveira


2011/12/14 Micah Sklut :
> Hi Gustavo,
>
> Here is the output of :
> barells@ip-10-17-153-123:~> /opt/openmpi/intel/bin/mpif90 -showme
> gfortran -I/usr/lib64/mpi/gcc/openmpi/include -pthread
> -I/usr/lib64/mpi/gcc/openmpi/lib64 -L/usr/lib64/mpi/gcc/openmpi/lib64
> -lmpi_f90 -lmpi_f77 -lmpi -lopen-rte -lopen-pal -ldl -Wl,--export-dynamic
> -lnsl -lutil -lm -ldl
>
> This points to gfortran.
>
> I do see what you are saying about the 1.4.2 and 1.4.4 components.
> I'm not sure why that is, but there seems to be some conflict between the
> existing OpenMPI installation and the recently installed 1.4.4 that I am
> trying to build with ifort.
>
>
> On Wed, Dec 14, 2011 at 12:43 PM, Gustavo Correa 
> wrote:
>>
>> How about the output of this?
>>
>> /opt/openmpi/intel/bin/mpif90 -showme
>>
>> Anyway, something seems to be wrong with your OpenMPI installation.
>> Just read the output of your ompi_info in your email below.
>> You will see that the OpenMPI version is 1.4.4.
>> However, most components are version 1.4.2.
>> Do you agree?
>>
>> I would download the OpenMPI 1.4.4 tarball again and start fresh.
>> Untar the tarball in a brand new directory, don't overwrite old stuff.
>> Also, every time your OpenMPI build fails, or if you want to change
>> compilers
>> [say from gfortran to ifort],
>> do a 'make distclean' to cleanup any leftovers of previous builds,
>> and change the destination directory in --prefix= , to install in a
>> different location.
>>
>> I hope this helps,
>> Gus Correa
>>
>> On Dec 14, 2011, at 12:21 PM, Micah Sklut wrote:
>>
>> > Hi Gustavo,
>> >
>> > I did read Prince's email:
>> >
>> > When I do "which mpif90", i get:
>> > /opt/openmpi/intel/bin/mpif90
>> > which is the desired directory/binary
>> >
>> > As I mentioned, the config log file indicated it was using ifort, and
>> > had no mention of gfortran.
>> > Below is the output from ompi_info. It shows reference to the correct
>> > ifort compiler. But the mpif90 wrapper still yields a gfortran
>> > compiler.
>> > -->
>> > barells@ip-10-17-153-123:~> ompi_info
>> >                  Package: Open MPI barells@ip-10-17-148-204 Distribution
>> >                 Open MPI: 1.4.4
>> >    Open MPI SVN revision: r25188
>> >    Open MPI release date: Sep 27, 2011
>> >                 Open RTE: 1.4.4
>> >    Open RTE SVN revision: r25188
>> >    Open RTE release date: Sep 27, 2011
>> >                     OPAL: 1.4.4
>> >        OPAL SVN revision: r25188
>> >        OPAL release date: Sep 27, 2011
>> >             Ident string: 1.4.4
>> >                   Prefix: /usr/lib64/mpi/gcc/openmpi
>> >  Configured architecture: x86_64-unknown-linux-gnu
>> >           Configure host: ip-10-17-148-204
>> >            Configured by: barells
>> >            Configured on: Wed Dec 14 14:22:43 UTC 2011
>> >           Configure host: ip-10-17-148-204
>> >                 Built by: barells
>> >                 Built on: Wed Dec 14 14:27:56 UTC 2011
>> >               Built host: ip-10-17-148-204
>> >               C bindings: yes
>> >             C++ bindings: yes
>> >       Fortran77 bindings: yes (all)
>> >       Fortran90 bindings: yes
>> >  Fortran90 bindings size: small
>> >               C compiler: gcc
>> >      C compiler absolute: /usr/bin/gcc
>> >             C++ compiler: g++
>> >    C++ compiler absolute: /usr/bin/g++
>> >       Fortran77 compiler: ifort
>> >   Fortran77 compiler abs: /opt/intel/fce/9.1.040/bin/ifort
>> >       Fortran90 compiler: ifort
>> >   Fortran90 compiler abs: /opt/intel/fce/9.1.040/bin/ifort
>> >              C profiling: yes
>> >            C++ profiling: yes
>> >      Fortran77 profiling: yes
>> >      Fortran90 profiling: yes
>> >           C++ exceptions: no
>> >           Thread support: posix (mpi: no, progress: no)
>> >            Sparse Groups: no
>> >   Internal debug support: no
>> >      MPI parameter check: runtime
>> > Memory profiling support: no
>> > Memory debugging support: no
>> >          libltdl support: yes
>> >    Heterogeneous support: no
>> >  mpirun default --prefix: no
>> >          MPI I/O support: yes
>> >        MPI_WTIME support: gettimeofday
>> > Symbol visibility support: yes
>> >    FT Checkpoint support: no  (checkpoint thread: no)
>> >            MCA backtrace: execinfo (MCA v2.0, API v2.0, Component
>> > v1.4.2)
>> >               MCA memory: ptmalloc2 (MCA v2.0, API v2.0, Component
>> > v1.4.2)
>> >            MCA paffinity: linux (MCA v2.0, API v2.0, Component v1.4.2)
>> >                MCA carto: auto_detect (MCA v2.0, API v2.0, Component
>> > 

Re: [OMPI users] MPI_BCAST and fortran subarrays

2011-12-14 Thread David Warren
Actually, sub array passing is part of the F90 standard (at least 
according to every document I can find), and not an Intel extension. So 
if it doesn't work you should complain to the compiler company. One of 
the reasons for using it is that the compiler should be optimized for 
whatever method they chose to use. As there are multiple options in the 
F90 standard for how arrays get passed, it is not really a good idea to 
circumvent the official method. Using user defined data types is great 
as long as the compiler chooses to do a simple pointer pass, however if 
they use the copy in/out option you will be making much larger temporary 
arrays than if you just pass the correct subarray. Anyway, this is not 
really an MPI issue as much as an F90 bug in your compiler.


On 12/14/11 08:57, Gustavo Correa wrote:

Hi Patrick

From my mere MPI and Fortran-90 user point of view,
I think that the solution offered by the MPI standard [at least up to MPI-2]
to address the problem of non-contiguous memory layouts is to use MPI 
user-defined types,
as I pointed out in my previous email.
I like this solution because it is portable and doesn't require the allocation 
of
temporary arrays, and the additional programming effort is not that big.

As far as I know, MPI doesn't parse or comply with the Fortran-90
array-section notation and syntax.  All buffers in the MPI calls are 
pointers/addresses to the
first element on the buffer, which will  be tracked according to the number of 
elements passed
to the MPI call, and according to the MPI type passed to the MPI routine [which 
should be
a user-defined type, if you need to implement a fancy memory layout].

That MPI doesn't understand Fortran-90 array-sections doesn't surprise me so 
much.
I think Lapack doesn't do it either, and many other legitimate Fortran 
libraries don't
'understand' array-sections either.
FFTW, for instance, goes a long way to define its own mechanism to
specify fancy memory layouts independently of the Fortran-90 array-section 
notation.
Amongst the libraries with Fortran interfaces that I've used, MPI probably 
provides the most
flexible and complete mechanism to describe memory layout, through user-defined 
types.
In your case I think the work required to declare a MPI_TYPE_VECTOR to handle 
your
table 'tab' is not really big or complicated.

As two other list subscribers mentioned, and you already tried,
the Intel compiler seems to offer an extension
to deal with this, and shortcut the use of MPI user-defined types.
This Intel compiler extension apparently uses under the hood the same idea of a
temporary array that you used programatically in one of the 'bide' program 
versions
that you sent in your original message.
The temporary array is used to ship data to/from contiguous/non-contiguous 
memory before/after the MPI call is invoked.
I presume this Intel compiler extension would work with libraries other than 
MPI,
whenever the library doesn't understand the Fortran-90 array-section notation.
I never used this extension, though.
For one thing, this solution may not be portable to other compilers.
Another aspect to consider is how much 'under the hood memory allocation' this 
solution
would require if the array you pass to MPI_BCAST is really big,
and how much this may impact performance.

I hope this helps,
Gus Correa

On Dec 14, 2011, at 11:03 AM, Patrick Begou wrote:

   

Thanks all for your anwers. yes, I understand well that it is a non contiguous 
memory access problem as the MPI_BCAST should wait for a pointer on a valid 
memory  zone. But I'm surprised that with the MPI module usage Fortran does not 
hide this discontinuity in a contiguous temporary copy of the array. I've spent 
some time to build openMPI with g++/gcc/ifort (to create the right mpi module) 
and ran some additional tests:


Default OpenMPI is openmpi-1.2.8-17.4.x86_64

# module load openmpi
# mpif90 ess.F90&&  mpirun -np 4 ./a.out
0   1   2   3   0   1   
2   3   0   1   2   3   0   
1   2   3
# module unload openmpi
The result is OK but sometimes it hangs (when I request a lot of processes)

With OpenMPI 1.4.4 and gfortran from gcc-fortran-4.5-19.1.x86_64

# module load openmpi-1.4.4-gcc-gfortran
# mpif90 ess.F90&&  mpirun -np 4 ./a.out
0  -1  -1  -1   0  -1   
   -1  -1   0  -1  -1  -1   0   
   -1  -1  -1
# module unload openmpi-1.4.4-gcc-gfortran
Only node 0 updates the global array with its subarray. (I only print node 0's 
result)


With OpenMPI 1.4.4 and ifort 10.1.018 (yes, it's quite old, I have the latest 
one but it isn't installed!)

# module load openmpi-1.4.4-gcc-intel
# mpif90 ess.F90&&  mpirun -np 4 ./a.out
ess.F90(15): (col. 5) remark: LOOP WAS VECTORIZED.
0  -1  -1   

Re: [OMPI users] openmpi - gfortran and ifort conflict

2011-12-14 Thread Micah Sklut
Hi Gustavo,

Here is the output of :
barells@ip-10-17-153-123:~> /opt/openmpi/intel/bin/mpif90 -showme
gfortran -I/usr/lib64/mpi/gcc/openmpi/include -pthread
-I/usr/lib64/mpi/gcc/openmpi/lib64 -L/usr/lib64/mpi/gcc/openmpi/lib64
-lmpi_f90 -lmpi_f77 -lmpi -lopen-rte -lopen-pal -ldl -Wl,--export-dynamic
-lnsl -lutil -lm -ldl

This points to gfortran.

I do see what you are saying about the 1.4.2 and 1.4.4 components.
I'm not sure why that is, but there seems to be some conflict between the
existing OpenMPI installation and the recently installed 1.4.4 that I am
trying to build with ifort.


On Wed, Dec 14, 2011 at 12:43 PM, Gustavo Correa wrote:

> How about the output of this?
>
> /opt/openmpi/intel/bin/mpif90 -showme
>
> Anyway, something seems to be wrong with your OpenMPI installation.
> Just read the output of your ompi_info in your email below.
> You will see that the OpenMPI version is 1.4.4.
> However, most components are version 1.4.2.
> Do you agree?
>
> I would download the OpenMPI 1.4.4 tarball again and start fresh.
> Untar the tarball in a brand new directory, don't overwrite old stuff.
> Also, every time your OpenMPI build fails, or if you want to change
> compilers
> [say from gfortran to ifort],
> do a 'make distclean' to cleanup any leftovers of previous builds,
> and change the destination directory in --prefix= , to install in a
> different location.
>
> I hope this helps,
> Gus Correa
>
> On Dec 14, 2011, at 12:21 PM, Micah Sklut wrote:
>
> > Hi Gustavo,
> >
> > I did read Prince's email:
> >
> > When I do "which mpif90", i get:
> > /opt/openmpi/intel/bin/mpif90
> > which is the desired directory/binary
> >
> > As I mentioned, the config log file indicated it was using ifort, and
> had no mention of gfortran.
> > Below is the output from ompi_info. It shows reference to the correct
> > ifort compiler. But the mpif90 wrapper still yields a gfortran
> > compiler.
> > -->
> > barells@ip-10-17-153-123:~> ompi_info
> >  Package: Open MPI barells@ip-10-17-148-204 Distribution
> > Open MPI: 1.4.4
> >Open MPI SVN revision: r25188
> >Open MPI release date: Sep 27, 2011
> > Open RTE: 1.4.4
> >Open RTE SVN revision: r25188
> >Open RTE release date: Sep 27, 2011
> > OPAL: 1.4.4
> >OPAL SVN revision: r25188
> >OPAL release date: Sep 27, 2011
> > Ident string: 1.4.4
> >   Prefix: /usr/lib64/mpi/gcc/openmpi
> >  Configured architecture: x86_64-unknown-linux-gnu
> >   Configure host: ip-10-17-148-204
> >Configured by: barells
> >Configured on: Wed Dec 14 14:22:43 UTC 2011
> >   Configure host: ip-10-17-148-204
> > Built by: barells
> > Built on: Wed Dec 14 14:27:56 UTC 2011
> >   Built host: ip-10-17-148-204
> >   C bindings: yes
> > C++ bindings: yes
> >   Fortran77 bindings: yes (all)
> >   Fortran90 bindings: yes
> >  Fortran90 bindings size: small
> >   C compiler: gcc
> >  C compiler absolute: /usr/bin/gcc
> > C++ compiler: g++
> >C++ compiler absolute: /usr/bin/g++
> >   Fortran77 compiler: ifort
> >   Fortran77 compiler abs: /opt/intel/fce/9.1.040/bin/ifort
> >   Fortran90 compiler: ifort
> >   Fortran90 compiler abs: /opt/intel/fce/9.1.040/bin/ifort
> >  C profiling: yes
> >C++ profiling: yes
> >  Fortran77 profiling: yes
> >  Fortran90 profiling: yes
> >   C++ exceptions: no
> >   Thread support: posix (mpi: no, progress: no)
> >Sparse Groups: no
> >   Internal debug support: no
> >  MPI parameter check: runtime
> > Memory profiling support: no
> > Memory debugging support: no
> >  libltdl support: yes
> >Heterogeneous support: no
> >  mpirun default --prefix: no
> >  MPI I/O support: yes
> >MPI_WTIME support: gettimeofday
> > Symbol visibility support: yes
> >FT Checkpoint support: no  (checkpoint thread: no)
> >MCA backtrace: execinfo (MCA v2.0, API v2.0, Component v1.4.2)
> >   MCA memory: ptmalloc2 (MCA v2.0, API v2.0, Component
> v1.4.2)
> >MCA paffinity: linux (MCA v2.0, API v2.0, Component v1.4.2)
> >MCA carto: auto_detect (MCA v2.0, API v2.0, Component
> v1.4.2)
> >MCA carto: file (MCA v2.0, API v2.0, Component v1.4.2)
> >MCA maffinity: first_use (MCA v2.0, API v2.0, Component
> v1.4.2)
> >MCA timer: linux (MCA v2.0, API v2.0, Component v1.4.2)
> >  MCA installdirs: env (MCA v2.0, API v2.0, Component v1.4.2)
> >  MCA installdirs: config (MCA v2.0, API v2.0, Component v1.4.2)
> >  MCA dpm: orte (MCA v2.0, API v2.0, Component v1.4.2)
> >   MCA pubsub: orte (MCA v2.0, API v2.0, Component v1.4.2)
> >MCA allocator: basic (MCA v2.0, API v2.0, Component v1.4.2)
> >   

Re: [OMPI users] openmpi - gfortran and ifort conflict

2011-12-14 Thread Gustavo Correa
How about the output of this?

/opt/openmpi/intel/bin/mpif90 -showme

Anyway, something seems to be wrong with your OpenMPI installation.
Just read the output of your ompi_info in your email below.
You will see that the OpenMPI version is 1.4.4.
However, most components are version 1.4.2.
Do you agree?

I would download the OpenMPI 1.4.4 tarball again and start fresh.
Untar the tarball in a brand new directory, don't overwrite old stuff.
Also, every time your OpenMPI build fails, or if you want to change compilers 
[say from gfortran to ifort],
do a 'make distclean' to cleanup any leftovers of previous builds,
and change the destination directory in --prefix= , to install in a different 
location.

I hope this helps,
Gus Correa

On Dec 14, 2011, at 12:21 PM, Micah Sklut wrote:

> Hi Gustavo, 
> 
> I did read Prince's email: 
> 
> When I do "which mpif90", i get: 
> /opt/openmpi/intel/bin/mpif90
> which is the desired directory/binary
> 
> As I mentioned, the config log file indicated it was using ifort, and had no 
> mention of gfortran. 
> Below is the output from ompi_info. It shows reference to the correct ifort 
> compiler. But the mpif90 wrapper still yields a gfortran compiler.
> -->
> barells@ip-10-17-153-123:~> ompi_info
>  Package: Open MPI barells@ip-10-17-148-204 Distribution
> Open MPI: 1.4.4
>Open MPI SVN revision: r25188
>Open MPI release date: Sep 27, 2011
> Open RTE: 1.4.4
>Open RTE SVN revision: r25188
>Open RTE release date: Sep 27, 2011
> OPAL: 1.4.4
>OPAL SVN revision: r25188
>OPAL release date: Sep 27, 2011
> Ident string: 1.4.4
>   Prefix: /usr/lib64/mpi/gcc/openmpi
>  Configured architecture: x86_64-unknown-linux-gnu
>   Configure host: ip-10-17-148-204
>Configured by: barells
>Configured on: Wed Dec 14 14:22:43 UTC 2011
>   Configure host: ip-10-17-148-204
> Built by: barells
> Built on: Wed Dec 14 14:27:56 UTC 2011
>   Built host: ip-10-17-148-204
>   C bindings: yes
> C++ bindings: yes
>   Fortran77 bindings: yes (all)
>   Fortran90 bindings: yes
>  Fortran90 bindings size: small
>   C compiler: gcc
>  C compiler absolute: /usr/bin/gcc
> C++ compiler: g++
>C++ compiler absolute: /usr/bin/g++
>   Fortran77 compiler: ifort
>   Fortran77 compiler abs: /opt/intel/fce/9.1.040/bin/ifort
>   Fortran90 compiler: ifort
>   Fortran90 compiler abs: /opt/intel/fce/9.1.040/bin/ifort
>  C profiling: yes
>C++ profiling: yes
>  Fortran77 profiling: yes
>  Fortran90 profiling: yes
>   C++ exceptions: no
>   Thread support: posix (mpi: no, progress: no)
>Sparse Groups: no
>   Internal debug support: no
>  MPI parameter check: runtime
> Memory profiling support: no
> Memory debugging support: no
>  libltdl support: yes
>Heterogeneous support: no
>  mpirun default --prefix: no
>  MPI I/O support: yes
>MPI_WTIME support: gettimeofday
> Symbol visibility support: yes
>FT Checkpoint support: no  (checkpoint thread: no)
>MCA backtrace: execinfo (MCA v2.0, API v2.0, Component v1.4.2)
>   MCA memory: ptmalloc2 (MCA v2.0, API v2.0, Component v1.4.2)
>MCA paffinity: linux (MCA v2.0, API v2.0, Component v1.4.2)
>MCA carto: auto_detect (MCA v2.0, API v2.0, Component v1.4.2)
>MCA carto: file (MCA v2.0, API v2.0, Component v1.4.2)
>MCA maffinity: first_use (MCA v2.0, API v2.0, Component v1.4.2)
>MCA timer: linux (MCA v2.0, API v2.0, Component v1.4.2)
>  MCA installdirs: env (MCA v2.0, API v2.0, Component v1.4.2)
>  MCA installdirs: config (MCA v2.0, API v2.0, Component v1.4.2)
>  MCA dpm: orte (MCA v2.0, API v2.0, Component v1.4.2)
>   MCA pubsub: orte (MCA v2.0, API v2.0, Component v1.4.2)
>MCA allocator: basic (MCA v2.0, API v2.0, Component v1.4.2)
>MCA allocator: bucket (MCA v2.0, API v2.0, Component v1.4.2)
> MCA coll: basic (MCA v2.0, API v2.0, Component v1.4.2)
> MCA coll: hierarch (MCA v2.0, API v2.0, Component v1.4.2)
> MCA coll: inter (MCA v2.0, API v2.0, Component v1.4.2)
> MCA coll: self (MCA v2.0, API v2.0, Component v1.4.2)
> MCA coll: sm (MCA v2.0, API v2.0, Component v1.4.2)
> MCA coll: sync (MCA v2.0, API v2.0, Component v1.4.2)
> MCA coll: tuned (MCA v2.0, API v2.0, Component v1.4.2)
>   MCA io: romio (MCA v2.0, API v2.0, Component v1.4.2)
>MCA mpool: fake (MCA v2.0, API v2.0, Component v1.4.2)
>MCA mpool: rdma (MCA v2.0, API v2.0, Component v1.4.2)
>MCA mpool: sm (MCA v2.0, API 

Re: [OMPI users] openmpi - gfortran and ifort conflict

2011-12-14 Thread Micah Sklut
Hi Gustavo,

I did read Prince's email:

When I do "which mpif90", i get:
/opt/openmpi/intel/bin/mpif90
which is the desired directory/binary

As I mentioned, the config log file indicated it was using ifort, and had
no mention of gfortran.
Below is the output from ompi_info. It shows reference to the correct ifort
compiler. But the mpif90 wrapper still yields a gfortran compiler.
-->
barells@ip-10-17-153-123:~> ompi_info
 Package: Open MPI barells@ip-10-17-148-204 Distribution
Open MPI: 1.4.4
   Open MPI SVN revision: r25188
   Open MPI release date: Sep 27, 2011
Open RTE: 1.4.4
   Open RTE SVN revision: r25188
   Open RTE release date: Sep 27, 2011
OPAL: 1.4.4
   OPAL SVN revision: r25188
   OPAL release date: Sep 27, 2011
Ident string: 1.4.4
  Prefix: /usr/lib64/mpi/gcc/openmpi
 Configured architecture: x86_64-unknown-linux-gnu
  Configure host: ip-10-17-148-204
   Configured by: barells
   Configured on: Wed Dec 14 14:22:43 UTC 2011
  Configure host: ip-10-17-148-204
Built by: barells
Built on: Wed Dec 14 14:27:56 UTC 2011
  Built host: ip-10-17-148-204
  C bindings: yes
C++ bindings: yes
  Fortran77 bindings: yes (all)
  Fortran90 bindings: yes
 Fortran90 bindings size: small
  C compiler: gcc
 C compiler absolute: /usr/bin/gcc
C++ compiler: g++
   C++ compiler absolute: /usr/bin/g++
  Fortran77 compiler: ifort
  Fortran77 compiler abs: /opt/intel/fce/9.1.040/bin/ifort
  Fortran90 compiler: ifort
  Fortran90 compiler abs: /opt/intel/fce/9.1.040/bin/ifort
 C profiling: yes
   C++ profiling: yes
 Fortran77 profiling: yes
 Fortran90 profiling: yes
  C++ exceptions: no
  Thread support: posix (mpi: no, progress: no)
   Sparse Groups: no
  Internal debug support: no
 MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
 libltdl support: yes
   Heterogeneous support: no
 mpirun default --prefix: no
 MPI I/O support: yes
   MPI_WTIME support: gettimeofday
Symbol visibility support: yes
   FT Checkpoint support: no  (checkpoint thread: no)
   MCA backtrace: execinfo (MCA v2.0, API v2.0, Component v1.4.2)
  MCA memory: ptmalloc2 (MCA v2.0, API v2.0, Component v1.4.2)
   MCA paffinity: linux (MCA v2.0, API v2.0, Component v1.4.2)
   MCA carto: auto_detect (MCA v2.0, API v2.0, Component v1.4.2)
   MCA carto: file (MCA v2.0, API v2.0, Component v1.4.2)
   MCA maffinity: first_use (MCA v2.0, API v2.0, Component v1.4.2)
   MCA timer: linux (MCA v2.0, API v2.0, Component v1.4.2)
 MCA installdirs: env (MCA v2.0, API v2.0, Component v1.4.2)
 MCA installdirs: config (MCA v2.0, API v2.0, Component v1.4.2)
 MCA dpm: orte (MCA v2.0, API v2.0, Component v1.4.2)
  MCA pubsub: orte (MCA v2.0, API v2.0, Component v1.4.2)
   MCA allocator: basic (MCA v2.0, API v2.0, Component v1.4.2)
   MCA allocator: bucket (MCA v2.0, API v2.0, Component v1.4.2)
MCA coll: basic (MCA v2.0, API v2.0, Component v1.4.2)
MCA coll: hierarch (MCA v2.0, API v2.0, Component v1.4.2)
MCA coll: inter (MCA v2.0, API v2.0, Component v1.4.2)
MCA coll: self (MCA v2.0, API v2.0, Component v1.4.2)
MCA coll: sm (MCA v2.0, API v2.0, Component v1.4.2)
MCA coll: sync (MCA v2.0, API v2.0, Component v1.4.2)
MCA coll: tuned (MCA v2.0, API v2.0, Component v1.4.2)
  MCA io: romio (MCA v2.0, API v2.0, Component v1.4.2)
   MCA mpool: fake (MCA v2.0, API v2.0, Component v1.4.2)
   MCA mpool: rdma (MCA v2.0, API v2.0, Component v1.4.2)
   MCA mpool: sm (MCA v2.0, API v2.0, Component v1.4.2)
 MCA pml: cm (MCA v2.0, API v2.0, Component v1.4.2)
 MCA pml: csum (MCA v2.0, API v2.0, Component v1.4.2)
 MCA pml: ob1 (MCA v2.0, API v2.0, Component v1.4.2)
 MCA pml: v (MCA v2.0, API v2.0, Component v1.4.2)
 MCA bml: r2 (MCA v2.0, API v2.0, Component v1.4.2)
  MCA rcache: vma (MCA v2.0, API v2.0, Component v1.4.2)
 MCA btl: ofud (MCA v2.0, API v2.0, Component v1.4.2)
 MCA btl: openib (MCA v2.0, API v2.0, Component v1.4.2)
 MCA btl: self (MCA v2.0, API v2.0, Component v1.4.2)
 MCA btl: sm (MCA v2.0, API v2.0, Component v1.4.2)
 MCA btl: tcp (MCA v2.0, API v2.0, Component v1.4.2)
 MCA btl: udapl (MCA v2.0, API v2.0, Component v1.4.2)
MCA topo: unity (MCA v2.0, API v2.0, Component v1.4.2)
 MCA osc: pt2pt (MCA v2.0, API 

Re: [OMPI users] openmpi - gfortran and ifort conflict

2011-12-14 Thread Gustavo Correa
Hi Micah

Did you read Tim Prince's email to you?  Check it out.

Best thing is to set your environment variables [PATH, LD_LIBRARY_PATH, intel 
setup] 
in your initialization file, .profile/.bashrc or .[t]cshrc.

What is the output of 'ompi_info'? [From your ifort-built OpenMPI.]
Does it show ifort or gfortran?

I hope this helps,
Gus Correa

On Dec 14, 2011, at 11:21 AM, Micah Sklut wrote:

> Thanks for your thoughts, 
> 
> It would certainly appear that it is a PATH issue, but I still haven't 
> figured it out. 
> 
> When I type the ifort command, ifort does run. 
> The intel path is in my PATH and is the first directory listed. 
> 
> Looking at the configure.log, there is nothing indicating the use of, or even 
> mentioning, "gfortran".  
> 
> gfortran is in the /usr/bin directory, which is in the PATH as well. 
> 
> Any other suggestions of things to look for? 
> 
> Thank you, 
> 
> On Wed, Dec 14, 2011 at 11:05 AM, Gustavo Correa  
> wrote:
> Hi Micah
> 
> Is  ifort in your PATH?
> If not, the OpenMPI configure script will use any fortran compiler it finds 
> first, which may be gfortran.
> You need to run the Intel compiler startup script before you run the OpenMPI 
> configure.
> The easy thing to do is to source the Intel script inside your 
> .profile/.bashrc or .[t]cshrc file.
> I hope this helps,
> 
> Gus Correa
> 
> On Dec 14, 2011, at 9:49 AM, Micah Sklut wrote:
> 
> > Hi All,
> >
> > I have installed OpenMPI with gfortran, but am now attempting to install 
> > OpenMPI with ifort.
> >
> > I have run the following configuration:
> > ./configure --prefix=/opt/openmpi/intel CC=gcc CXX=g++ F77=ifort FC=ifort
> >
> > The install works successfully, but when I run 
> > /opt/openmpi/intel/bin/mpif90, it runs as gfortran.
> > Oddly, when I am user: root, the same mpif90 runs as ifort.
> >
> > Can someone please alleviate my confusion as to why mpif90 is not running 
> > as ifort?
> >
> > Thank you for your suggestions,
> >
> > --
> > Micah
> >
> >
> > ___
> > users mailing list
> > us...@open-mpi.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
> 
> 
> 
> -- 
> Micah Sklut
> 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] MPI_BCAST and fortran subarrays

2011-12-14 Thread Gustavo Correa
Hi Patrick

From my mere MPI and Fortran-90 user point of view,
I think that the solution offered by the MPI standard [at least up to MPI-2]
to address the problem of non-contiguous memory layouts is to use MPI 
user-defined types,
as I pointed out in my previous email.
I like this solution because it is portable and doesn't require the allocation 
of 
temporary arrays, and the additional programming effort is not that big.

As far as I know, MPI doesn't parse or comply with the Fortran-90 
array-section notation and syntax.  All buffers in the MPI calls are 
pointers/addresses to the
first element on the buffer, which will  be tracked according to the number of 
elements passed
to the MPI call, and according to the MPI type passed to the MPI routine [which 
should be
a user-defined type, if you need to implement a fancy memory layout].

That MPI doesn't understand Fortran-90 array-sections doesn't surprise me so 
much.
I think Lapack doesn't do it either, and many other legitimate Fortran 
libraries don't 
'understand' array-sections either.
FFTW, for instance, goes a long way to define its own mechanism to 
specify fancy memory layouts independently of the Fortran-90 array-section 
notation.
Amongst the libraries with Fortran interfaces that I've used, MPI probably 
provides the most
flexible and complete mechanism to describe memory layout, through user-defined 
types.
In your case I think the work required to declare a MPI_TYPE_VECTOR to handle 
your
table 'tab' is not really big or complicated.
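
For instance, here is a minimal sketch of that approach, written with the C
bindings for brevity (the Fortran MPI_TYPE_VECTOR / MPI_TYPE_COMMIT calls take
the same arguments plus an ierror argument).  The 4x10 'tab' layout is invented
for the example, and a strided column of a row-major C array plays the role of
a non-contiguous Fortran array section:

/* Sketch only: describe a non-contiguous slice once with a user-defined type,
 * then broadcast it with no temporary copy. */
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, i, j;
    int tab[4][10];              /* row-major in C, so a column is strided */
    MPI_Datatype column;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (i = 0; i < 4; i++)
        for (j = 0; j < 10; j++)
            tab[i][j] = (rank == 0) ? 10 * i + j : -1;

    /* One column of tab: 4 blocks of 1 int, separated by a stride of 10 ints. */
    MPI_Type_vector(4, 1, 10, MPI_INT, &column);
    MPI_Type_commit(&column);

    /* Broadcast column 2 of rank 0's tab into everyone's tab. */
    MPI_Bcast(&tab[0][2], 1, column, 0, MPI_COMM_WORLD);

    MPI_Type_free(&column);
    MPI_Finalize();
    return 0;
}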

As two other list subscribers mentioned, and you already tried, 
the Intel compiler seems to offer an extension
to deal with this, and shortcut the use of MPI user-defined types.
This Intel compiler extension apparently uses under the hood the same idea of a 
temporary array that you used programatically in one of the 'bide' program 
versions 
that you sent in your original message.
The temporary array is used to ship data to/from contiguous/non-contiguous 
memory before/after the MPI call is invoked.
I presume this Intel compiler extension would work with libraries other than 
MPI,
whenever the library doesn't understand the Fortran-90 array-section notation.
I never used this extension, though.
For one thing, this solution may not be portable to other compilers.
Another aspect to consider is how much 'under the hood memory allocation' this 
solution 
would require if the array you pass to MPI_BCAST is really big, 
and how much this may impact performance.

I hope this helps,
Gus Correa

On Dec 14, 2011, at 11:03 AM, Patrick Begou wrote:

> Thanks all for your anwers. yes, I understand well that it is a non 
> contiguous memory access problem as the MPI_BCAST should wait for a pointer 
> on a valid memory  zone. But I'm surprised that with the MPI module usage 
> Fortran does not hide this discontinuity in a contiguous temporary copy of 
> the array. I've spent some time to build openMPI with g++/gcc/ifort (to 
> create the right mpi module) and ran some additional tests:
> 
> 
> Default OpenMPI is openmpi-1.2.8-17.4.x86_64
> 
> # module load openmpi
> # mpif90 ess.F90 && mpirun -np 4 ./a.out
>0   1   2   3   0   1  
>  2   3   0   1   2   3   
> 0   1   2   3
> # module unload openmpi
> The result is OK but sometimes it hangs (when I request a lot of processes)
> 
> With OpenMPI 1.4.4 and gfortran from gcc-fortran-4.5-19.1.x86_64
> 
> # module load openmpi-1.4.4-gcc-gfortran
> # mpif90 ess.F90 && mpirun -np 4 ./a.out
>0  -1  -1  -1   0  -1  
> -1  -1   0  -1  -1  -1   
> 0  -1  -1  -1
> # module unload openmpi-1.4.4-gcc-gfortran
> Only node 0 updates the global array with its subarray. (I only print node 0's 
> result)
> 
> 
> With OpenMPI 1.4.4 and ifort 10.1.018 (yes, it's quite old, I have the latest 
> one but it isn't installed!)
> 
> # module load openmpi-1.4.4-gcc-intel
> # mpif90 ess.F90 && mpirun -np 4 ./a.out
> ess.F90(15): (col. 5) remark: LOOP WAS VECTORIZED.
>0  -1  -1  -1   0  -1
>   -1  -1   0  -1  -1  -1
>0  -1  -1  -1
> 
> # mpif90 -check arg_temp_created ess.F90 && mpirun -np 4 ./a.out
> gives a lot of messages like:
> forrtl: warning (402): fort: (1): In call to MPI_BCAST1DI4, an array 
> temporary was created for argument #1
> 
> So a temporary array is created for each call. So where is the problem ?
> 
> About the fortran compiler, I'm using similar behavior (non contiguous 
> subarrays) in MPI_sendrecv calls and all is working fine: I ran some 
> intensive tests from 1 to 128 processes on my quad-core workstation. This 
> Fortran solution was easier than creating user defined data types.
> 

Re: [OMPI users] openmpi - gfortran and ifort conflict

2011-12-14 Thread Gustavo Correa
Hi Micah

Is  ifort in your PATH?
If not, the OpenMPI configure script will use any fortran compiler it finds 
first, which may be gfortran.
You need to run the Intel compiler startup script before you run the OpenMPI 
configure.
The easy thing to do is to source the Intel script inside your .profile/.bashrc 
or .[t]cshrc file.
I hope this helps,

Gus Correa

On Dec 14, 2011, at 9:49 AM, Micah Sklut wrote:

> Hi All, 
> 
> I have installed OpenMPI with gfortran, but am now attempting to install 
> OpenMPI with ifort. 
> 
> I have run the following configuration: 
> ./configure --prefix=/opt/openmpi/intel CC=gcc CXX=g++ F77=ifort FC=ifort
> 
> The install works successfully, but when I run /opt/openmpi/intel/bin/mpif90, 
> it runs as gfortran. 
> Oddly, when I am user: root, the same mpif90 runs as ifort. 
> 
> Can someone please alleviate my confusion as to why mpif90 is not running 
> as ifort? 
> 
> Thank you for your suggestions, 
> 
> -- 
> Micah
> 
> 
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users




Re: [OMPI users] How "CUDA Init prior to MPI_Init" co-exists with unique GPU for each MPI process?

2011-12-14 Thread Dmitry N. Mikushin
Dear Matthieu, Rolf,

Thank you!

But normally CUDA device selection is based on the MPI process index, so
the CUDA context must be created at a point where the MPI rank is not yet
available. What is the best practice for process<->GPU mapping in this
case? Or can I select any device prior to MPI_Init and later change to
another device?

- D.

2011/12/14 Rolf vandeVaart :
> To add to this, yes, we recommend that the CUDA context exists prior to a
> call to MPI_Init.  That is because a CUDA context needs to exist prior to
> MPI_Init, as the library attempts to register some internal buffers with the
> CUDA library, which requires that a CUDA context already exist.  Note that this is
> only relevant if you plan to send and receive CUDA device memory directly
> from MPI calls.   There is a little more about this in the FAQ here.
>
>
>
> http://www.open-mpi.org/faq/?category=running#mpi-cuda-support
>
>
>
>
>
> Rolf
>
>
>
> From: Matthieu Brucher [mailto:matthieu.bruc...@gmail.com]
> Sent: Wednesday, December 14, 2011 10:47 AM
> To: Open MPI Users
> Cc: Rolf vandeVaart
> Subject: Re: [OMPI users] How "CUDA Init prior to MPI_Init" co-exists with
> unique GPU for each MPI process?
>
>
>
> Hi,
>
>
>
> Processes are not spawned by MPI_Init. They are spawned earlier, by the
> launcher, between your mpirun call and the start of your program. By the time
> your program starts, all of the MPI processes already exist (you can check by
> adding a sleep or something like that), but they are not yet synchronized and
> do not know about each other. That is what MPI_Init is for.
>
>
>
> Matthieu Brucher
>
> 2011/12/14 Dmitry N. Mikushin 
>
> Dear colleagues,
>
> For the GPU Winter School, powered by the Moscow State University cluster
> "Lomonosov", OpenMPI 1.7 was built to test and popularize the CUDA
> capabilities of MPI. There is one strange warning I cannot understand:
> the OpenMPI runtime suggests initializing CUDA prior to MPI_Init. Sorry,
> but how can that be? I thought processes were spawned during MPI_Init,
> so such a context would be created only on the very first (root) process.
> Why do we need an existing CUDA context before MPI_Init? I think there
> was no such warning in previous versions.
>
> Thanks,
> - D.
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
>
>
>
>
> --
> Information System Engineer, Ph.D.
> Blog: http://matt.eifelle.com
> LinkedIn: http://www.linkedin.com/in/matthieubrucher
>
>
>
> ___
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users



Re: [OMPI users] How "CUDA Init prior to MPI_Init" co-exists with unique GPU for each MPI process?

2011-12-14 Thread Rolf vandeVaart
To add to this, yes, we recommend that the CUDA context exist prior to the
call to MPI_Init.  That is because the library attempts to register some
internal buffers with the CUDA library, which requires that a CUDA context
already exist.  Note that this is only relevant if you plan to send and
receive CUDA device memory directly from MPI calls.  There is a little more
about this in the FAQ here.

http://www.open-mpi.org/faq/?category=running#mpi-cuda-support
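
To make "CUDA device memory directly from MPI calls" concrete, a minimal C
sketch is below. It assumes a CUDA-aware Open MPI build and at least two
ranks, and it omits all error checking:

/* Pass device pointers straight to MPI_Send/MPI_Recv (CUDA-aware MPI). */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    const int n = 1024;
    int rank;
    double *d_buf;

    cudaSetDevice(0);      /* or pick by local rank, as discussed in this thread */
    cudaFree(0);           /* make sure the CUDA context exists before MPI_Init */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    cudaMalloc((void **)&d_buf, n * sizeof(double));

    if (rank == 0)
        MPI_Send(d_buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);   /* device memory */
    else if (rank == 1)
        MPI_Recv(d_buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}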


Rolf

From: Matthieu Brucher [mailto:matthieu.bruc...@gmail.com]
Sent: Wednesday, December 14, 2011 10:47 AM
To: Open MPI Users
Cc: Rolf vandeVaart
Subject: Re: [OMPI users] How "CUDA Init prior to MPI_Init" co-exists with 
unique GPU for each MPI process?

Hi,

Processes are not spawned by MPI_Init. They are spawned earlier, by the
launcher, between your mpirun call and the start of your program. By the time
your program starts, all of the MPI processes already exist (you can check by
adding a sleep or something like that), but they are not yet synchronized and
do not know about each other. That is what MPI_Init is for.
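
A minimal C sketch of that "add a sleep" check (illustrative only): run it with
mpirun -np 4 and all four processes print their PIDs and sleep before any of
them has called MPI_Init.

/* Every process launched by mpirun already exists and runs this code
 * before MPI_Init is reached. */
#include <stdio.h>
#include <unistd.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;

    printf("pid %d is alive before MPI_Init\n", (int)getpid());
    fflush(stdout);
    sleep(30);                /* time to run `ps` and count the processes */

    MPI_Init(&argc, &argv);   /* ranks and communicators exist only from here on */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("pid %d is rank %d\n", (int)getpid(), rank);
    MPI_Finalize();
    return 0;
}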

Matthieu Brucher
2011/12/14 Dmitry N. Mikushin
Dear colleagues,

For the GPU Winter School, powered by the Moscow State University cluster
"Lomonosov", OpenMPI 1.7 was built to test and popularize the CUDA
capabilities of MPI. There is one strange warning I cannot understand:
the OpenMPI runtime suggests initializing CUDA prior to MPI_Init. Sorry,
but how can that be? I thought processes were spawned during MPI_Init,
so such a context would be created only on the very first (root) process.
Why do we need an existing CUDA context before MPI_Init? I think there
was no such warning in previous versions.

Thanks,
- D.
___
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users



--
Information System Engineer, Ph.D.
Blog: http://matt.eifelle.com
LinkedIn: http://www.linkedin.com/in/matthieubrucher



Re: [OMPI users] openmpi - gfortran and ifort conflict

2011-12-14 Thread Tim Prince

On 12/14/2011 9:49 AM, Micah Sklut wrote:


I have installed OpenMPI built with gfortran, but am now attempting to
install OpenMPI built with ifort.

I have run the following configuration:
./configure --prefix=/opt/openmpi/intel CC=gcc CXX=g++ F77=ifort FC=ifort

The install works successfully, but when I run
/opt/openmpi/intel/bin/mpif90, it runs as gfortran.
Oddly, when I am the root user, the same mpif90 runs as ifort.

Can someone please alleviate my confusion as to why mpif90 is not
running as ifort?



You might check your configure logs to be certain that ifort was found
before gfortran at all stages (did you set your paths by sourcing the
ifortvars or compilervars scripts that come with ifort?).
'which mpif90' should tell you whether you are executing the one from
your installation.  You may have another mpif90 coming first on your
PATH.  You won't be able to override your PATH and LD_LIBRARY_PATH
correctly simply by specifying the absolute path to mpif90.



--
Tim Prince


[OMPI users] How "CUDA Init prior to MPI_Init" co-exists with unique GPU for each MPI process?

2011-12-14 Thread Dmitry N. Mikushin
Dear colleagues,

For the GPU Winter School, powered by the Moscow State University cluster
"Lomonosov", OpenMPI 1.7 was built to test and popularize the CUDA
capabilities of MPI. There is one strange warning I cannot understand:
the OpenMPI runtime suggests initializing CUDA prior to MPI_Init. Sorry,
but how can that be? I thought processes were spawned during MPI_Init,
so such a context would be created only on the very first (root) process.
Why do we need an existing CUDA context before MPI_Init? I think there
was no such warning in previous versions.

Thanks,
- D.


[OMPI users] Open MPI 1.5.4 on windows g95 / mpif90 support

2011-12-14 Thread Joao Amaral

Hi all,

I am trying to get a working mpif90 on my laptop PC (Windows 7, 64-bit),
so that I can develop and test Fortran 90 MPI code before running it
on a cluster.


I have tried the 1.5.4 installer on Windows, then Cygwin, then installed
Ubuntu, tried Cygwin again, and am now back to the Open MPI 1.5.4 Windows build.


Is it possible to use my existing g95 installation on Windows so that I
can compile Fortran 90 MPI code?


These are the top lines from the output of the "ompi_info" command.

                Package: Open MPI hpcfan@VISCLUSTER26 Distribution
               Open MPI: 1.5.4
  Open MPI SVN revision: r25060
  Open MPI release date: Aug 18, 2011
               Open RTE: 1.5.4
  Open RTE SVN revision: r25060
  Open RTE release date: Aug 18, 2011
                   OPAL: 1.5.4
      OPAL SVN revision: r25060
      OPAL release date: Aug 18, 2011
           Ident string: 1.5.4
                 Prefix: C:\Program Files (x86)\OpenMPI_v1.5.4-x64
Configured architecture: x86 Windows-6.1
         Configure host: VISCLUSTER26
          Configured by: hpcfan
          Configured on: 10:44 AM 08/19/2011
         Configure host: VISCLUSTER26
               Built by: hpcfan
               Built on: 10:44 AM 08/19/2011
             Built host: VISCLUSTER26
             C bindings: yes
           C++ bindings: yes
     Fortran77 bindings: yes (caps)
     Fortran90 bindings: no
Fortran90 bindings size: na
             C compiler: cl
    C compiler absolute: D:/MSDev10/VC/bin/amd64/cl.exe
 C compiler family name: MICROSOFT
     C compiler version: 1600
           C++ compiler: cl
  C++ compiler absolute: D:/MSDev10/VC/bin/amd64/cl.exe
     Fortran77 compiler: ifort
 Fortran77 compiler abs: C:/Program Files (x86)/Intel/ComposerXE-2011/bin/amd64/ifort.exe
     Fortran90 compiler: none
 Fortran90 compiler abs: none
            C profiling: yes
          C++ profiling: yes
    Fortran77 profiling: yes
    Fortran90 profiling: no
         C++ exceptions: no
         Thread support: no
          Sparse Groups: no
 Internal debug support: no
 MPI interface warnings: no
    MPI parameter check: never

(...)

Thanks for your help.