Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI
Hi,

I have fixed the timing issue between the server and client, and now I am able to build Open MPI successfully. Here is the output of ompi_info:

[root@micrompi-2 ompi]# ompi_info
Open MPI: 1.0a1r6760M
Open MPI SVN revision: r6760M
Open RTE: 1.0a1r6760M
Open RTE SVN revision: r6760M
OPAL: 1.0a1r6760M
OPAL SVN revision: r6760M
Prefix: /openmpi
Configured architecture: x86_64-redhat-linux-gnu
Configured by: root
Configured on: Mon Aug 8 23:58:08 IST 2005
Configure host: micrompi-2
Built by: root
Built on: Tue Aug 9 00:09:10 IST 2005
Built host: micrompi-2
C bindings: yes
C++ bindings: yes
Fortran77 bindings: yes (all)
Fortran90 bindings: no
C compiler: gcc
C compiler absolute: /usr/bin/gcc
C++ compiler: g++
C++ compiler absolute: /usr/bin/g++
Fortran77 compiler: g77
Fortran77 compiler abs: /usr/bin/g77
Fortran90 compiler: none
Fortran90 compiler abs: none
C profiling: yes
C++ profiling: yes
Fortran77 profiling: yes
Fortran90 profiling: no
C++ exceptions: no
Thread support: posix (mpi: no, progress: no)
Internal debug support: yes
MPI parameter check: runtime
Memory profiling support: yes
Memory debugging support: yes
libltdl support: 1
MCA allocator: basic (MCA v1.0, API v1.0, Component v1.0)
MCA allocator: bucket (MCA v1.0, API v1.0, Component v1.0)
MCA coll: basic (MCA v1.0, API v1.0, Component v1.0)
MCA coll: self (MCA v1.0, API v1.0, Component v1.0)
MCA io: romio (MCA v1.0, API v1.0, Component v1.0)
MCA mpool: mvapi (MCA v1.0, API v1.0, Component v1.0)
MCA mpool: sm (MCA v1.0, API v1.0, Component v1.0)
MCA pml: ob1 (MCA v1.0, API v1.0, Component v1.0)
MCA pml: teg (MCA v1.0, API v1.0, Component v1.0)
MCA pml: uniq (MCA v1.0, API v1.0, Component v1.0)
MCA ptl: self (MCA v1.0, API v1.0, Component v1.0)
MCA ptl: sm (MCA v1.0, API v1.0, Component v1.0)
MCA ptl: tcp (MCA v1.0, API v1.0, Component v1.0)
MCA btl: mvapi (MCA v1.0, API v1.0, Component v1.0)
MCA btl: self (MCA v1.0, API v1.0, Component v1.0)
MCA btl: sm (MCA v1.0, API v1.0, Component v1.0)
MCA btl: tcp (MCA v1.0, API v1.0, Component v1.0)
MCA topo: unity (MCA v1.0, API v1.0, Component v1.0)
MCA gpr: null (MCA v1.0, API v1.0, Component v1.0)
MCA gpr: proxy (MCA v1.0, API v1.0, Component v1.0)
MCA gpr: replica (MCA v1.0, API v1.0, Component v1.0)
MCA iof: proxy (MCA v1.0, API v1.0, Component v1.0)
MCA iof: svc (MCA v1.0, API v1.0, Component v1.0)
MCA ns: proxy (MCA v1.0, API v1.0, Component v1.0)
MCA ns: replica (MCA v1.0, API v1.0, Component v1.0)
MCA oob: tcp (MCA v1.0, API v1.0, Component v1.0)
MCA ras: host (MCA v1.0, API v1.0, Component v1.0)
MCA rds: hostfile (MCA v1.0, API v1.0, Component v1.0)
MCA rds: resfile (MCA v1.0, API v1.0, Component v1.0)
MCA rmaps: round_robin (MCA v1.0, API v1.0, Component v1.0)
MCA rmgr: proxy (MCA v1.0, API v1.0, Component v1.0)
MCA rmgr: urm (MCA v1.0, API v1.0, Component v1.0)
MCA rml: oob (MCA v1.0, API v1.0, Component v1.0)
MCA pls: fork (MCA v1.0, API v1.0, Component v1.0)
MCA pls: proxy (MCA v1.0, API v1.0, Component v1.0)
MCA pls: rsh (MCA v1.0, API v1.0, Component v1.0)
MCA sds: env (MCA v1.0, API v1.0, Component v1.0)
MCA sds: pipe (MCA v1.0, API v1.0, Component v1.0)
MCA sds: seed (MCA v1.0, API v1.0, Component v1.0)
MCA sds: singleton (MCA v1.0, API v1.0, Component v1.0)

This time, I can see that the btl mvapi component is built. But I am still seeing the same problem while running the Pallas benchmark, i.e., the data is still passing over TCP/GigE and NOT over Infiniband.

I have disabled building the OpenIB component by touching .ompi_ignore. This should not be a problem for MVAPI.
I have run autogen.sh, configure, and make all. The output of the autogen.sh, configure, and make all commands is gzip'ed in the ompi_out.tar.gz file attached to this mail. The gzip file also contains the output of the Pallas benchmark run. At the end of the Pallas output, you can find the error:

Request for 0 bytes (coll_basic_reduce.c, 193)
Request for 0 bytes (coll_basic_reduce.c, 193)
Request for 0 bytes (coll_basic_reduce.c, 193)
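A quick way to double-check which transports actually got built is to filter the ompi_info output shown above for the btl framework (the grep filter here is only an illustration; the line format is the one printed by ompi_info itself):

ompi_info | grep "MCA btl"

If no "MCA btl: mvapi" line appears in that output, the mvapi BTL component was not built.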
Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI
In THCA4, the mpga and mtl_common libraries were removed; I think their functionality was folded into the vapi and mosal libs. We don't have to treat it as an error if the mpga and mtl_common libs don't exist -- we can treat it as a warning, and whoever is trying to configure can then verify whether the mpga and mtl_common libs need to be included or not.

Thanks
-Sridhar

From: devel-boun...@open-mpi.org on behalf of Jeff Squyres
Sent: Mon 8/8/2005 7:53 PM
To: Open MPI Developers
Subject: Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI

The README there was just copied over from the beta branch, and there is no mVAPI or Open IB support on the beta branch (the docs usually lag the code base -- they'll get a thorough scrubbing before the formal release). As we talked about on the teleconference (I confess to not remembering if you were on it or not), Open MPI supports both mVAPI and Open IB. I'll go update some of the obvious errors (like this one).

As for lmpga and lmtl_common, ok, I'll modify the configure scripts as appropriate. I'm assuming that you're telling me that these two libraries are optional...? I.e., if we find them, we should use them? But if not, it's not an error...? Is that correct? (I don't know what the purpose of these libraries is.)

On Aug 8, 2005, at 4:58 AM, Sridhar Chirravuri wrote:

> Hi,
>
> I have got the latest code drop, and this time I didn't give the -r option to svn co. The last line that it showed me is given below.
>
> Checked out revision 6760.
>
> I am trying to install/configure/build Open MPI on an RHEL3 update 4 machine. For this release, we don't have the lmpga and lmtl_common libraries. We are not using separate VAPI libraries; we only use lvapi and lmosal. We do have the lmpga and lmtl_common libs, but only with the older release.
>
> In the README file of the latest checkout, I could see the following line:
>
> - Support for Quadrics and Infiniband (both mVAPI and OpenIB) is missing (see the current code base).
>
> What does it mean? Does Open MPI have support for Infiniband (mVAPI)? I am not getting why the btl mvapi component is not being built (see my previous mail with the ompi_info output). Could you please let me know whether Open MPI has got support for Infiniband (mVAPI)? If yes, what sort of configuration options do I need to give? Or do I have to modify anything in the respective directories? Please let me know.
>
> Thanks
> -Sridhar
>
> -Original Message-
> From: devel-boun...@open-mpi.org [mailto:devel-boun...@open-mpi.org] On Behalf Of Jeff Squyres
> Sent: Friday, August 05, 2005 6:15 PM
> To: Open MPI Developers
> Subject: Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI
>
> > On Aug 5, 2005, at 8:02 AM, Sridhar Chirravuri ((schirrav)) wrote:
> >
> > > Here is the output of ompi_info
> > >
> > > [root@micrompi-1 ompi]# ompi_info
> > > Open MPI: 1.0a1r6612M
> > > Open MPI SVN revision: r6612M
> > > [snipped]
> > > The Open MPI version that I am using is r6612 (as I could see from the output of the ompi_info command). There is NO btl framework in the output, whereas mpool was built.
> > >
> > > In the configure options, giving --with-btl-mvapi=/opt/topspin would be sufficient, as it has got include and lib64 directories; therefore it will pick up the necessary things. Also, I have set the following flags
> >
> > Good.
> > > export CFLAGS="-I/optl/topspin/include -I/opt/topspin/include/vapi" > > > export LDFLAGS="-lmosal -lvapi -L/opt/topspin/lib64" > > > export btl_mvapi_LDFLAGS=$LDFLAGS > > > export btl_mvapi_LIBS=$LDFLAGS > > You shouldn't need these -- our configure script should figure all that > > out with just the --with-btl-mvapi switch. Let us know if it doesn't > > (an explicit goal of our configure script is to handle all this kind of > > complexity and do all the Right Things with a single --with switch). > > > I will configure and build the latest code. To get the latest code, I > > > have run the following command. Please let me know if I am not > > > correct. > > > > > > svn co -r6613 http://svn.open-mpi.org/svn/ompi/trunk ompi > > No -- do not specify the -r switch. That asks for a specific > > repository r number, and 6613 only 1 commit beyond your last version. > > The current r number at the HEAD is 6746, for example -- 6613 was > > committed around 9am on July 27th. Specifically, the r number > > represents a unique state of the *entire* repository. So every commit > > increments the r number (more Subversion documentation is available > > here: http://svnbook.red-bean.com/). > > I believe that in 6612 and 6613, we still had many of the 3rd > > generation BTL stuff .ompi_ignore'd out, so they would not have built > > (many were removed at 6616, but even more were removed as late as > > 6658). > > Note that the "M" in your version number means that you have locally > > modified the tree -- so you started with r6612, but then made local > > modificatio
Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI
On Aug 9, 2005, at 7:24 AM, Sridhar Chirravuri wrote:

> I have fixed the timing issue between the server and client, and now I could build Open MPI successfully.

Good.

> Here is the output of ompi_info
>
> [root@micrompi-2 ompi]# ompi_info
> Open MPI: 1.0a1r6760M

Note that as of this morning (US Eastern time), the current head is r6774. Also be wary of any local mods you have put in the tree (as noted by the "M"). Check "svn status" to see which files you have modified, and "svn diff" to see the exact changes.

> This time, I could see that btl mvapi component is built.
>
> But I am still seeing the same problem while running Pallas Benchmark i.e., I still see that the data is passing over TCP/GigE and NOT over Infiniband.

Please note that the 2nd generation point-to-point implementation is still the default (where we have no IB support) -- all the IB support, both mVAPI and Open IB, is in the 3rd generation point-to-point implementation. You must explicitly request the 3rd generation point-to-point implementation at run time to get IB support. Check out slide 48, "Example: Forcing ob1/BTL", in the slides that we discussed on the teleconference (were you on the teleconference? I attached copies if you were not).

The short version is that you need to tell Open MPI to use the "ob1" pml component (3rd gen), not the default "teg" pml component (2nd gen). We'll eventually make the 3rd gen stuff be the default, and likely remove all the 2nd gen stuff (i.e., definitely before release) -- we just haven't done it yet because Tim and Galen are still polishing up the 3rd gen stuff.

> I have disabled building OpenIB and to do so I have touched .ompi_ignore. This should not be a problem for MVAPI.

If the Open IB headers / libraries are not located in compiler-known locations, then you shouldn't need to .ompi_ignore the tree (i.e., configure won't find the Open IB headers / libraries, and will therefore automatically skip those components). Again, it is our intention that users will neither know about nor have to touch files in the distribution -- they only need use appropriate options to "configure" and then "make". I'm not sure if we have explicit options to disable a component in configure -- Brian, can you comment here?

> I have run autogen.sh, configure and make all. The output of autogen.sh, configure and make all commands are gzip'ed in ompi_out.tar.gz file which is attached in this mail. This gzip file also contains the output of Pallas Benchmark results. At the end of Pallas Benchmark output, you can find the error
>
> Request for 0 bytes (coll_basic_reduce.c, 193)
> Request for 0 bytes (coll_basic_reduce.c, 193)
> Request for 0 bytes (coll_basic_reduce.c, 193)
> Request for 0 bytes (coll_basic_reduce.c, 193)
> Request for 0 bytes (coll_basic_reduce.c, 193)
> Request for 0 bytes (coll_basic_reduce_scatter.c, 79)
> Request for 0 bytes (coll_basic_reduce.c, 193)
> Request for 0 bytes (coll_basic_reduce_scatter.c, 79)
> Request for 0 bytes (coll_basic_reduce.c, 193)
>
> ..and Pallas just hung.
>
> I have no clue about the above errors which are coming from Open MPI source code.

The 2nd generation component has fallen into some disrepair -- I'd try re-running with ob1 and see what happens. I have not seen such errors when running PMB before, but I can try running it again to see if we've broken something recently.

> Is there any thing that I am missing while building btl mvapi? Also, is anyone built for mvapi and tested this OMPI stack. Please let me know.

Galen Shipman and Tim Woodall are doing all the IB work.
-- {+} Jeff Squyres {+} The Open MPI Project {+} http://www.open-mpi.org/
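For reference, the "force ob1" selection described above can also be made per-run on the mpirun command line via the standard --mca mechanism (the benchmark binary name is the one used elsewhere in this thread):

mpirun --mca pml ob1 -np 2 ./PMB-MPI1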
[O-MPI devel] Memory registration question.
Hello,

I am trying to understand how memory registration works in Open MPI and I have a question. Does the mca_mpool_base_(insert|insert) interface support overlapping registrations? If one module registers memory from 0 to 100 and another from 50 to 150, what will mca_mpool_base_find(80) return to the ob1 module?

--
Gleb.
Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI
On Aug 9, 2005, at 8:04 AM, Jeff Squyres wrote:

> On Aug 9, 2005, at 7:24 AM, Sridhar Chirravuri wrote:
>
> > I have disabled building OpenIB and to do so I have touched .ompi_ignore. This should not be a problem for MVAPI.
>
> If the Open IB headers / libraries are not located in compiler-known locations, then you shouldn't need to .ompi_ignore the tree (i.e., configure won't find the Open IB headers / libraries, and will therefore automatically skip those components). Again, it is our intention that users will neither know about nor have to touch files in the distribution -- they only need use appropriate options to "configure" and then "make". I'm not sure if we have explicit options to disable a component in configure -- Brian, can you comment here?

But of course - we can cook your breakfast as well ;). You can explicitly disable a component using --enable-mca-no-build=NAME. I know the syntax is a bit wonky, but Autoconf doesn't make this easy :(. Since you want to disable Open IB, you might do something like:

./configure [rest of options] --enable-mca-no-build=bml-openib

Does something bad happen if you try to build with the Open IB component enabled? If so, could you let us know what happens? Galen and I are trying to make sure we have all the BTL configure scripts tightened up before we release to the world. If it's just something about your test environment, no biggie - just want to make sure we don't have a real problem.

Since you are only interested in the MVAPI port, and therefore not at all interested in the TEG PML (the older generation of the communication interface), you can build with:

./configure [rest of options] --enable-mca-no-build=bml-openib,pml-teg,pml-uniq

which will cause the older generation PMLs to not be built. I think this is probably going to make life easiest on you.

> > Is there any thing that I am missing while building btl mvapi? Also, is anyone built for mvapi and tested this OMPI stack. Please let me know.
>
> Galen Shipman and Tim Woodall are doing all the IB work.

I think that most of your problems will go away once you start using the OB1 PML and therefore actually start using the MVAPI interface. I know that both Galen and I have been running a slightly modified copy of the Intel test suite against the MVAPI driver on a fairly regular basis. There are one or two minor datatype issues that still need to be worked out, but we should pass all the point-to-point tests.

Hope this helps,

Brian

--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
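Putting the pieces of this thread together, a full configure invocation for this setup might look like the following. This is only a sketch: the /opt/topspin path and the /openmpi prefix are the ones mentioned earlier in the thread, and the --enable-mca-no-build list is exactly the one suggested above.

./configure --prefix=/openmpi --with-btl-mvapi=/opt/topspin --enable-mca-no-build=bml-openib,pml-teg,pml-uniq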
Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI
Hi,

Does r6774 have a lot of changes related to the 3rd generation point-to-point? I am trying to run some benchmark tests (e.g., Pallas) with the Open MPI stack and just want to compare the performance figures with MVAPICH 095 and MVAPICH 092.

In order to use 3rd generation p2p communication, I have added the following line in /openmpi/etc/openmpi-mca-params.conf:

pml=ob1

I also exported (as a double check) OMPI_MCA_pml=ob1. Then, I tried running on the same machine. My machine has got 2 processors.

mpirun -np 2 ./PMB-MPI1

I still see the following lines:

Request for 0 bytes (coll_basic_reduce_scatter.c, 79)
Request for 0 bytes (coll_basic_reduce.c, 193)
Request for 0 bytes (coll_basic_reduce_scatter.c, 79)
Request for 0 bytes (coll_basic_reduce.c, 193)

In the output of ompi_info, I could see that the ob1 component is being built. Do I have to configure any other stuff? Please let me know.

Thanks
-Sridhar

-Original Message-
From: devel-boun...@open-mpi.org [mailto:devel-boun...@open-mpi.org] On Behalf Of Jeff Squyres
Sent: Tuesday, August 09, 2005 6:35 PM
To: Open MPI Developers
Subject: Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI

On Aug 9, 2005, at 7:24 AM, Sridhar Chirravuri wrote:

> I have fixed the timing issue between the server and client, and now I
> could build Open MPI successfully.

Good.

> Here is the output of ompi_info
>
> [root@micrompi-2 ompi]# ompi_info
>
> Open MPI: 1.0a1r6760M

Note that as of this morning (US Eastern time), the current head is r6774. Also be wary of any local mods you have put in the tree (as noted by the "M"). Check "svn status" to see which files you have modified, and "svn diff" to see the exact changes.

> This time, I could see that btl mvapi component is built.
>
> But I am still seeing the same problem while running Pallas Benchmark
> i.e., I still see that the data is passing over TCP/GigE and NOT over
> Infiniband.

Please note that the 2nd generation point-to-point implementation is still the default (where we have no IB support) -- all the IB support, both mVAPI and Open IB, is in the 3rd generation point-to-point implementation. You must explicitly request the 3rd generation point-to-point implementation at run time to get IB support. Check out slide 48, "Example: Forcing ob1/BTL" in the slides that we discussed on the teleconference (were you on the teleconference? I attached copies if you were not). The short version is that you need to tell Open MPI to use the "ob1" pml component (3rd gen), not the default "teg" pml component (2nd gen). We'll eventually make the 3rd gen stuff be the default, and likely remove all the 2nd gen stuff (i.e., definitely before release) -- we just haven't done it yet because Tim and Galen are still polishing up the 3rd gen stuff.

> I have disabled building OpenIB and to do so I have touched
> .ompi_ignore. This should not be a problem for MVAPI.

If the Open IB headers / libraries are not located in compiler-known locations, then you shouldn't need to .ompi_ignore the tree (i.e., configure won't find the Open IB headers / libraries, and will therefore automatically skip those components). Again, it is our intention that users will neither know about nor have to touch files in the distribution -- they only need use appropriate options to "configure" and then "make". I'm not sure if we have explicit options to disable a component in configure -- Brian, can you comment here?

> I have run autogen.sh, configure and make all.
The output of > autogen.sh, configure and make all commands are <> > gzip'ed in ompi_out.tar.gz file which is attached in this mail. This > gzip file also contains the output of Pallas Benchmark results. At the > end of Pallas Benchmark output, you can find the error > > Request for 0 bytes (coll_basic_reduce.c, 193) > > Request for 0 bytes (coll_basic_reduce.c, 193) > > Request for 0 bytes (coll_basic_reduce.c, 193) > > Request for 0 bytes (coll_basic_reduce.c, 193) > > Request for 0 bytes (coll_basic_reduce.c, 193) > > Request for 0 bytes (coll_basic_reduce_scatter.c, 79) > > Request for 0 bytes (coll_basic_reduce.c, 193) > > Request for 0 bytes (coll_basic_reduce_scatter.c, 79) > > Request for 0 bytes (coll_basic_reduce.c, 193) > > ..and Pallas just hung. > > I have no clue about the above errors which are coming from Open MPI > source code. The 2nd generation component has fallen into some disrepair -- I'd try re-running with ob1 and see what happens. I have not seen such errors when running PMB before, but I can try running it again to see if we've broken something recently. > Is there any thing that I am missing while building btl mvapi? Also, > is anyone built for mvapi and tested this OMPI stack. Please let me > know. Galen Shipman and Tim Woodall are doing all the IB work. -- {+} Jeff Squyres {+} The Open MPI Project {+} http://www.open-mpi.org/ _
Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI
Thanks Brian.

One thing: if we want to use 3rd generation p2p communication, we can either export OMPI_MCA_pml=ob1 or we can add "pml=ob1" in the file openmpi-mca-params.conf. I have done both and ran the Pallas benchmark. Has anyone got a chance to run Pallas with the recent code drop? I am using the r6760M code drop. Pallas can be run within the node, which has got 2 processors.

-Sridhar

-Original Message-
From: devel-boun...@open-mpi.org [mailto:devel-boun...@open-mpi.org] On Behalf Of Brian Barrett
Sent: Tuesday, August 09, 2005 7:12 PM
To: Open MPI Developers
Subject: Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI

On Aug 9, 2005, at 8:04 AM, Jeff Squyres wrote:

> On Aug 9, 2005, at 7:24 AM, Sridhar Chirravuri wrote:
>
>> I have disabled building OpenIB and to do so I have touched
>> .ompi_ignore. This should not be a problem for MVAPI.
>
> If the Open IB headers / libraries are not located in compiler-known
> locations, then you shouldn't need to .ompi_ignore the tree (i.e.,
> configure won't find the Open IB headers / libraries, and will
> therefore automatically skip those components).
>
> Again, it is our intention that users will neither know about nor have
> to touch files in the distribution -- they only need use appropriate
> options to "configure" and then "make".
>
> I'm not sure if we have explicit options to disable a component in
> configure -- Brian, can you comment here?

But of course - we can cook your breakfast as well ;). You can explicitly disable a component using --enable-mca-no-build=NAME. I know the syntax is a bit wonky, but Autoconf doesn't make this easy :(. Since you want to disable Open IB, you might do something like:

./configure [rest of options] --enable-mca-no-build=bml-openib

Does something bad happen if you try to build with the Open IB component enabled? If so, could you let us know what happens? Galen and I are trying to make sure we have all the BTL configure scripts tightened up before we release to the world. If it's just something about your test environment, no biggie - just want to make sure we don't have a real problem.

Since you only are interested in the MVAPI port and therefore not at all interested in the TEG PML (the older generation of communication interface), you can build with:

./configure [rest of options] --enable-mca-no-build=bml-openib,pml-teg,pml-uniq

which will cause the older generation PMLs to not be built. I think this is probably going to make life easiest on you.

>> Is there any thing that I am missing while building btl mvapi? Also,
>> is anyone built for mvapi and tested this OMPI stack. Please let me
>> know.
>
> Galen Shipman and Tim Woodall are doing all the IB work.

I think that most of your problems will go away once you start using the OB1 PML and therefore actually start using the MVAPI interface. I know that both Galen and I have been running a slightly modified copy of the Intel test suite against the MVAPI driver on a fairly regular basis. There are one or two minor datatype issues that still need to be worked out, but we should pass all the point to point tests.

Hope this helps,

Brian

--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI
Continuation to previous mail. I ran pallas by forcing 3rd gen but Pallas just hung and could see the messages Request for 0 bytes (coll_basic_reduce_scatter.c, 79) Request for 0 bytes (coll_basic_reduce.c, 193) Request for 0 bytes (coll_basic_reduce_scatter.c, 79) Request for 0 bytes (coll_basic_reduce.c, 193) Thanks -Original Message- From: devel-boun...@open-mpi.org [mailto:devel-boun...@open-mpi.org] On Behalf Of Sridhar Chirravuri Sent: Tuesday, August 09, 2005 7:28 PM To: Open MPI Developers Subject: Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI Thanks Brian. One thing is - If we want to use 3rd generation p2p communication, we can either export OMPI_MCA_pml=ob1 or we can add "pml=ob1" in the file openmpi-mca-params.conf. I have done both and ran Pallas bench mark. Is anyone got a chance to run Pallas with the recent code drop? I am using r6760M code drop. Pallas can be run with in the node which has got 2 processors. -Sridhar -Original Message- From: devel-boun...@open-mpi.org [mailto:devel-boun...@open-mpi.org] On Behalf Of Brian Barrett Sent: Tuesday, August 09, 2005 7:12 PM To: Open MPI Developers Subject: Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI On Aug 9, 2005, at 8:04 AM, Jeff Squyres wrote: > On Aug 9, 2005, at 7:24 AM, Sridhar Chirravuri wrote: > >> I have disabled building OpenIB and to do so I have touched >> .ompi_ignore. This should not be a problem for MVAPI. > > If the Open IB headers / libraries are not located in compiler-known > locations, then you shouldn't need to .ompi_ignore the tree (i.e., > configure won't find the Open IB headers / libraries, and will > therefore automatically skip those components). > > Again, it is our intention that users will neither know about nor have > to touch files in the distribution -- they only need use appropriate > options to "configure" and then "make". > > I'm not sure if we have explicit options to disable a component in > configure -- Brian, can you comment here? But of course - we can cook your breakfast as well ;). You can explicitly disable a component using the --enable-mca-no- build=NAME. I know the syntax is a bit wonky, but Autoconf doesn't make this easy :(. Since you want to disable Open IB, you might do something like: ./configure [rest of options] --enable-mca-no-build=bml-openib Does something bad happen if you try to build with the Open IB component enabled? If so, could you let us know what happens? Galen and I are trying to make sure we have all the BTL configure scripts tightened up before we release to the world. If it's just something about your test environment, no biggie - just want to make sure we don't have a real problem. Since you only are interested in the MVAPI port and therefore not at all interested in the TEG PML (the older generation of communication interface), you can build with: ./configure [rest of options] --enable-mca-no-build=bml-openib,pml- teg,pml-uniq which will cause the older generation PMLs to not be built. I think this is probably going to make life easiest on you. > >> Is there any thing that I am missing while building btl mvapi? Also, >> is anyone built for mvapi and tested this OMPI stack. Please let me >> know. > > Galen Shipman and Tim Woodall are doing all the IB work. I think that most of your problems will go away once you start using the OB1 PML and therefore actually start using the MVAPI interface. 
I know that both Galen and I have been running a slightly modified copy of the Intel test suite against the MVAPI driver on a fairly regular basis. There are one or two minor datatype issues that still need to be worked out, but we should pass all the point to point tests. Hope this helps, Brian -- Brian Barrett Open MPI developer http://www.open-mpi.org/ ___ devel mailing list de...@open-mpi.org http://www.open-mpi.org/mailman/listinfo.cgi/devel ___ devel mailing list de...@open-mpi.org http://www.open-mpi.org/mailman/listinfo.cgi/devel
Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI
On Aug 9, 2005, at 8:48 AM, Sridhar Chirravuri wrote:

> Does r6774 has lot of changes that are related to 3rd generation point-to-point? I am trying to run some benchmark tests (ex: pallas) with Open MPI stack and just want to compare the performance figures with MVAPICH 095 and MVAPICH 092.
>
> In order to use 3rd generation p2p communication, I have added the following line in the /openmpi/etc/openmpi-mca-params.conf
>
> pml=ob1
>
> I also exported (as double check) OMPI_MCA_pml=ob1. Then, I have tried running on the same machine. My machine has got 2 processors.
>
> Mpirun -np 2 ./PMB-MPI1
>
> I still see the following lines
>
> Request for 0 bytes (coll_basic_reduce_scatter.c, 79)
> Request for 0 bytes (coll_basic_reduce.c, 193)
> Request for 0 bytes (coll_basic_reduce_scatter.c, 79)
> Request for 0 bytes (coll_basic_reduce.c, 193)

These errors are coming from the collective routines, not the PML/BTL layers. It looks like the reduction codes are trying to call malloc(0), which doesn't work so well. We'll take a look as soon as we can. In the mean time, can you just not run the tests that call the reduction collectives?

Brian

--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
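To make the failure mode concrete: a zero-count reduction can end up asking the allocator for zero bytes, which is what triggers the "Request for 0 bytes" warning quoted above. A defensive pattern looks like the sketch below; this is purely illustrative and is not the actual coll_basic_reduce.c code, and the function name is invented.

#include <stdlib.h>

/* Illustrative only: guard a buffer allocation against a zero-byte request,
 * e.g. when the reduction count (and therefore the buffer size) is zero. */
static void *alloc_reduce_buffer(size_t nbytes)
{
    if (nbytes == 0) {
        return NULL;   /* nothing to allocate; caller treats NULL as "no buffer needed" */
    }
    return malloc(nbytes);
}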
Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI
On Aug 9, 2005, at 9:48 AM, Sridhar Chirravuri wrote:

> Does r6774 has lot of changes that are related to 3rd generation point-to-point? I am trying to run some benchmark tests (ex: pallas) with Open MPI stack and just want to compare the performance figures with MVAPICH 095 and MVAPICH 092.

Looking at the log, it looks like Tim fixed a few things to do with probe/iprobe, and Galen added the stuff to configure to find Topspin's mvapi headers / libraries. That's probably most of what would interest you (you can see the full log via the "svn log" command).

> In order to use 3rd generation p2p communication, I have added the following line in the /openmpi/etc/openmpi-mca-params.conf
>
> pml=ob1
>
> I also exported (as double check) OMPI_MCA_pml=ob1.

Note that you really only have to do one of those -- the file method is a good/easy way to set it and forget about it. The environment variable way is also fine (and 100% equivalent to the file method), but needs to be set in every shell where you invoke mpirun.

--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/
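For clarity, the two equivalent settings being discussed, plus the per-invocation command-line form (the usual third option in Open MPI's MCA parameter system), are:

# in <prefix>/etc/openmpi-mca-params.conf (here, /openmpi/etc/openmpi-mca-params.conf)
pml=ob1

# or, in every shell that will invoke mpirun
export OMPI_MCA_pml=ob1

# or, per invocation
mpirun --mca pml ob1 -np 2 ./PMB-MPI1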
Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI
I have run sendrecv function in Pallas but it failed to run it. Here is the output [root@micrompi-2 SRC_PMB]# mpirun -np 2 PMB-MPI1 sendrecv Could not join a running, existing universe Establishing a new one named: default-universe-5097 [0,1,1][btl_mvapi.c:130:mca_btl_mvapi_del_procs] Stub [0,1,1][btl_mvapi.c:130:mca_btl_mvapi_del_procs] Stub [0,1,0][btl_mvapi.c:130:mca_btl_mvapi_del_procs] Stub [0,1,0][btl_mvapi.c:130:mca_btl_mvapi_del_procs] Stub [0,1,0][btl_mvapi_endpoint.c:542:mca_btl_mvapi_endpoint_send] Connection to endpoint closed ... connecting ... [0,1,0][btl_mvapi_endpoint.c:318:mca_btl_mvapi_endpoint_start_connect] Initialized High Priority QP num = 263177, Low Priority QP num = 263178, LID = 785 [0,1,0][btl_mvapi_endpoint.c:190:mca_btl_mvapi_endpoint_send_connect_req ] Sending High Priority QP num = 263177, Low Priority QP num = 263178, LID = 785[0,1,0][btl_mvapi_endpoint.c:542:mca_btl_mvapi_endpoint_send] Connection to endpoint closed ... connecting ... [0,1,0][btl_mvapi_endpoint.c:318:mca_btl_mvapi_endpoint_start_connect] Initialized High Priority QP num = 263179, Low Priority QP num = 263180, LID = 786 [0,1,0][btl_mvapi_endpoint.c:190:mca_btl_mvapi_endpoint_send_connect_req ] Sending High Priority QP num = 263179, Low Priority QP num = 263180, LID = 786#--- #PALLAS MPI Benchmark Suite V2.2, MPI-1 part #--- # Date : Tue Aug 9 07:11:25 2005 # Machine: x86_64# System : Linux # Release: 2.6.9-5.ELsmp # Version: #1 SMP Wed Jan 5 19:29:47 EST 2005 # # Minimum message length in bytes: 0 # Maximum message length in bytes: 4194304 # # MPI_Datatype : MPI_BYTE # MPI_Datatype for reductions: MPI_FLOAT # MPI_Op : MPI_SUM # # # List of Benchmarks to run: # Sendrecv [0,1,1][btl_mvapi_endpoint.c:368:mca_btl_mvapi_endpoint_reply_start_conn ect] Initialized High Priority QP num = 263177, Low Priority QP num = 263178, LID = 777 [0,1,1][btl_mvapi_endpoint.c:266:mca_btl_mvapi_endpoint_set_remote_info] Received High Priority QP num = 263177, Low Priority QP num 263178, LID = 785 [0,1,1][btl_mvapi_endpoint.c:756:mca_btl_mvapi_endpoint_qp_init_query] Modified to init..Qp 7080096[0,1,1][btl_mvapi_endpoint.c:791:mca_btl_mvapi_endpoint_qp_init_q uery] Modified to RTR..Qp 7080096[0,1,1][btl_mvapi_endpoint.c:814:mca_btl_mvapi_endpoint_qp_init_q uery] Modified to RTS..Qp 7080096 [0,1,1][btl_mvapi_endpoint.c:756:mca_btl_mvapi_endpoint_qp_init_query] Modified to init..Qp 7240736 [0,1,1][btl_mvapi_endpoint.c:791:mca_btl_mvapi_endpoint_qp_init_query] Modified to RTR..Qp 7240736[0,1,1][btl_mvapi_endpoint.c:814:mca_btl_mvapi_endpoint_qp_init_q uery] Modified to RTS..Qp 7240736 [0,1,1][btl_mvapi_endpoint.c:190:mca_btl_mvapi_endpoint_send_connect_req ] Sending High Priority QP num = 263177, Low Priority QP num = 263178, LID = 777 [0,1,0][btl_mvapi_endpoint.c:266:mca_btl_mvapi_endpoint_set_remote_info] Received High Priority QP num = 263177, Low Priority QP num 263178, LID = 777 [0,1,0][btl_mvapi_endpoint.c:756:mca_btl_mvapi_endpoint_qp_init_query] Modified to init..Qp 7081440 [0,1,0][btl_mvapi_endpoint.c:791:mca_btl_mvapi_endpoint_qp_init_query] Modified to RTR..Qp 7081440 [0,1,0][btl_mvapi_endpoint.c:814:mca_btl_mvapi_endpoint_qp_init_query] Modified to RTS..Qp 7081440 [0,1,0][btl_mvapi_endpoint.c:756:mca_btl_mvapi_endpoint_qp_init_query] Modified to init..Qp 7241888 [0,1,0][btl_mvapi_endpoint.c:791:mca_btl_mvapi_endpoint_qp_init_query] Modified to RTR..Qp 7241888[0,1,0][btl_mvapi_endpoint.c:814:mca_btl_mvapi_endpoint_qp_init_q uery] Modified to RTS..Qp 7241888 
[0,1,1][btl_mvapi_component.c:523:mca_btl_mvapi_component_progress] Got a recv completion Thanks -Sridhar -Original Message- From: devel-boun...@open-mpi.org [mailto:devel-boun...@open-mpi.org] On Behalf Of Brian Barrett Sent: Tuesday, August 09, 2005 7:35 PM To: Open MPI Developers Subject: Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI On Aug 9, 2005, at 8:48 AM, Sridhar Chirravuri wrote: > Does r6774 has lot of changes that are related to 3rd generation > point-to-point? I am trying to run some benchmark tests (ex: > pallas) with Open MPI stack and just want to compare the > performance figures with MVAPICH 095 and MVAPICH 092. > > In order to use 3rd generation p2p communication, I have added the > following line in the /openmpi/etc/openmpi-mca-params.conf > > pml=ob1 > > I also exported (as double check) OMPI_MCA_pml=ob1. > > Then, I have tried running on the same machine. My machine has got > 2 processors. > > Mpirun -np 2 ./PMB-MPI1 > > I still see the following lines > > Request for 0 bytes (coll_basic_reduce_scatter.c, 79) > Request for 0 bytes (coll_basic_reduce.c, 193) > Request for 0 bytes (coll_basic_reduce_scatter.c, 79) > Request for 0 bytes (coll_basic_reduce.c, 193) These errors are coming from the collective routines, n
Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI
The same kind of output while running Pallas "pingpong" test. -Sridhar -Original Message- From: devel-boun...@open-mpi.org [mailto:devel-boun...@open-mpi.org] On Behalf Of Sridhar Chirravuri Sent: Tuesday, August 09, 2005 7:44 PM To: Open MPI Developers Subject: Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI I have run sendrecv function in Pallas but it failed to run it. Here is the output [root@micrompi-2 SRC_PMB]# mpirun -np 2 PMB-MPI1 sendrecv Could not join a running, existing universe Establishing a new one named: default-universe-5097 [0,1,1][btl_mvapi.c:130:mca_btl_mvapi_del_procs] Stub [0,1,1][btl_mvapi.c:130:mca_btl_mvapi_del_procs] Stub [0,1,0][btl_mvapi.c:130:mca_btl_mvapi_del_procs] Stub [0,1,0][btl_mvapi.c:130:mca_btl_mvapi_del_procs] Stub [0,1,0][btl_mvapi_endpoint.c:542:mca_btl_mvapi_endpoint_send] Connection to endpoint closed ... connecting ... [0,1,0][btl_mvapi_endpoint.c:318:mca_btl_mvapi_endpoint_start_connect] Initialized High Priority QP num = 263177, Low Priority QP num = 263178, LID = 785 [0,1,0][btl_mvapi_endpoint.c:190:mca_btl_mvapi_endpoint_send_connect_req ] Sending High Priority QP num = 263177, Low Priority QP num = 263178, LID = 785[0,1,0][btl_mvapi_endpoint.c:542:mca_btl_mvapi_endpoint_send] Connection to endpoint closed ... connecting ... [0,1,0][btl_mvapi_endpoint.c:318:mca_btl_mvapi_endpoint_start_connect] Initialized High Priority QP num = 263179, Low Priority QP num = 263180, LID = 786 [0,1,0][btl_mvapi_endpoint.c:190:mca_btl_mvapi_endpoint_send_connect_req ] Sending High Priority QP num = 263179, Low Priority QP num = 263180, LID = 786#--- #PALLAS MPI Benchmark Suite V2.2, MPI-1 part #--- # Date : Tue Aug 9 07:11:25 2005 # Machine: x86_64# System : Linux # Release: 2.6.9-5.ELsmp # Version: #1 SMP Wed Jan 5 19:29:47 EST 2005 # # Minimum message length in bytes: 0 # Maximum message length in bytes: 4194304 # # MPI_Datatype : MPI_BYTE # MPI_Datatype for reductions: MPI_FLOAT # MPI_Op : MPI_SUM # # # List of Benchmarks to run: # Sendrecv [0,1,1][btl_mvapi_endpoint.c:368:mca_btl_mvapi_endpoint_reply_start_conn ect] Initialized High Priority QP num = 263177, Low Priority QP num = 263178, LID = 777 [0,1,1][btl_mvapi_endpoint.c:266:mca_btl_mvapi_endpoint_set_remote_info] Received High Priority QP num = 263177, Low Priority QP num 263178, LID = 785 [0,1,1][btl_mvapi_endpoint.c:756:mca_btl_mvapi_endpoint_qp_init_query] Modified to init..Qp 7080096[0,1,1][btl_mvapi_endpoint.c:791:mca_btl_mvapi_endpoint_qp_init_q uery] Modified to RTR..Qp 7080096[0,1,1][btl_mvapi_endpoint.c:814:mca_btl_mvapi_endpoint_qp_init_q uery] Modified to RTS..Qp 7080096 [0,1,1][btl_mvapi_endpoint.c:756:mca_btl_mvapi_endpoint_qp_init_query] Modified to init..Qp 7240736 [0,1,1][btl_mvapi_endpoint.c:791:mca_btl_mvapi_endpoint_qp_init_query] Modified to RTR..Qp 7240736[0,1,1][btl_mvapi_endpoint.c:814:mca_btl_mvapi_endpoint_qp_init_q uery] Modified to RTS..Qp 7240736 [0,1,1][btl_mvapi_endpoint.c:190:mca_btl_mvapi_endpoint_send_connect_req ] Sending High Priority QP num = 263177, Low Priority QP num = 263178, LID = 777 [0,1,0][btl_mvapi_endpoint.c:266:mca_btl_mvapi_endpoint_set_remote_info] Received High Priority QP num = 263177, Low Priority QP num 263178, LID = 777 [0,1,0][btl_mvapi_endpoint.c:756:mca_btl_mvapi_endpoint_qp_init_query] Modified to init..Qp 7081440 [0,1,0][btl_mvapi_endpoint.c:791:mca_btl_mvapi_endpoint_qp_init_query] Modified to RTR..Qp 7081440 [0,1,0][btl_mvapi_endpoint.c:814:mca_btl_mvapi_endpoint_qp_init_query] Modified to RTS..Qp 7081440 
[0,1,0][btl_mvapi_endpoint.c:756:mca_btl_mvapi_endpoint_qp_init_query] Modified to init..Qp 7241888 [0,1,0][btl_mvapi_endpoint.c:791:mca_btl_mvapi_endpoint_qp_init_query] Modified to RTR..Qp 7241888[0,1,0][btl_mvapi_endpoint.c:814:mca_btl_mvapi_endpoint_qp_init_q uery] Modified to RTS..Qp 7241888 [0,1,1][btl_mvapi_component.c:523:mca_btl_mvapi_component_progress] Got a recv completion Thanks -Sridhar -Original Message- From: devel-boun...@open-mpi.org [mailto:devel-boun...@open-mpi.org] On Behalf Of Brian Barrett Sent: Tuesday, August 09, 2005 7:35 PM To: Open MPI Developers Subject: Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI On Aug 9, 2005, at 8:48 AM, Sridhar Chirravuri wrote: > Does r6774 has lot of changes that are related to 3rd generation > point-to-point? I am trying to run some benchmark tests (ex: > pallas) with Open MPI stack and just want to compare the > performance figures with MVAPICH 095 and MVAPICH 092. > > In order to use 3rd generation p2p communication, I have added the > following line in the /openmpi/etc/openmpi-mca-params.conf > > pml=ob1 > > I also exported (as double check) OMPI_MCA_pml=ob1. > > Then, I have tried running on the same machine. My machine has got > 2 processor
Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI
Hello Sridhar, We'll see if we can reproduce this today. Thanks, Tim Sridhar Chirravuri wrote: The same kind of output while running Pallas "pingpong" test. -Sridhar -Original Message- From: devel-boun...@open-mpi.org [mailto:devel-boun...@open-mpi.org] On Behalf Of Sridhar Chirravuri Sent: Tuesday, August 09, 2005 7:44 PM To: Open MPI Developers Subject: Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI I have run sendrecv function in Pallas but it failed to run it. Here is the output [root@micrompi-2 SRC_PMB]# mpirun -np 2 PMB-MPI1 sendrecv Could not join a running, existing universe Establishing a new one named: default-universe-5097 [0,1,1][btl_mvapi.c:130:mca_btl_mvapi_del_procs] Stub [0,1,1][btl_mvapi.c:130:mca_btl_mvapi_del_procs] Stub [0,1,0][btl_mvapi.c:130:mca_btl_mvapi_del_procs] Stub [0,1,0][btl_mvapi.c:130:mca_btl_mvapi_del_procs] Stub [0,1,0][btl_mvapi_endpoint.c:542:mca_btl_mvapi_endpoint_send] Connection to endpoint closed ... connecting ... [0,1,0][btl_mvapi_endpoint.c:318:mca_btl_mvapi_endpoint_start_connect] Initialized High Priority QP num = 263177, Low Priority QP num = 263178, LID = 785 [0,1,0][btl_mvapi_endpoint.c:190:mca_btl_mvapi_endpoint_send_connect_req ] Sending High Priority QP num = 263177, Low Priority QP num = 263178, LID = 785[0,1,0][btl_mvapi_endpoint.c:542:mca_btl_mvapi_endpoint_send] Connection to endpoint closed ... connecting ... [0,1,0][btl_mvapi_endpoint.c:318:mca_btl_mvapi_endpoint_start_connect] Initialized High Priority QP num = 263179, Low Priority QP num = 263180, LID = 786 [0,1,0][btl_mvapi_endpoint.c:190:mca_btl_mvapi_endpoint_send_connect_req ] Sending High Priority QP num = 263179, Low Priority QP num = 263180, LID = 786#--- #PALLAS MPI Benchmark Suite V2.2, MPI-1 part #--- # Date : Tue Aug 9 07:11:25 2005 # Machine: x86_64# System : Linux # Release: 2.6.9-5.ELsmp # Version: #1 SMP Wed Jan 5 19:29:47 EST 2005 # # Minimum message length in bytes: 0 # Maximum message length in bytes: 4194304 # # MPI_Datatype : MPI_BYTE # MPI_Datatype for reductions: MPI_FLOAT # MPI_Op : MPI_SUM # # # List of Benchmarks to run: # Sendrecv [0,1,1][btl_mvapi_endpoint.c:368:mca_btl_mvapi_endpoint_reply_start_conn ect] Initialized High Priority QP num = 263177, Low Priority QP num = 263178, LID = 777 [0,1,1][btl_mvapi_endpoint.c:266:mca_btl_mvapi_endpoint_set_remote_info] Received High Priority QP num = 263177, Low Priority QP num 263178, LID = 785 [0,1,1][btl_mvapi_endpoint.c:756:mca_btl_mvapi_endpoint_qp_init_query] Modified to init..Qp 7080096[0,1,1][btl_mvapi_endpoint.c:791:mca_btl_mvapi_endpoint_qp_init_q uery] Modified to RTR..Qp 7080096[0,1,1][btl_mvapi_endpoint.c:814:mca_btl_mvapi_endpoint_qp_init_q uery] Modified to RTS..Qp 7080096 [0,1,1][btl_mvapi_endpoint.c:756:mca_btl_mvapi_endpoint_qp_init_query] Modified to init..Qp 7240736 [0,1,1][btl_mvapi_endpoint.c:791:mca_btl_mvapi_endpoint_qp_init_query] Modified to RTR..Qp 7240736[0,1,1][btl_mvapi_endpoint.c:814:mca_btl_mvapi_endpoint_qp_init_q uery] Modified to RTS..Qp 7240736 [0,1,1][btl_mvapi_endpoint.c:190:mca_btl_mvapi_endpoint_send_connect_req ] Sending High Priority QP num = 263177, Low Priority QP num = 263178, LID = 777 [0,1,0][btl_mvapi_endpoint.c:266:mca_btl_mvapi_endpoint_set_remote_info] Received High Priority QP num = 263177, Low Priority QP num 263178, LID = 777 [0,1,0][btl_mvapi_endpoint.c:756:mca_btl_mvapi_endpoint_qp_init_query] Modified to init..Qp 7081440 [0,1,0][btl_mvapi_endpoint.c:791:mca_btl_mvapi_endpoint_qp_init_query] Modified to RTR..Qp 7081440 
[0,1,0][btl_mvapi_endpoint.c:814:mca_btl_mvapi_endpoint_qp_init_query] Modified to RTS..Qp 7081440 [0,1,0][btl_mvapi_endpoint.c:756:mca_btl_mvapi_endpoint_qp_init_query] Modified to init..Qp 7241888 [0,1,0][btl_mvapi_endpoint.c:791:mca_btl_mvapi_endpoint_qp_init_query] Modified to RTR..Qp 7241888[0,1,0][btl_mvapi_endpoint.c:814:mca_btl_mvapi_endpoint_qp_init_q uery] Modified to RTS..Qp 7241888 [0,1,1][btl_mvapi_component.c:523:mca_btl_mvapi_component_progress] Got a recv completion Thanks -Sridhar -Original Message- From: devel-boun...@open-mpi.org [mailto:devel-boun...@open-mpi.org] On Behalf Of Brian Barrett Sent: Tuesday, August 09, 2005 7:35 PM To: Open MPI Developers Subject: Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI On Aug 9, 2005, at 8:48 AM, Sridhar Chirravuri wrote: Does r6774 has lot of changes that are related to 3rd generation point-to-point? I am trying to run some benchmark tests (ex: pallas) with Open MPI stack and just want to compare the performance figures with MVAPICH 095 and MVAPICH 092. In order to use 3rd generation p2p communication, I have added the following line in the /openmpi/etc/openmpi-mca-params.conf pml=ob1 I also exported (as double check) OMPI_MCA_pml=ob1.
Re: [O-MPI devel] Memory registration question.
Gleb,

The memory pool does not support overlapping registrations. The registrations are stored in a balanced tree, so it will return whichever of the two it encounters first.

Tim Prins

> Hello,
>
> I am trying to understand how memory registration works in openMPI and I have a question. Does mca_mpool_base_(insert|insert) interface supports overlapping registrations? If one module register memory from 0 to 100 and another from 50 to 150 what mca_mpool_base_find(80) will return to ob1 module?
>
> --
> Gleb.
Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI
Okay Tim. Thanks for the mail. Tomorrow, I will try with configuration options that Brian had mentioned and will let you guys know the status. Thanks all for your support. -Original Message- From: devel-boun...@open-mpi.org [mailto:devel-boun...@open-mpi.org] On Behalf Of Tim S. Woodall Sent: Tuesday, August 09, 2005 7:50 PM To: Open MPI Developers Subject: Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI Hello Sridhar, We'll see if we can reproduce this today. Thanks, Tim Sridhar Chirravuri wrote: > The same kind of output while running Pallas "pingpong" test. > > -Sridhar > > -Original Message- > From: devel-boun...@open-mpi.org [mailto:devel-boun...@open-mpi.org] On > Behalf Of Sridhar Chirravuri > Sent: Tuesday, August 09, 2005 7:44 PM > To: Open MPI Developers > Subject: Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI > > > I have run sendrecv function in Pallas but it failed to run it. Here is > the output > > [root@micrompi-2 SRC_PMB]# mpirun -np 2 PMB-MPI1 sendrecv > Could not join a running, existing universe > Establishing a new one named: default-universe-5097 > [0,1,1][btl_mvapi.c:130:mca_btl_mvapi_del_procs] Stub > [0,1,1][btl_mvapi.c:130:mca_btl_mvapi_del_procs] Stub > > > [0,1,0][btl_mvapi.c:130:mca_btl_mvapi_del_procs] Stub > > [0,1,0][btl_mvapi.c:130:mca_btl_mvapi_del_procs] Stub > > [0,1,0][btl_mvapi_endpoint.c:542:mca_btl_mvapi_endpoint_send] Connection > to endpoint closed ... connecting ... > [0,1,0][btl_mvapi_endpoint.c:318:mca_btl_mvapi_endpoint_start_connect] > Initialized High Priority QP num = 263177, Low Priority QP num = 263178, > LID = 785 > > [0,1,0][btl_mvapi_endpoint.c:190:mca_btl_mvapi_endpoint_send_connect_req > ] Sending High Priority QP num = 263177, Low Priority QP num = 263178, > LID = 785[0,1,0][btl_mvapi_endpoint.c:542:mca_btl_mvapi_endpoint_send] > Connection to endpoint closed ... connecting ... 
> [0,1,0][btl_mvapi_endpoint.c:318:mca_btl_mvapi_endpoint_start_connect] > Initialized High Priority QP num = 263179, Low Priority QP num = 263180, > LID = 786 > > [0,1,0][btl_mvapi_endpoint.c:190:mca_btl_mvapi_endpoint_send_connect_req > ] Sending High Priority QP num = 263179, Low Priority QP num = 263180, > LID = 786#--- > #PALLAS MPI Benchmark Suite V2.2, MPI-1 part > #--- > # Date : Tue Aug 9 07:11:25 2005 > # Machine: x86_64# System : Linux > # Release: 2.6.9-5.ELsmp > # Version: #1 SMP Wed Jan 5 19:29:47 EST 2005 > > # > # Minimum message length in bytes: 0 > # Maximum message length in bytes: 4194304 > # > # MPI_Datatype : MPI_BYTE > # MPI_Datatype for reductions: MPI_FLOAT > # MPI_Op : MPI_SUM > # > # > > # List of Benchmarks to run: > > # Sendrecv > [0,1,1][btl_mvapi_endpoint.c:368:mca_btl_mvapi_endpoint_reply_start_conn > ect] Initialized High Priority QP num = 263177, Low Priority QP num = > 263178, LID = 777 > > [0,1,1][btl_mvapi_endpoint.c:266:mca_btl_mvapi_endpoint_set_remote_info] > Received High Priority QP num = 263177, Low Priority QP num 263178, LID > = 785 > > [0,1,1][btl_mvapi_endpoint.c:756:mca_btl_mvapi_endpoint_qp_init_query] > Modified to init..Qp > 7080096[0,1,1][btl_mvapi_endpoint.c:791:mca_btl_mvapi_endpoint_qp_init_q > uery] Modified to RTR..Qp > 7080096[0,1,1][btl_mvapi_endpoint.c:814:mca_btl_mvapi_endpoint_qp_init_q > uery] Modified to RTS..Qp 7080096 > > [0,1,1][btl_mvapi_endpoint.c:756:mca_btl_mvapi_endpoint_qp_init_query] > Modified to init..Qp 7240736 > [0,1,1][btl_mvapi_endpoint.c:791:mca_btl_mvapi_endpoint_qp_init_query] > Modified to RTR..Qp > 7240736[0,1,1][btl_mvapi_endpoint.c:814:mca_btl_mvapi_endpoint_qp_init_q > uery] Modified to RTS..Qp 7240736 > [0,1,1][btl_mvapi_endpoint.c:190:mca_btl_mvapi_endpoint_send_connect_req > ] Sending High Priority QP num = 263177, Low Priority QP num = 263178, > LID = 777 > [0,1,0][btl_mvapi_endpoint.c:266:mca_btl_mvapi_endpoint_set_remote_info] > Received High Priority QP num = 263177, Low Priority QP num 263178, LID > = 777 > [0,1,0][btl_mvapi_endpoint.c:756:mca_btl_mvapi_endpoint_qp_init_query] > Modified to init..Qp 7081440 > [0,1,0][btl_mvapi_endpoint.c:791:mca_btl_mvapi_endpoint_qp_init_query] > Modified to RTR..Qp 7081440 > [0,1,0][btl_mvapi_endpoint.c:814:mca_btl_mvapi_endpoint_qp_init_query] > Modified to RTS..Qp 7081440 > [0,1,0][btl_mvapi_endpoint.c:756:mca_btl_mvapi_endpoint_qp_init_query] > Modified to init..Qp 7241888 > [0,1,0][btl_mvapi_endpoint.c:791:mca_btl_mvapi_endpoint_qp_init_query] > Modified to RTR..Qp > 7241888[0,1,0][btl_mvapi_endpoint.c:814:mca_btl_mvapi_endpoint_qp_init_q > uery] Modified to RTS..Qp 7241888 > [0,1,1][btl_mvapi_component.c:523:mca_btl_mvapi_component_progress] Got > a recv completion > > > Thanks > -Sridhar > > > > > -Original Message- > From: devel-boun...@open-mpi.org [mailto:devel-boun...@open-mpi.
Re: [O-MPI devel] Memory registration question.
On Tue, Aug 09, 2005 at 08:20:38AM -0600, Timothy B. Prins wrote:
> Gleb.
>
> The memory pool does not support overlapping registrations. The registrations are stored in a balanced tree, so which ever of the two it encounters first it will return.

This was my impression. Is this inefficient? If the wrong module is returned, the memory will have to be registered one more time. Are you planning to support overlapping registrations? I think it could be done with small changes to ob1.

> Tim Prins
>
> > Hello,
> >
> > I am trying to understand how memory registration works in openMPI and I have a question. Does mca_mpool_base_(insert|insert) interface supports overlapping registrations? If one module register memory from 0 to 100 and another from 50 to 150 what mca_mpool_base_find(80) will return to ob1 module?
> >
> > --
> > Gleb.

--
Gleb.
Re: [O-MPI devel] Memory registration question.
Gleb,

We just talked about this problem and we decided that we needed to support overlapping registrations. Our idea is to add another function to the mpool that would return a list of registrations that correspond to a base address. If you have any other ideas of how to do it, please let us know.

Thanks,

Tim Prins

> On Tue, Aug 09, 2005 at 08:20:38AM -0600, Timothy B. Prins wrote:
>> Gleb.
>>
>> The memory pool does not support overlapping registrations. The registrations are stored in a balanced tree, so which ever of the two it encounters first it will return.
>
> This was my impression. Is this inefficient? If wrong module is returned memory will have to be registered one more time. Are you planning to support overlapping registrations? I think it could be done with small changes to ob1.
>
>> Tim Prins
>>
>> > Hello,
>> >
>> > I am trying to understand how memory registration works in openMPI and I have a question. Does mca_mpool_base_(insert|insert) interface supports overlapping registrations? If one module register memory from 0 to 100 and another from 50 to 150 what mca_mpool_base_find(80) will return to ob1 module?
>> >
>> > --
>> > Gleb.
>
> --
> Gleb.
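To make the proposal concrete, here is a minimal sketch of a "find all registrations covering an address" lookup. The type and function names (mem_reg_t, reg_find_all) are invented for illustration, and a flat linked list stands in for the real balanced tree; this is not the actual mca_mpool_base interface.

#include <stddef.h>

typedef struct mem_reg {
    unsigned char  *base;    /* first byte covered by the registration   */
    size_t          len;     /* length of the registered region in bytes */
    struct mem_reg *next;    /* simple linked list instead of the real tree */
} mem_reg_t;

/* Collect every registration that covers 'addr', up to max_out entries.
 * With regions [0,100) and [50,150) registered, a lookup at offset 80
 * returns both entries. */
static size_t reg_find_all(mem_reg_t *head, unsigned char *addr,
                           mem_reg_t **out, size_t max_out)
{
    size_t n = 0;
    for (mem_reg_t *r = head; r != NULL && n < max_out; r = r->next) {
        if (addr >= r->base && addr < r->base + r->len) {
            out[n++] = r;
        }
    }
    return n;
}

The point of returning the whole list is that the caller (e.g. ob1) can pick the registration that matches the BTL it wants to use instead of re-registering the memory when the "wrong" overlapping entry happens to be found first.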
Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI
Hi On Aug 9, 2005, at 8:15 AM, Sridhar Chirravuri wrote: The same kind of output while running Pallas "pingpong" test. -Sridhar -Original Message- From: devel-boun...@open-mpi.org [mailto:devel-boun...@open-mpi.org] On Behalf Of Sridhar Chirravuri Sent: Tuesday, August 09, 2005 7:44 PM To: Open MPI Developers Subject: Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI I have run sendrecv function in Pallas but it failed to run it. Here is the output [root@micrompi-2 SRC_PMB]# mpirun -np 2 PMB-MPI1 sendrecv Could not join a running, existing universe Establishing a new one named: default-universe-5097 [0,1,1][btl_mvapi.c:130:mca_btl_mvapi_del_procs] Stub [0,1,1][btl_mvapi.c:130:mca_btl_mvapi_del_procs] Stub [0,1,0][btl_mvapi.c:130:mca_btl_mvapi_del_procs] Stub [0,1,0][btl_mvapi.c:130:mca_btl_mvapi_del_procs] Stub [0,1,0][btl_mvapi_endpoint.c:542:mca_btl_mvapi_endpoint_send] Connection to endpoint closed ... connecting ... [0,1,0][btl_mvapi_endpoint.c:318:mca_btl_mvapi_endpoint_start_connect] Initialized High Priority QP num = 263177, Low Priority QP num = 263178, LID = 785 [0,1,0][btl_mvapi_endpoint.c:190: mca_btl_mvapi_endpoint_send_connect_req ] Sending High Priority QP num = 263177, Low Priority QP num = 263178, LID = 785[0,1,0][btl_mvapi_endpoint.c:542:mca_btl_mvapi_endpoint_send] Connection to endpoint closed ... connecting ... [0,1,0][btl_mvapi_endpoint.c:318:mca_btl_mvapi_endpoint_start_connect] Initialized High Priority QP num = 263179, Low Priority QP num = 263180, LID = 786 [0,1,0][btl_mvapi_endpoint.c:190: mca_btl_mvapi_endpoint_send_connect_req ] Sending High Priority QP num = 263179, Low Priority QP num = 263180, LID = 786#--- #PALLAS MPI Benchmark Suite V2.2, MPI-1 part #--- # Date : Tue Aug 9 07:11:25 2005 # Machine: x86_64# System : Linux # Release: 2.6.9-5.ELsmp # Version: #1 SMP Wed Jan 5 19:29:47 EST 2005 # # Minimum message length in bytes: 0 # Maximum message length in bytes: 4194304 # # MPI_Datatype : MPI_BYTE # MPI_Datatype for reductions: MPI_FLOAT # MPI_Op : MPI_SUM # # # List of Benchmarks to run: # Sendrecv [0,1,1][btl_mvapi_endpoint.c:368: mca_btl_mvapi_endpoint_reply_start_conn ect] Initialized High Priority QP num = 263177, Low Priority QP num = 263178, LID = 777 [0,1,1][btl_mvapi_endpoint.c:266: mca_btl_mvapi_endpoint_set_remote_info] Received High Priority QP num = 263177, Low Priority QP num 263178, LID = 785 [0,1,1][btl_mvapi_endpoint.c:756:mca_btl_mvapi_endpoint_qp_init_query] Modified to init..Qp 7080096[0,1,1][btl_mvapi_endpoint.c:791: mca_btl_mvapi_endpoint_qp_init_q uery] Modified to RTR..Qp 7080096[0,1,1][btl_mvapi_endpoint.c:814: mca_btl_mvapi_endpoint_qp_init_q uery] Modified to RTS..Qp 7080096 [0,1,1][btl_mvapi_endpoint.c:756:mca_btl_mvapi_endpoint_qp_init_query] Modified to init..Qp 7240736 [0,1,1][btl_mvapi_endpoint.c:791:mca_btl_mvapi_endpoint_qp_init_query] Modified to RTR..Qp 7240736[0,1,1][btl_mvapi_endpoint.c:814: mca_btl_mvapi_endpoint_qp_init_q uery] Modified to RTS..Qp 7240736 [0,1,1][btl_mvapi_endpoint.c:190: mca_btl_mvapi_endpoint_send_connect_req ] Sending High Priority QP num = 263177, Low Priority QP num = 263178, LID = 777 [0,1,0][btl_mvapi_endpoint.c:266: mca_btl_mvapi_endpoint_set_remote_info] Received High Priority QP num = 263177, Low Priority QP num 263178, LID = 777 [0,1,0][btl_mvapi_endpoint.c:756:mca_btl_mvapi_endpoint_qp_init_query] Modified to init..Qp 7081440 [0,1,0][btl_mvapi_endpoint.c:791:mca_btl_mvapi_endpoint_qp_init_query] Modified to RTR..Qp 7081440 
[0,1,0][btl_mvapi_endpoint.c:814:mca_btl_mvapi_endpoint_qp_init_query] Modified to RTS..Qp 7081440 [0,1,0][btl_mvapi_endpoint.c:756:mca_btl_mvapi_endpoint_qp_init_query] Modified to init..Qp 7241888 [0,1,0][btl_mvapi_endpoint.c:791:mca_btl_mvapi_endpoint_qp_init_query] Modified to RTR..Qp 7241888[0,1,0][btl_mvapi_endpoint.c:814: mca_btl_mvapi_endpoint_qp_init_q uery] Modified to RTS..Qp 7241888 [0,1,1][btl_mvapi_component.c:523:mca_btl_mvapi_component_progress] Got a recv completion Thanks -Sridhar -Original Message- From: devel-boun...@open-mpi.org [mailto:devel-boun...@open-mpi.org] On Behalf Of Brian Barrett Sent: Tuesday, August 09, 2005 7:35 PM To: Open MPI Developers Subject: Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI On Aug 9, 2005, at 8:48 AM, Sridhar Chirravuri wrote: Does r6774 has lot of changes that are related to 3rd generation point-to-point? I am trying to run some benchmark tests (ex: pallas) with Open MPI stack and just want to compare the performance figures with MVAPICH 095 and MVAPICH 092. In order to use 3rd generation p2p communication, I have added the following line in the /openmpi/etc/openmpi-mca-params.conf pml=ob1 I also exported (as double check) OMPI_MCA_pml=ob1. Then, I
Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI
Hi Sridhar,

I have committed changes that allow you to set the debug verbosity via OMPI_MCA_btl_base_debug:
0 - no debug output
1 - standard debug output
2 - very verbose debug output

Also, we have run the Pallas tests and are not able to reproduce your failures. We do see a warning in the Reduce test, but it does not hang and runs to completion. Attached is a simple ping-pong program; try running it and let us know the results.

Thanks,

Galen

/*
 * MPI ping program
 *
 * Patterned after the example in the Quadrics documentation
 */

#define MPI_ALLOC_MEM 0

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#include "mpi.h"

/* Parse a size argument, accepting 'k'/'K' and 'm'/'M' suffixes. */
static int str2size(char *str)
{
    int size;
    char mod[32];

    switch (sscanf(str, "%d%1[mMkK]", &size, mod)) {
    case 1:
        return (size);
    case 2:
        switch (*mod) {
        case 'm':
        case 'M':
            return (size << 20);
        case 'k':
        case 'K':
            return (size << 10);
        default:
            return (size);
        }
    default:
        return (-1);
    }
}

static void usage(void)
{
    fprintf(stderr,
            "Usage: mpi-ping [flags] <nbytes> [<max nbytes>] [<inc bytes>]\n"
            "       mpi-ping -h\n");
    exit(EXIT_FAILURE);
}

static void help(void)
{
    printf("Usage: mpi-ping [flags] <nbytes> [<max nbytes>] [<inc bytes>]\n"
           "\n"
           "   Flags may be any of\n"
           "      -B          use blocking send/recv\n"
           "      -C          check data\n"
           "      -O          overlapping pings\n"
           "      -W          perform warm-up phase\n"
           "      -r number   repetitions to time\n"
           "      -A          use MPI_Alloc_mem to register memory\n"
           "      -h          print this info\n"
           "\n"
           "   Numbers may be postfixed with 'k' or 'm'\n\n");
    exit(EXIT_SUCCESS);
}

int main(int argc, char *argv[])
{
    MPI_Status status;
    MPI_Request recv_request;
    MPI_Request send_request;
    unsigned char *rbuf;
    unsigned char *tbuf;
    int c;
    int i;
    int bytes;
    int nproc;
    int peer;
    int proc;
    int r;
    int tag = 0x666;

    /*
     * default options / arguments
     */
    int reps = 1;
    int blocking = 0;
    int check = 0;
    int overlap = 0;
    int warmup = 0;
    int inc_bytes = 0;
    int max_bytes = 0;
    int min_bytes = 0;
    int alloc_mem = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &proc);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    while ((c = getopt(argc, argv, "BCOWAr:h")) != -1) {
        switch (c) {
        case 'B':
            blocking = 1;
            break;
        case 'C':
            check = 1;
            break;
        case 'O':
            overlap = 1;
            break;
        case 'W':
            warmup = 1;
            break;
        case 'A':
            alloc_mem = 1;
            break;
        case 'r':
            if ((reps = str2size(optarg)) <= 0) {
                usage();
            }
            break;
        case 'h':
            help();
        default:
            usage();
        }
    }

    /* Optional positional arguments: min bytes, max bytes, increment. */
    if (optind == argc) {
        min_bytes = 0;
    } else if ((min_bytes = str2size(argv[optind++])) < 0) {
        usage();
    }

    if (optind == argc) {
        max_bytes = min_bytes;
    } else if ((max_bytes = str2size(argv[optind++])) < min_bytes) {
        usage();
    }

    if (optind == argc) {
        inc_bytes = 0;
    } else if ((inc_bytes = str2size(argv[optind++])) < 0) {
        usage();
    }

    if (nproc == 1) {
        exit(EXIT_SUCCESS);
    }

#if MPI_ALLOC_MEM
    if (alloc_mem) {
        MPI_Alloc_mem(max_bytes ? max_bytes : 8, MPI_INFO_NULL, &rbuf);
        MPI_Alloc_mem(max_bytes ? max_bytes : 8, MPI_INFO_NULL, &tbuf);
    } else {
#endif
    if ((rbuf = (unsigned char *) malloc(max_bytes ? max_bytes : 8)) == NULL) {
        perror("malloc");
        exit(EXIT_FAILURE);
    }
    if ((tbuf = (unsigned char *) malloc(max_bytes ?
                                          max_bytes : 8)) == NULL) {
        perror("malloc");
        exit(EXIT_FAILURE);
    }
#if MPI_ALLOC_MEM
    }
#endif

    /* Fill the send buffer with a known pattern if data checking is on. */
    if (check) {
        for (i = 0; i < max_bytes; i++) {
            tbuf[i] = i & 255;
            rbuf[i] = 0;
        }
    }

    if (proc == 0) {
        if (overlap) {
            printf("mpi-ping: overlapping ping-pong\n");
        } else if (blocking) {
            printf("mpi-ping: ping-pong (using blocking send/recv)\n");
        } else {
            printf("mpi-ping: ping-pong\n");
        }
        if (check) {
            printf("data checking enabled\n");
        }
        printf("nprocs=%d, reps=%d, min bytes=%d, max bytes=%d inc bytes=%d\n",
               nproc, reps, min_bytes, max_bytes, inc_bytes);
        fflush(stdout);
    }

    MPI_Barrier(MPI_COMM_WORLD);

    /* Pair up ranks: each even rank pings the next odd rank. */
    peer = proc ^ 1;

    if ((peer < nproc) && (peer & 1)) {
        printf("%d pings %d\n", proc, peer);
        fflush(stdout
[O-MPI devel] memory manager hooks
Hi all -

We finally broke down today on the telecon and decided that there's just no way around playing memory manager tricks to get good IB and Myrinet performance. I added two things to opal today: a dispatch system, so that different functions can register to receive callbacks (containing start address and length data) whenever the process is about to "release" memory, and the ptmalloc2 memory manager. Note that "release" is very vague: this could mean free() has been called by the user but the process is going to hold on to the memory, or it could mean that the process is giving the memory back to the operating system; it all depends on what the back end is capable of.

The ptmalloc2 memory manager is currently the only system we have for intercepting the release of memory, and it must be enabled explicitly at configure time with --with-memory-manager=ptmalloc2. There is a really simple example of using the system in topdir/test/memory/opal_memory_basic.c.

I plan on adding a couple more backends to experiment with various systems and their advantages / disadvantages:
- LD_PRELOAD a shared object to intercept sbrk / munmap
- (possibly) a system to use the GLIBC hooks, although bookkeeping might make this impractical
- Something with Darwin. Need to get back in touch with some Apple engineers on how to do this in a way that sucks less.

One word of caution: if you register a handler from a component, you *must* unregister the handler before your component is closed. Otherwise, the process is going to segfault when it tries to call the handler after your component is unloaded.

Brian

--
Brian Barrett
Open MPI developer
http://www.open-mpi.org/
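As a rough, self-contained illustration of the register/callback/unregister pattern described above: the sketch below models a toy release dispatcher in plain C. Every name in it (mem_hooks_register_release, mem_hooks_unregister_release, mem_hooks_release, my_btl_release_cb) is invented for this example and is not the actual opal interface; the real API is the one exercised by topdir/test/memory/opal_memory_basic.c.

/* Toy model of a memory-release dispatch system (illustrative names only). */
#include <stdio.h>
#include <stddef.h>

#define MAX_HOOKS 8

typedef void (*mem_release_cb_t)(void *buf, size_t length, void *cbdata);

struct hook { mem_release_cb_t cb; void *cbdata; };
static struct hook hooks[MAX_HOOKS];
static int num_hooks = 0;

/* Register a callback to be invoked before memory is "released". */
static int mem_hooks_register_release(mem_release_cb_t cb, void *cbdata)
{
    if (num_hooks >= MAX_HOOKS) return -1;
    hooks[num_hooks].cb = cb;
    hooks[num_hooks].cbdata = cbdata;
    num_hooks++;
    return 0;
}

/* Unregister a callback -- a component must do this before it is closed,
 * otherwise the dispatcher would later call into unloaded code. */
static int mem_hooks_unregister_release(mem_release_cb_t cb)
{
    int i;
    for (i = 0; i < num_hooks; i++) {
        if (hooks[i].cb == cb) {
            hooks[i] = hooks[num_hooks - 1];
            num_hooks--;
            return 0;
        }
    }
    return -1;
}

/* The memory-manager back end (e.g. ptmalloc2) would call this just before
 * a region is handed back to the OS or otherwise "released". */
static void mem_hooks_release(void *buf, size_t length)
{
    int i;
    for (i = 0; i < num_hooks; i++) {
        hooks[i].cb(buf, length, hooks[i].cbdata);
    }
}

/* Example consumer: an interconnect component that wants to invalidate
 * cached registrations for the region being released. */
static void my_btl_release_cb(void *buf, size_t length, void *cbdata)
{
    printf("release hook: %lu bytes at %p\n", (unsigned long) length, buf);
}

int main(void)
{
    char buffer[4096];

    mem_hooks_register_release(my_btl_release_cb, NULL);
    mem_hooks_release(buffer, sizeof(buffer));   /* simulated release event */
    mem_hooks_unregister_release(my_btl_release_cb);
    return 0;
}

The essential point the sketch tries to capture is the pairing: whatever registers in a component's open path must unregister in its close path, since the dispatcher itself outlives any individual component.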
Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI
I ran all the Pallas tests and the same error happens: we try to malloc 0 bytes and we hang somewhere. Let me explain what I found. First of all, most of the tests seem to work perfectly (at least with the PTLs/BTLs I was able to run: sm, tcp, mx). The deadlock as well as the memory allocation problem happens in the reduce_scatter operation.

Problem 1: allocating 0 bytes
- It's not a datatype problem. The datatype returns the correct extent, true_extent, and lb. The problem is that we miss one case in the collective communications. What about the case where the user does a reduce_scatter with all of the counts set to zero? We check that they are greater than or equal to zero, and that is the case. Then we add them together and, as expected, the sum of zeros is zero. So in coll_basic_reduce_scatter, line 79, we will allocate zero bytes, because the extent and the true_extent of the MPI_FLOAT datatype are equal and (count - 1) is -1! There is a simple fix for this problem: if count == 0, then free_buffer should be set to NULL (since we don't send or receive anything in this buffer, it will just work fine at the PTL/PML level).
- The same error can happen in the reduce function if the count is zero. I will protect this function too.

Problem 2: hanging
- Somehow a strange optimization got into the scatterv function. In the case where the sender has to send zero bytes, it completely skips the send operation, but the receiver still expects to get a message. Anyway, this optimization is not correct; all messages have to be sent. I know that it can (slightly) increase the time for the collective, but it gives us a simple way of checking the correctness of the global communication (as the PML handles the truncate case). The patch is on the way.

Once these 2 problems are corrected, we pass all the Pallas MPI1 tests. I ran the tests with the ob1, teg, and uniq PMLs, and with the sm, tcp, gm (PTL), and mx (PTL) PTLs/BTLs, with 2 and 8 processes.

george.

PS: the patches will be committed soon.
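To make the Problem 1 fix concrete, here is a minimal sketch of the zero-count guard. The helper name and signature are invented for illustration; only the allocation formula (true_extent + (count - 1) * extent) and the "set free_buffer to NULL when count == 0" idea come from the description above. This is not the actual coll_basic_reduce_scatter code.

#include <stdlib.h>
#include <mpi.h>   /* assumes an MPI installation for MPI_Aint and error codes */

/* Illustrative helper: allocate the temporary reduce buffer, guarding the
 * count == 0 case so we never ask malloc for true_extent + (-1) * extent
 * (zero or negative) bytes. */
static int alloc_reduce_buffer(int count, MPI_Aint extent, MPI_Aint true_extent,
                               char **free_buffer, char **pml_buffer)
{
    if (0 == count) {
        /* Nothing will be sent or received through this buffer, so a NULL
         * buffer is fine at the PTL/PML level. */
        *free_buffer = NULL;
        *pml_buffer = NULL;
        return MPI_SUCCESS;
    }

    /* Usual case: room for 'count' elements of the datatype. */
    *free_buffer = (char *) malloc(true_extent + (count - 1) * extent);
    if (NULL == *free_buffer) {
        return MPI_ERR_NO_MEM;
    }
    *pml_buffer = *free_buffer;   /* the real code also adjusts for the lower bound */
    return MPI_SUCCESS;
}

The same guard applies to the zero-count reduce case mentioned above. Problem 2 is independent of this: the scatterv sender must post the send even when it is zero bytes, so that the receiver's matching receive completes.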