Have you tried Google Scholar yet? Always exhaust all nonhuman resources before requesting human assistance. The human brain is a terrible resource to waste when a computer can do the job.
Jeff

Sent from my iPhone

On Oct 3, 2013, at 10:18 AM, Yin Zhao <yin_z...@126.com> wrote:

> Hi all,
>
> Has anybody done experiments comparing the speed of MPICH and Open MPI?
>
> Best regards,
> Yin Zhao
>
>> On Oct 3, 2013, at 0:00, users-requ...@open-mpi.org wrote:
>>
>> Send users mailing list submissions to
>>     us...@open-mpi.org
>>
>> To subscribe or unsubscribe via the World Wide Web, visit
>>     http://www.open-mpi.org/mailman/listinfo.cgi/users
>> or, via email, send a message with subject or body 'help' to
>>     users-requ...@open-mpi.org
>>
>> You can reach the person managing the list at
>>     users-ow...@open-mpi.org
>>
>> When replying, please edit your Subject line so it is more specific
>> than "Re: Contents of users digest..."
>>
>>
>> Today's Topics:
>>
>>    1. Re: Error compiling openmpi-1.9a1r29292 (Jeff Squyres (jsquyres))
>>    2. Re: non-functional mpif90 compiler (Gus Correa)
>>    3. Re: non-functional mpif90 compiler (Jeff Squyres (jsquyres))
>>    4. CUDA-aware usage (Rolf vandeVaart)
>>
>>
>> ----------------------------------------------------------------------
>>
>> Message: 1
>> Date: Tue, 1 Oct 2013 18:38:15 +0000
>> From: "Jeff Squyres (jsquyres)" <jsquy...@cisco.com>
>> To: Siegmar Gross <siegmar.gr...@informatik.hs-fulda.de>, Open MPI Users <us...@open-mpi.org>
>> Subject: Re: [OMPI users] Error compiling openmpi-1.9a1r29292
>> Message-ID: <ef66bbeb19badc41ac8ccf5f684f07fc4f913...@xmb-rcd-x01.cisco.com>
>> Content-Type: text/plain; charset="us-ascii"
>>
>> These should now be fixed.
>>
>>> On Sep 30, 2013, at 3:41 AM, Siegmar Gross <siegmar.gr...@informatik.hs-fulda.de> wrote:
>>>
>>> Hi,
>>>
>>> Today I tried to install openmpi-1.9a1r29292 on my platforms
>>> (openSuSE 12.1 Linux x86_64, Solaris 10 x86_64, and Solaris 10 Sparc)
>>> with Sun C 5.12 and gcc-4.8.0. I get the following error on all
>>> platforms when I compile a 32- or 64-bit version with Sun C.
>>>
>>> ...
>>>   PPFC     mpi-f08-interfaces.lo
>>>
>>> module mpi_f08_interfaces
>>>        ^
>>> "../../../../../openmpi-1.9a1r29292/ompi/mpi/fortran/base/mpi-f08-interfaces.F90",
>>>   Line = 19, Column = 8: ERROR: The compiler has detected errors in
>>>   module "MPI_F08_INTERFACES". No module information file will be
>>>   created for this module.
>>>
>>> use :: mpi_f08_types, only : MPI_Datatype, MPI_Comm, MPI_Aint, MPI_ADDRESS_KIND
>>>                                                         ^
>>> "../../../../../openmpi-1.9a1r29292/ompi/mpi/fortran/base/mpi-f08-interfaces.F90",
>>>   Line = 4419, Column = 57: ERROR: "MPI_AINT" is not in module "MPI_F08_TYPES".
>>>
>>> f90comp: 4622 SOURCE LINES
>>> f90comp: 2 ERRORS, 0 WARNINGS, 0 OTHER MESSAGES, 0 ANSI
>>> make[2]: *** [mpi-f08-interfaces.lo] Error 1
>>> make[2]: Leaving directory `.../openmpi-1.9a1r29292-Linux.x86_64.64_cc/ompi/mpi/fortran/base'
>>> make[1]: *** [all-recursive] Error 1
>>> make[1]: Leaving directory `.../openmpi-1.9a1r29292-Linux.x86_64.64_cc/ompi'
>>> make: *** [all-recursive] Error 1
>>> linpc1 openmpi-1.9a1r29292-Linux.x86_64.64_cc 122
>>>
>>>
>>> I get the following error on all platforms when I compile a 32-bit
>>> version with gcc-4.8.0.
>>>
>>> linpc1 openmpi-1.9a1r29292-Linux.x86_64.32_gcc 120 tail -150 log.make.Linux.x86_64.32_gcc
>>> Making all in mca/spml
>>> make[2]: Entering directory `/export2/src/openmpi-1.9/openmpi-1.9a1r29292-Linux.x86_64.32_gcc/oshmem/mca/spml'
>>>   CC       base/spml_base_frame.lo
>>>   CC       base/spml_base_select.lo
>>>   CC       base/spml_base_request.lo
>>>   CC       base/spml_base_atomicreq.lo
>>>   CC       base/spml_base_getreq.lo
>>>   CC       base/spml_base_putreq.lo
>>>   CC       base/spml_base.lo
>>>   CCLD     libmca_spml.la
>>> make[2]: Leaving directory `/export2/src/openmpi-1.9/openmpi-1.9a1r29292-Linux.x86_64.32_gcc/oshmem/mca/spml'
>>> Making all in .
>>> make[2]: Entering directory `/export2/src/openmpi-1.9/openmpi-1.9a1r29292-Linux.x86_64.32_gcc/oshmem'
>>>   CC       op/op.lo
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c: In function 'oshmem_op_max_freal16_func':
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:134:15: error: 'a' undeclared (first use in this function)
>>>      type *a = (type *) in; \
>>>                ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:194:1: note: in expansion of macro 'FUNC_OP_CREATE'
>>>  FUNC_OP_CREATE(max, freal16, ompi_fortran_real16_t, __max_op);
>>>  ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:134:15: note: each undeclared identifier is reported only once for each function it appears in
>>>      type *a = (type *) in; \
>>>                ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:194:1: note: in expansion of macro 'FUNC_OP_CREATE'
>>>  FUNC_OP_CREATE(max, freal16, ompi_fortran_real16_t, __max_op);
>>>  ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:134:26: error: expected expression before ')' token
>>>      type *a = (type *) in; \
>>>                           ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:194:1: note: in expansion of macro 'FUNC_OP_CREATE'
>>>  FUNC_OP_CREATE(max, freal16, ompi_fortran_real16_t, __max_op);
>>>  ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:135:15: error: 'b' undeclared (first use in this function)
>>>      type *b = (type *) out; \
>>>                ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:194:1: note: in expansion of macro 'FUNC_OP_CREATE'
>>>  FUNC_OP_CREATE(max, freal16, ompi_fortran_real16_t, __max_op);
>>>  ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:135:26: error: expected expression before ')' token
>>>      type *b = (type *) out; \
>>>                           ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:194:1: note: in expansion of macro 'FUNC_OP_CREATE'
>>>  FUNC_OP_CREATE(max, freal16, ompi_fortran_real16_t, __max_op);
>>>  ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c: In function 'oshmem_op_min_freal16_func':
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:134:15: error: 'a' undeclared
(first use in this function)
>>>      type *a = (type *) in; \
>>>                ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:211:1: note: in expansion of macro 'FUNC_OP_CREATE'
>>>  FUNC_OP_CREATE(min, freal16, ompi_fortran_real16_t, __min_op);
>>>  ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:134:26: error: expected expression before ')' token
>>>      type *a = (type *) in; \
>>>                           ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:211:1: note: in expansion of macro 'FUNC_OP_CREATE'
>>>  FUNC_OP_CREATE(min, freal16, ompi_fortran_real16_t, __min_op);
>>>  ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:135:15: error: 'b' undeclared (first use in this function)
>>>      type *b = (type *) out; \
>>>                ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:211:1: note: in expansion of macro 'FUNC_OP_CREATE'
>>>  FUNC_OP_CREATE(min, freal16, ompi_fortran_real16_t, __min_op);
>>>  ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:135:26: error: expected expression before ')' token
>>>      type *b = (type *) out; \
>>>                           ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:211:1: note: in expansion of macro 'FUNC_OP_CREATE'
>>>  FUNC_OP_CREATE(min, freal16, ompi_fortran_real16_t, __min_op);
>>>  ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c: In function 'oshmem_op_sum_freal16_func':
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:134:15: error: 'a' undeclared (first use in this function)
>>>      type *a = (type *) in; \
>>>                ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:230:1: note: in expansion of macro 'FUNC_OP_CREATE'
>>>  FUNC_OP_CREATE(sum, freal16, ompi_fortran_real16_t, __sum_op);
>>>  ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:134:26: error: expected expression before ')' token
>>>      type *a = (type *) in; \
>>>                           ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:230:1: note: in expansion of macro 'FUNC_OP_CREATE'
>>>  FUNC_OP_CREATE(sum, freal16, ompi_fortran_real16_t, __sum_op);
>>>  ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:135:15: error: 'b' undeclared (first use in this
function)
>>>      type *b = (type *) out; \
>>>                ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:230:1: note: in expansion of macro 'FUNC_OP_CREATE'
>>>  FUNC_OP_CREATE(sum, freal16, ompi_fortran_real16_t, __sum_op);
>>>  ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:135:26: error: expected expression before ')' token
>>>      type *b = (type *) out; \
>>>                           ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:230:1: note: in expansion of macro 'FUNC_OP_CREATE'
>>>  FUNC_OP_CREATE(sum, freal16, ompi_fortran_real16_t, __sum_op);
>>>  ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c: In function 'oshmem_op_prod_freal16_func':
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:134:15: error: 'a' undeclared (first use in this function)
>>>      type *a = (type *) in; \
>>>                ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:249:1: note: in expansion of macro 'FUNC_OP_CREATE'
>>>  FUNC_OP_CREATE(prod, freal16, ompi_fortran_real16_t, __prod_op);
>>>  ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:134:26: error: expected expression before ')' token
>>>      type *a = (type *) in; \
>>>                           ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:249:1: note: in expansion of macro 'FUNC_OP_CREATE'
>>>  FUNC_OP_CREATE(prod, freal16, ompi_fortran_real16_t, __prod_op);
>>>  ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:135:15: error: 'b' undeclared (first use in this function)
>>>      type *b = (type *) out; \
>>>                ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:249:1: note: in expansion of macro 'FUNC_OP_CREATE'
>>>  FUNC_OP_CREATE(prod, freal16, ompi_fortran_real16_t, __prod_op);
>>>  ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:135:26: error: expected expression before ')' token
>>>      type *b = (type *) out; \
>>>                           ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:249:1: note: in expansion of macro 'FUNC_OP_CREATE'
>>>  FUNC_OP_CREATE(prod, freal16, ompi_fortran_real16_t, __prod_op);
>>>  ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c: In function 'oshmem_op_init':
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:149:62: error: expected expression before ')' token
>>>      oshmem_op_##name##_##type_name->dt_size = sizeof(type);            \
>>>                                                              ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:302:5: note: in expansion of macro 'OBJ_OP_CREATE'
>>>      OBJ_OP_CREATE(max, freal16, ompi_fortran_real16_t, OSHMEM_OP_MAX, OSHMEM_OP_TYPE_FREAL16);
>>>      ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:149:62: error: expected expression before ')' token
>>>      oshmem_op_##name##_##type_name->dt_size = sizeof(type);            \
>>>                                                              ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:318:5: note: in expansion of macro 'OBJ_OP_CREATE'
>>>      OBJ_OP_CREATE(min, freal16, ompi_fortran_real16_t, OSHMEM_OP_MIN, OSHMEM_OP_TYPE_FREAL16);
>>>      ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:149:62: error: expected expression before ')' token
>>>      oshmem_op_##name##_##type_name->dt_size = sizeof(type);            \
>>>                                                              ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:336:5: note: in expansion of macro 'OBJ_OP_CREATE'
>>>      OBJ_OP_CREATE(sum, freal16, ompi_fortran_real16_t, OSHMEM_OP_SUM, OSHMEM_OP_TYPE_FREAL16);
>>>      ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:149:62: error: expected expression before ')' token
>>>      oshmem_op_##name##_##type_name->dt_size = sizeof(type);            \
>>>                                                              ^
>>> ../../openmpi-1.9a1r29292/oshmem/op/op.c:354:5: note: in expansion of macro 'OBJ_OP_CREATE'
>>>      OBJ_OP_CREATE(prod, freal16, ompi_fortran_real16_t, OSHMEM_OP_PROD, OSHMEM_OP_TYPE_FREAL16);
>>>      ^
>>> make[2]: *** [op/op.lo] Error 1
>>> make[2]: Leaving directory `/export2/src/openmpi-1.9/openmpi-1.9a1r29292-Linux.x86_64.32_gcc/oshmem'
>>> make[1]: *** [all-recursive] Error 1
>>> make[1]: Leaving directory `/export2/src/openmpi-1.9/openmpi-1.9a1r29292-Linux.x86_64.32_gcc/oshmem'
>>> make: *** [all-recursive] Error 1
>>> linpc1 openmpi-1.9a1r29292-Linux.x86_64.32_gcc 121
>>>
>>>
>>> I would be grateful if somebody could fix these problems.
>>> Thank you very much for your help in advance.
>>>
>>> Kind regards
>>>
>>> Siegmar
>>>
>>> _______________________________________________
>>> users mailing list
>>> us...@open-mpi.org
>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>
>> --
>> Jeff Squyres
>> jsquy...@cisco.com
>> For corporate legal information go to:
>> http://www.cisco.com/web/about/doing_business/legal/cri/
>>
>>
>> ------------------------------
>>
>> Message: 2
>> Date: Tue, 01 Oct 2013 15:07:42 -0400
>> From: Gus Correa <g...@ldeo.columbia.edu>
>> To: Open MPI Users <us...@open-mpi.org>
>> Subject: Re: [OMPI users] non-functional mpif90 compiler
>> Message-ID: <524b1d7e.7040...@ldeo.columbia.edu>
>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>>
>> Hi Damiano
>>
>> Glad to know you sorted out the problem with
>> your environment variables.
>> Home-brewed Open MPI tastes much better
>> than those six-pack canned RPMs.
>> Did OpenFOAM (was it OpenFoam?) eventually compile and run?
>>
>> Thanks, Jeff, for clarifying where the hurdle is (on InfiniBand)
>> when one tries to build Open MPI as a group of static libraries.
>> I tried to build OMPI static and had failures in the past.
>> I eventually gave up, but it was never clear why it would fail
>> (although the error messages suggested that IB played a role).
>> However, all my attempts were on machines with InfiniBand hardware.
>>
>> Good to know that one can build OMPI static on TCP/IP machines,
>> and presumably also on (non-IB) standalone machines that will
>> only use the shared-memory features of OMPI.
>> This may (or may not?) be useful when one intends to run MPI
>> programs in a single box.
>> (I wonder if this is what Damiano plans to do.)
>> I know these questions may be somewhat off-topic,
>> but from the standpoint of performance,
>> once upon a time there was a popular wisdom that
>> static linking produced faster executables
>> than linking against shared libraries
>> (static producing larger executables, though).
>> Is this "static is faster" view still valid?
>> Does it apply to Open MPI in particular?
>> Was it ever true?
>>
>> Many thanks,
>> Gus Correa
>>
>>> On 10/01/2013 05:16 AM, Jeff Squyres (jsquyres) wrote:
>>> If you are using a TCP network for MPI communications, static is fine.
>>>
>>> However, if you're trying to use an OS-bypass network such as
>>> InfiniBand, RoCE, or iWARP, using static libraries can be
>>> somewhat of a nightmare (because of how the OpenFabrics Verbs
>>> support libraries work).
>>> Specifically, I don't see the "openib" BTL plugin in your
>>> ompi_info output, meaning that your Open MPI installation
>>> is not capable of using InfiniBand/RoCE/iWARP.
>>>
>>> So just be aware that with your current builds,
>>> you're basically TCP-only.
>>>
>>>
>>>> On Oct 1, 2013, at 3:34 AM, Damiano Natali <damiano.nat...@gmail.com> wrote:
>>>>
>>>> Hi Gus, today I noticed there was another OMPI directory in my path,
>>>> and it may have been causing some strange errors, so I put the new
>>>> OMPI installation first in PATH and LD_LIBRARY_PATH before building,
>>>> and everything went nicely!
>>>>
>>>> So, as you and Jeff said, the problem was in having the right paths!
>>>>
>>>> Thank you very much for your support!
>>>>
>>>> Damiano
>>>>
>>>> p.s. Building static libraries didn't result in any problem so far!
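[Editor's note: for readers following the static-build discussion, the shape of such a build is roughly the sketch below. The prefix path and the application name `ping.c` are illustrative, not from the thread; the configure flags are the standard Open MPI/libtool vocabulary. Per Jeff's caveat, expect trouble doing this on InfiniBand/RoCE/iWARP systems.]

```shell
# Build Open MPI itself as static archives (.a) instead of shared libraries;
# --disable-dlopen additionally compiles the plugins into the library
# rather than loading them at runtime.
./configure --prefix=$HOME/openmpi-static \
            --enable-static --disable-shared --disable-dlopen
make -j4 all install

# Link an application against the static libraries; on a TCP/shared-memory
# machine a fully static link is also possible:
$HOME/openmpi-static/bin/mpicc ping.c -o ping -static

# Sanity check: a fully static executable reports no dynamic dependencies.
ldd ./ping
```

On Gus's performance question: with `--disable-dlopen` the usual claimed win is not raw speed but avoiding dlopen overhead and library-path headaches at launch; any "static is faster" effect on the compute path is generally small.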
>>>> _______________________________________________
>>>> users mailing list
>>>> us...@open-mpi.org
>>>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>
>>
>> ------------------------------
>>
>> Message: 3
>> Date: Tue, 1 Oct 2013 19:19:01 +0000
>> From: "Jeff Squyres (jsquyres)" <jsquy...@cisco.com>
>> To: Open MPI Users <us...@open-mpi.org>
>> Subject: Re: [OMPI users] non-functional mpif90 compiler
>> Message-ID: <ef66bbeb19badc41ac8ccf5f684f07fc4f913...@xmb-rcd-x01.cisco.com>
>> Content-Type: text/plain; charset="us-ascii"
>>
>>> On Oct 1, 2013, at 3:07 PM, Gus Correa <g...@ldeo.columbia.edu> wrote:
>>>
>>> Thanks, Jeff, for clarifying where the hurdle is (on InfiniBand)
>>> when one tries to build Open MPI as a group of static libraries.
>>> I tried to build OMPI static and had failures in the past.
>>> I eventually gave up, but it was never clear why it would fail
>>> (although the error messages suggested that IB played a role).
>>> However, all my attempts were on machines with InfiniBand hardware.
>>
>> If you care:
>>
>> http://www.open-mpi.org/faq/?category=mpi-apps#static-mpi-apps
>> http://www.open-mpi.org/faq/?category=mpi-apps#static-ofa-mpi-apps
>>
>> --
>> Jeff Squyres
>> jsquy...@cisco.com
>> For corporate legal information go to:
>> http://www.cisco.com/web/about/doing_business/legal/cri/
>>
>>
>> ------------------------------
>>
>> Message: 4
>> Date: Tue, 1 Oct 2013 14:43:03 -0700
>> From: Rolf vandeVaart <rvandeva...@nvidia.com>
>> To: "us...@open-mpi.org" <us...@open-mpi.org>
>> Subject: [OMPI users] CUDA-aware usage
>> Message-ID: <3af945ebf4d3ec41afe44eed9b0585f36007be8...@hqmail02.nvidia.com>
>> Content-Type: text/plain; charset="us-ascii"
>>
>> We have done some work over the last year or two to add CUDA-aware
>> support to the Open MPI library. Details on building and using the
>> feature are here.
>> http://www.open-mpi.org/faq/?category=building#build-cuda
>> http://www.open-mpi.org/faq/?category=running#mpi-cuda-support
>>
>> I am looking for any feedback on this feature from anyone who has taken
>> advantage of it. You can just send the response to me if you want, and
>> I will compile the feedback.
>>
>> Rolf
>>
>> -----------------------------------------------------------------------------------
>> This email message is for the sole use of the intended recipient(s) and may contain
>> confidential information. Any unauthorized review, use, disclosure or distribution
>> is prohibited. If you are not the intended recipient, please contact the sender by
>> reply email and destroy all copies of the original message.
>> -----------------------------------------------------------------------------------
>> -------------- next part --------------
>> HTML attachment scrubbed and removed
>>
>> ------------------------------
>>
>> Subject: Digest Footer
>>
>> _______________________________________________
>> users mailing list
>> us...@open-mpi.org
>> http://www.open-mpi.org/mailman/listinfo.cgi/users
>>
>> ------------------------------
>>
>> End of users Digest, Vol 2696, Issue 1
>> **************************************
>
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
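[Editor's note: for readers wondering what "CUDA-aware" buys in practice, the essential difference is that a CUDA-aware build accepts GPU device pointers directly in MPI calls, instead of requiring a manual cudaMemcpy to a host staging buffer first. A minimal sketch, assuming a CUDA-aware Open MPI build per the FAQ links above and two ranks on CUDA-capable nodes; illustrative only, not taken from the FAQ:]

```c
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Allocate DEVICE memory; with a CUDA-aware build this pointer
     * can be handed straight to MPI_Send/MPI_Recv. */
    double *d_buf;
    cudaMalloc((void **)&d_buf, 1024 * sizeof(double));

    if (rank == 0)
        MPI_Send(d_buf, 1024, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(d_buf, 1024, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}
```

With a non-CUDA-aware build the same code would fail (or crash), since plain MPI cannot dereference device memory; that is the staging boilerplate the feature removes.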