I would also suspect some optimization is occurring in the HP test case, either 
from the compiler or from tuning, as that large a speed difference isn't 
commonly observed.
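
FWIW, to rule that out I would rebuild LAMMPS with identical compiler flags under
both stacks and time the same benchmark input, with the same core count and with
process binding enabled, so that only the MPI library differs. A minimal sketch,
assuming an Open MPI install under /usr/local and the standard LAMMPS bench inputs
(the binary names lmp_openmpi and lmp_intelmpi below are just placeholders):

    # same input, same core count, binding on; only the MPI stack differs
    cd ~/lammps/bench
    time /usr/local/bin/mpirun -np 8 --bind-to-core ./lmp_openmpi -in in.lj
    time /opt/intel/impi/4.1.0.024/intel64/bin/mpirun -np 8 ./lmp_intelmpi -in in.lj

(Intel MPI pins processes by default; Open MPI 1.6 does not, hence the
--bind-to-core.)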


On Oct 29, 2012, at 7:54 AM, Reuti <re...@staff.uni-marburg.de> wrote:

> On 29.10.2012, at 14:49, Giuseppe P. wrote:
> 
>> Thank you very much guys. Now a more serious issue:
>> 
>> I am using MPI with LAMMPS (a molecular dynamics package) on a single rack-mount 
>> Dell PowerEdge R810 server
>> (4 eight-core processors, 128 GB of RAM).
>> I am now potentially interested in buying the Intel MPI 4.1 libraries, and 
>> I am trying them out via the
>> 30-day trial. However, I am not seeing any significant performance 
>> improvement from
>> the Intel MPI libraries with respect to Open MPI (compiled with the Intel 
>> compilers).
>> 
>> Here is the working (makefile) configuration for the Intel MPI 4.1 
>> compilers:
>> CC =            /opt/intel/impi/4.1.0.024/intel64/bin/mpiicpc
>> CCFLAGS =       -O -DMPICH_IGNORE_CXX_SEEK -DMPICH_SKIP_MPICXX
>> 
>> And here is the Open MPI one:
>> CC =            /usr/local/bin/mpicc
>> CCFLAGS =       -O -mpicc
>> 
>> I also tried the -O3 flag, but I detected no significant difference in 
>> performance.
>> Now, I would consider buying the Intel MPI libraries, provided this brings a 
>> significant increase in performance with respect to Open MPI.
> 
> Why - because of -O3 (which I would consider dangerous) or because it's from 
> Intel? Intel MPI is based on MPICH2, not Open MPI.
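
FWIW, a quick way to confirm which stack (and which back-end compiler) a wrapper
actually drives is to ask the wrapper itself; if I have the spellings right, Open
MPI's wrappers take -showme, while MPICH-derived ones such as Intel MPI take
-show. Using the paths quoted above:

    /usr/local/bin/mpicc -showme                          # Open MPI: prints the real compile/link line
    /opt/intel/impi/4.1.0.024/intel64/bin/mpiicpc -show   # Intel MPI: same idea, MPICH-style option
    mpirun --version                                      # identifies which mpirun is found first in PATH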
> 
> 
>> I have evidence that there is room for improvement: under the same conditions, 
>> LAMMPS on an HP Z650 with two 6-core processors (the clock frequency is the 
>> same, though,
>> and the comparison tests were parallel runs on 8 cores) improves by nearly 70% 
>> when using the proprietary HP MPI libraries.
> 
> NB: The HP MPI libraries made their way to Platform Computing and then to IBM.
> 
> -- Reuti
> 
> 
>> Kind regards
>> 
>> Giuseppe
>> 
>> 
>> 2012/10/27 Ralph Castain <r...@open-mpi.org>
>> The reason is that you aren't actually running Open MPI; those error 
>> messages are coming from MPICH. Check your PATH and ensure you put the OMPI 
>> install location first, or use the absolute path to the OMPI mpirun.
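
For example, assuming Open MPI was installed under /usr/local (which is what the
mpivars.sh further down suggests), something along these lines:

    which mpirun                                # what the shell currently finds
    mpirun --version                            # should report Open MPI, not an MPICH/Hydra runtime
    export PATH=/usr/local/bin:$PATH            # put the OMPI install first ...
    export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
    mpirun -np 4 ./hello_c.x                    # ... or skip the PATH change and call /usr/local/bin/mpirun directly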
>> 
>> On Oct 27, 2012, at 8:46 AM, Giuseppe P. <istruzi...@gmail.com> wrote:
>> 
>>> Hello!
>>> 
>>> I have built Open MPI 1.6 with the Intel compilers (2013 versions). Compilation 
>>> was smooth; however, even trying to execute
>>> the simple hello_c program fails:
>>> 
>>> mpirun -np 4 ./hello_c.x
>>> [mpie...@claudio.ukzn] HYDU_create_process (./utils/launch/launch.c:102): 
>>> execvp error on file 
>>> /opt/intel/composer_xe_2013.0.079/mpirt/bin/intel64/pmi_proxy (No such file 
>>> or directory)
>>> [mpie...@claudio.ukzn] HYD_pmcd_pmiserv_proxy_init_cb 
>>> (./pm/pmiserv/pmiserv_cb.c:1177): assert (!closed) failed
>>> [mpie...@claudio.ukzn] HYDT_dmxu_poll_wait_for_event 
>>> (./tools/demux/demux_poll.c:77): callback returned error status
>>> [mpie...@claudio.ukzn] HYD_pmci_wait_for_completion 
>>> (./pm/pmiserv/pmiserv_pmci.c:358): error waiting for event
>>> [mpie...@claudio.ukzn] main (./ui/mpich/mpiexec.c:689): process manager 
>>> error waiting for completion
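
Note that all of these paths point at the MPI runtime bundled with the Intel
compiler (.../composer_xe_2013.0.079/mpirt/...), not at Open MPI, which is
consistent with the wrong mpirun being picked up from PATH. Assuming the Open
MPI build went into /usr/local, the quickest check is:

    type -a mpirun                           # list every mpirun on the PATH, in search order
    /usr/local/bin/mpirun -np 4 ./hello_c.x  # bypass PATH entirely with the absolute path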
>>> 
>>> Before that, there was an additional error, since the file mpivars.sh 
>>> was also missing from /opt/intel/composer_xe_2013.0.079/mpirt/bin/intel64/.
>>> I managed to create one myself, and it worked:
>>> 
>>> #!/bin/bash
>>> 
>>> # Prepend /usr/local/bin and /usr/local/lib to PATH and LD_LIBRARY_PATH
>>> # if they are not already present.
>>> if [ -z "`echo $PATH | grep /usr/local/bin`" ]; then
>>>     export PATH=/usr/local/bin:$PATH
>>> fi
>>> 
>>> if [ -z "`echo $LD_LIBRARY_PATH | grep /usr/local/lib`" ]; then
>>>     if [ -n "$LD_LIBRARY_PATH" ]; then
>>>         export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
>>>     else
>>>         export LD_LIBRARY_PATH=/usr/local/lib
>>>     fi
>>> fi
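
(A script like this is normally read with the shell's source/dot builtin rather
than executed, e.g.

    . /opt/intel/composer_xe_2013.0.079/mpirt/bin/intel64/mpivars.sh

but, as Ralph points out above, the real problem is which mpirun gets launched,
not this file.)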
>>> 
>>> I do not have any clue how to generate the pmi_proxy file.
>>> 
>>> Thank you in advance for your help!
>>> 