Mahmood,
A well-established alternative to Intel's IMB is the OSU micro-benchmarks
from Ohio State University: http://mvapich.cse.ohio-state.edu/benchmarks/
MTT can be used to automatically build and test Open MPI.
MTT itself only contains a few trivial test sets and uses external test suites.
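As a rough sketch, the OSU suite builds with Open MPI's compiler wrappers and the point-to-point tests can then be launched directly; the hostfile name below is a placeholder, and the exact paths of the benchmark binaries depend on the OSU release you build:

    ./configure CC=mpicc CXX=mpicxx && make
    mpirun -np 2 --hostfile myhosts ./osu_latency
    mpirun -np 2 --hostfile myhosts ./osu_bw

For Intel's IMB, the rough equivalent is to run only the benchmarks of interest, e.g. "mpirun -np 16 ./IMB-MPI1 PingPong Allreduce" (the process count is illustrative).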
Hi,
Is there any small benchmark for performance measurements? I mean a test
that utilizes the number of CPUs given to the MPI run, for comparison.
I want to compare two kernel versions on one system only,
not across different platforms.
I know the Intel MPI benchmark, but I would like to know if the
Richard and I iterated more off list:
Short version: the correct "exclude" form for Richard is:
--mca btl_tcp_if_exclude virbr0,lo
More detail: I totally forgot that while OMPI excludes loopback devices by
default, if you override the value of btl_tcp_if_exclude, you have to list the
loopback device yourself if you still want loopback excluded.
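Put together, a full command line along those lines might look like the following; the hostfile name, process count and executable are placeholders:

    mpirun -np 4 --hostfile myhosts --mca btl_tcp_if_exclude virbr0,lo ./ring_c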
Tom might be correct; I checked my system. Using rpm -qa, I did not find Xen,
but I did find libvirt.
At 2012-09-25 21:38:23,"Tom Bryan (tombry)" wrote:
>On 9/25/12 9:10 AM, "Jeff Squyres (jsquyres)" wrote:
>
>>>problem, so i fixed it using "--mca btl_tcp_if_include bond0" because I
>>>know this is
Jeff,
It was a typo in my last post; I did use "--mca btl_tcp_if_exclude virbr0" and
it did not work.
At 2012-09-25 21:10:24,"Jeff Squyres" wrote:
>On Sep 25, 2012, at 2:56 PM, Richard wrote:
>
>> thanks a lot !
>> using "--mca btl_if_exclude virbr0" does not work, but you have pointed out
On 9/25/12 9:10 AM, "Jeff Squyres (jsquyres)" wrote:
>>problem, so i fixed it using "--mca btl_tcp_if_include bond0" because I
>>know this is the high speed network interface I should use on each node.
>
>Glad it works for you!
>
>If you're not using those interfaces (they might be related to Xen
On Sep 25, 2012, at 2:56 PM, Richard wrote:
> thanks a lot !
> using "--mca btl_if_exclude virbr0" does not work, but you have pointed out
> the
Ya, sorry -- see my second mail, it should be "btl_tcp_if_exclude".
> problem, so i fixed it using "--mca btl_tcp_if_include bond0" because I know
thanks a lot!
using "--mca btl_if_exclude virbr0" does not work, but you have pointed out the
problem, so I fixed it using "--mca btl_tcp_if_include bond0" because I know
this is the high-speed network interface I should use on each node.
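For reference, the complete working command line would be something like this; the hostfile name, process count and executable are placeholders:

    mpirun -np 3 --hostfile myhosts --mca btl_tcp_if_include bond0 ./ring_c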
At 2012-09-25 20:30:16,"Jeff Squyres" wrote:
>On Sep
On Sep 25, 2012, at 2:28 PM, Jeff Squyres wrote:
> mpirun --mca btl_if_exclude virbr0 ...
Gah; sorry, that should be:
mpirun --mca btl_tcp_if_exclude virbr0 ...
I forgot the "tcp" there in the middle.
--
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.ci
Ah, I see the problem. See this FAQ entry:
http://www.open-mpi.org/faq/?category=tcp#tcp-selection
You want to exclude the virbr0 interfaces on your nodes; they're local-only
interfaces (that's where the 192.168.122.x addresses are coming from) that,
IIRC, have something to do with virtualization.
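If I remember right, that FAQ entry also notes that btl_tcp_if_include and btl_tcp_if_exclude are mutually exclusive (set only one of them) and that they accept CIDR subnet notation as well as interface names, so an include-by-subnet form such as the following should also work (the subnet here is just an example):

    mpirun --mca btl_tcp_if_include 10.10.0.0/16 ./ring_c

You can check which interface is carrying the 192.168.122.x addresses on each node with "ip -4 addr show" or "/sbin/ifconfig".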
I have set up a small cluster with 3 nodes, named A, B and C respectively.
I tested the ring_c.c program in the examples. For debugging purposes,
I have added some print statements as follows in the original ring_c.c:

    60    printf("rank %d, message %d,start===\n", rank, message);
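For readers without the examples directory handy, the loop in question looks roughly like this. It is a reconstruction in the spirit of examples/ring_c.c with the extra debug printf added at the top of the receive loop (line numbers vary by release), not Richard's exact file:

    #include <stdio.h>
    #include "mpi.h"

    int main(int argc, char *argv[])
    {
        int rank, size, next, prev, message, tag = 201;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Neighbours in the ring. */
        next = (rank + 1) % size;
        prev = (rank + size - 1) % size;

        /* Rank 0 injects the token that will circulate around the ring. */
        if (0 == rank) {
            message = 10;
            MPI_Send(&message, 1, MPI_INT, next, tag, MPI_COMM_WORLD);
        }

        /* Pass the token around; rank 0 decrements it on each full lap. */
        while (1) {
            MPI_Recv(&message, 1, MPI_INT, prev, tag, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

            /* The added debug statement from the post above. */
            printf("rank %d, message %d,start===\n", rank, message);

            if (0 == rank) {
                --message;
            }
            MPI_Send(&message, 1, MPI_INT, next, tag, MPI_COMM_WORLD);
            if (0 == message) {
                break;
            }
        }

        /* Rank 0 absorbs the final token so no send is left unmatched. */
        if (0 == rank) {
            MPI_Recv(&message, 1, MPI_INT, prev, tag, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }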