I finally have the opportunity to run the IMB 3.2 benchmark over Myrinet. I am
running on a cluster of 16 Xserve nodes connected with Myrinet; 15 of them are
8-core machines and the last one is a 4-core one, which gives a limit of 124
processes.
I have run the tests with the bynode option, so from 2 up to 16 processes each
test always runs 1 process per node.
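For reference, a sketch of the kind of command line I mean (assuming the
standard IMB-MPI1 binary from IMB 3.2 and Open MPI's mpirun; the paths and the
process count are just illustrative):

    mpirun --bynode -np 16 ./IMB-MPI1 PingPong PingPing Sendrecv Exchange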

The following tests (PingPong, PingPing, Sendrecv, Exchange) all show a sharp
drop in performance at the 64 KB message size.
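To zoom in on that region I can rerun with IMB's -msglen switch, which reads
message lengths from a file, one per line (the file name and the particular
lengths below are just my own illustrative choice):

    printf '%s\n' 32768 49152 65536 81920 131072 > lengths.txt
    mpirun --bynode -np 2 ./IMB-MPI1 -msglen lengths.txt PingPong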

Any idea where I should look for the cause?

Ricardo

On Fri, Mar 20, 2009 at 7:32 PM, Ricardo Fernández-Perea <
rfernandezpe...@gmail.com> wrote:

> It is the F-2M, but I think for inter-node communication they should be
> equivalent.
> I have not run an MPI pingpong benchmark yet.
>
> The truth is I have a 10-day trip coming up next week, and I thought I could
> take some optimization "light reading" with me, so I know what I must look
> for when I come back.
>
> Ricardo
>
>
> On Fri, Mar 20, 2009 at 5:10 PM, Scott Atchley <atch...@myri.com> wrote:
>
>> On Mar 20, 2009, at 11:33 AM, Ricardo Fernández-Perea wrote:
>>
>>  These are the results initially:
>>> Running 1000 iterations.
>>>   Length   Latency(us)    Bandwidth(MB/s)
>>>        0       2.738          0.000
>>>        1       2.718          0.368
>>>        2       2.707          0.739
>>> <snip>
>>>  1048576    4392.217        238.735
>>>  2097152    8705.028        240.913
>>>  4194304   17359.166        241.619
>>>
>>> with  export MX_RCACHE=1
>>>
>>> Running 1000 iterations.
>>>   Length   Latency(us)    Bandwidth(MB/s)
>>>        0       2.731          0.000
>>>        1       2.705          0.370
>>>        2       2.719          0.736
>>> <snip>
>>>  1048576    4265.846        245.807
>>>  2097152    8491.122        246.982
>>>  4194304   16953.997        247.393
>>>
>>
>> Ricardo,
>>
>> I am assuming that these are PCI-X NICs. Given the latency and bandwidth,
>> are these "D" model NICs (see the top of the mx_info output)? If so, that
>> looks about as good as you can expect.
>>
>> Have you run Intel MPI Benchmark (IMB) or another MPI pingpong type
>> benchmark?
>>
>> Scott
>>
>
>
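As for Scott's question about the NIC model: as he notes, it appears near the
top of the mx_info output, so (assuming the MX tools are in the path on the
nodes) a quick check is simply:

    mx_info | head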

Attachments: IMB1-Exchange.results, IMB1-PingPing.results,
IMB1-PingPong.results, IMB1-Sendrecv.results
