I just did, at least the ping-pong; the results are slightly worse and
present the same drop at 64K. See attachment.
A comment: to run with the btl mx I need to use --mca btl mx,sm,self -mca
mtl ^mx, or I get an mx_open_endpoint failure due to Myrinet being busy (I
have already increased the number of endpoints).
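For reference, the two ways of selecting MX discussed in this thread can be sketched as follows. The flags are the ones quoted in the thread; the hostfile and the IMB-MPI1 binary are the ones from the command shown later, so treat this as a sketch of the invocations, not a verified configuration:

```shell
# MX via the MTL (CM PML) - the path suggested by Bogdan below
mpirun --bynode --mca pml cm --mca mtl mx -np 124 -hostfile hostfile IMB-MPI1 PingPong

# MX via the BTL (OB1 PML) - the MX MTL must be disabled (^mx), otherwise
# both layers try to open MX endpoints and mx_open_endpoint fails with a
# "Myrinet busy" error, as described above
mpirun --bynode --mca btl mx,sm,self -mca mtl ^mx -np 124 -hostfile hostfile IMB-MPI1 PingPong
```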
On May 4, 2009, at 10:54 AM, Ricardo Fernández-Perea wrote:
I finally have the opportunity to run the IMB-3.2 benchmark over Myrinet.
I am running on a cluster of 16 Xserve nodes connected with Myrinet;
15 of them are 8-core and the last one is a 4-core, giving
a limit of 124 processes.
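The 124-process limit follows from the node mix described above; a quick check:

```shell
# 15 eight-core nodes plus 1 four-core node
echo $(( 15 * 8 + 1 * 4 ))   # prints 124
```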
I'm sorry if I didn't say it before:
the tests were run with commands like the following
/opt/openmpi/bin/mpirun --bynode --mca pml cm --mca mtl mx -np 124 -hostfile
hostfile IMB-MPI1 [testname] 1>IMB1-[testname].results 2>&1
Ricardo
On Mon, May 4, 2009 at 5:36 PM, Bogdan Costescu <bogdan.coste...> wrote:
On Mon, 4 May 2009, Ricardo Fernández-Perea wrote:
Any idea where I should look for the cause?
Can you try adding to the mpirun/mpiexec command line '--mca mtl
mx --mca pml cm' to specify usage of the non-default MX MTL ? (sorry
if you already do, I haven't found it in your e-mail)
--
Bogdan
I have run the test with the --bynode option, so from the 2- up to the
16-process case ...
It is the F-2M, but I think for inter-node communication they should be
equivalent.
I have not run an MPI ping-pong benchmark yet.
The truth is I have a 10-day trip coming next week and I thought I could
take some optimization "light reading" with me,
so I know what I must look for when I come back.
On Mar 20, 2009, at 11:33 AM, Ricardo Fernández-Perea wrote:
These are the results initially:
Running 1000 iterations.

Length    Latency(us)   Bandwidth(MB/s)
0         2.738         0.000
1         2.718         0.368
2         2.707         0.739
...
1048576   4392.217
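The bandwidth column in these ping-pong results is just length divided by latency (1 byte/us = 1 MB/s, matching the 1-byte row: 1/2.718 ≈ 0.368), so a value lost from the last row can be recovered. A quick sanity check, assuming the same units as the table:

```shell
# bandwidth (MB/s) = length (bytes) / latency (us)
awk 'BEGIN { printf "%.3f MB/s\n", 1048576 / 4392.217 }'   # prints 238.735 MB/s
```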
On Fri, Mar 20, 2009 at 2:21 PM, Scott Atchley wrote:
> On Mar 20, 2009, at 5:59 AM, Ricardo Fernández-Perea wrote:
>
> Hello,
>>
>> I am running DL_POLY on various Xserve 8-processor machines with a
>> Myrinet network, using mx-1.2.7.
>>
>> While I keep in the same node the process scales reasonably well
On Mar 20, 2009, at 5:59 AM, Ricardo Fernández-Perea wrote:
Hello,
I am running DL_POLY on various Xserve 8-processor machines with a Myrinet
network, using mx-1.2.7.
While I keep the processes on the same node they scale reasonably well, but
the moment I hit the network ...
I would like to try to maximize the mx network before trying to touch the
program code.
Is the ...