I apologise if this is a naive question, but I'm new to this world of 
Beowulf clusters.

I'm using C++/MPI. To get a feel for communication costs, I ran tests 
using mpptest and my own programs.

For two-processor blocking calls, mpptest reports a latency of about 30 
microseconds.

However, when I measure communication times in my own program using a loop 
like the following:

// Synchronise both ranks, then time 5000 one-way sends of 'size'
// ints from rank 0 to rank 1.
MPI_Barrier(MPI_COMM_WORLD);
start = MPI_Wtime();
for (unsigned t = 1; t <= 5000; t++)
{
    if (my_rank == 0)
    {
        MPI_Send(data, size, MPI_INT, 1, tag, MPI_COMM_WORLD);
    }
    else
    {
        MPI_Recv(data, size, MPI_INT, 0, tag, MPI_COMM_WORLD, &status);
    }
}
end = MPI_Wtime();

For size >= 4, I get a latency of about 30 microseconds, as expected. 
However, for size < 4, communication costs increase massively, and the 
latency now appears to be about 1 ms!

Firstly, I assume this isn't normal?

Secondly, can anyone suggest what's going on, or where I can go for more 
information?

Many thanks.

We're using mpich.

Processors are Intel(R) Xeon(TM) CPU 3.60GHz.

Interconnects are Dell PowerConnect 5324 24-port gigabit switches.


_______________________________________________
Beowulf mailing list, [email protected]
To change your subscription (digest mode or unsubscribe) visit 
http://www.beowulf.org/mailman/listinfo/beowulf