Teige, Scott W wrote:
Greetings,

I have observed strange behavior with an application running with
OpenMPI 1.2.8, OFED 1.2. The application runs in two "modes", fast
and slow: the execution time is either within one second of 108 sec.
or within one second of 67 sec. My cluster has 1 Gig Ethernet and
DDR InfiniBand, so the byte transfer layer (BTL) is a prime suspect.

So, is there a way to determine (from my application code) which
BTL is really being used?
You may specify:

  --mca btl openib,sm,self

and OpenMPI will use InfiniBand plus shared memory for communication, or:

  --mca btl tcp,sm,self

and OpenMPI will use TCP plus shared memory for communication.
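As a sketch, the two settings above would be passed to mpirun like this (the process count and application name are placeholders, not from the original report):

```shell
# Force the InfiniBand BTL (plus shared-memory and loopback transports):
mpirun --mca btl openib,sm,self -np 16 ./app

# Force the TCP BTL over Ethernet instead:
mpirun --mca btl tcp,sm,self -np 16 ./app
```

Running the application once with each setting and comparing wall-clock times should reveal which transport the "fast" and "slow" modes correspond to.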

Thanks,
Pasha
