Not exactly. I have 16-core nodes. Even if I run all 9 ranks on the same node it fails (with --mca btl sm,self). I also tried running on different nodes (3 nodes, 3 ranks on each node) with openib and tcp - same effect. Also, as I wrote in another message, I could see this effect in VirtualBox with CentOS 5.3 (1 core on the guest, 4 cores on the host, no network). So possibly this is something OS-specific? I will try on Ubuntu and share the results.
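For reference, the run configurations described above would look roughly like the following (the hostnames node1..node3 and the binary name ./allgather_test are placeholders, not from the original report):

```shell
# All 9 ranks on a single node, restricted to the shared-memory and self BTLs
mpirun -np 9 --mca btl sm,self ./allgather_test

# 3 nodes, 3 ranks per node, forcing the TCP BTL
mpirun -np 9 --host node1,node2,node3 --npernode 3 \
       --mca btl tcp,self ./allgather_test

# Same layout over InfiniBand (openib BTL)
mpirun -np 9 --host node1,node2,node3 --npernode 3 \
       --mca btl openib,self ./allgather_test
```

Since the failure reproduces with sm,self on one node and with both tcp and openib across nodes, the BTL selection itself does not look like the variable here.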
Regards,
Andrew

> -----Original Message-----
> From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
> Behalf Of Peter Kjellstrom
> Sent: Wednesday, May 25, 2011 9:03 PM
> To: us...@open-mpi.org
> Subject: Re: [OMPI users] MPI_Allgather with derived type crash
>
> Would 8 happen to be the number of cores you have per node so what
> we're seeing is: single node OK, multi node FAIL?
>
> If so what kind of inter node network are you (trying to) use(ing)?
>
> /Peter