Hi Troy,
Tim and I would like to discuss this with you as well. One thing I
would ask: are you using the btl_mvapi_leave_pinned=1 option?
Otherwise it is not an apples-to-apples comparison.
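For reference, here is a minimal sketch of how that parameter can be
passed on the mpirun command line (the NPmpi binary name is just an
assumption for a NetPIPE run):

  mpirun -np 2 --mca btl mvapi,self --mca btl_mvapi_leave_pinned 1 NPmpi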
- Galen
On Aug 24, 2005, at 8:21 PM, Troy Benjegerdes wrote:
I have some NetPIPE graphs of Open MPI and MVAPICH on OpenIB gen2 on
Opteron systems...
Hello Troy,
Can you forward the graphs? From the error output, it doesn't look
like it's actually using IB; it may be using TCP instead.
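One quick way to check (a sketch; assumes the openib component was
built) is to restrict the run to the IB BTL so it fails loudly rather
than silently falling back to TCP:

  mpirun -np 2 --mca btl openib,self ./NPmpi

Equivalently, --mca btl ^tcp excludes TCP while leaving the rest of
the BTL selection alone.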
I won't be back in the office until Friday, but I could give you a
call then if you'd like.
Regards,
Tim
> I have some NetPIPE graphs of Open MPI and MVAPICH on OpenIB gen2
> on Opteron systems...
On Tue, Aug 16, 2005 at 12:25:32PM -0400, Jeff Squyres wrote:
> Processor affinity is now implemented. You must ask for it via the MCA
> param "mpi_paffinity_alone". If this parameter is set to a nonzero
> value, OMPI will assume that its job is alone on the nodes that it is
> running on, and,
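For concreteness, setting that parameter would look something like
the following sketch (./a.out is just a placeholder executable):

  mpirun -np 4 --mca mpi_paffinity_alone 1 ./a.out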
I have some NetPIPE graphs of Open MPI and MVAPICH on OpenIB gen2 on
Opteron systems, one with PCI-X IB cards and the other with
PCI-Express DDR IB cards.
I'd like to chat with someone who can fill me in a bit on what's
going on with performance, and how the BTL for IB is implemented. One
thing I'd l
Interesting news...
Jim Barker installed Open MPI on one of our visualization teams'
InfiniBand clusters. They successfully built ParaView and ran it to
drive visualization on a 3x3 "power wall" tiled display. ParaView has
a history of breaking MPIs, so I'm very happy that this went so
smoothly.
Yes, this happened a week or three ago.
Two ways to fix:
1. cd ompi/mca/ptl/sm/.deps
   foreach file (`ls`)
     echo >! $file
   end
   (assuming a csh-flavored shell)
Then you can make with no problems.
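If you're on a Bourne-style shell instead, a one-line sketch with the
same effect (overwrite each stale .deps file so make regenerates it):

  for f in *; do echo > "$f"; done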
2. cd ompi/mca/ptl
rm -rf sm
svn up
Then you'll need to re-autogen / configure / etc.
On Aug 24, Troy Benjegerdes wrote:
Is someone in the process of moving around the ptl/sm code? My build of
SVN revision 7005 fails there.
Otherwise, any tips on testing the OpenIB BTL? With any luck, I'll
have results for Open MPI and MVAPICH later today.
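One thing worth checking first (assuming a current build) is whether
the openib component actually got built:

  ompi_info | grep openib

ompi_info --param btl openib should also list that BTL's MCA
parameters.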