On 07/12/13 12:55, Jeff Squyres (jsquyres) wrote:
FWIW: a long time ago (read: many Open MPI / knem versions ago), I did a few benchmarks with knem vs. no knem Open MPI installations. IIRC, I used the typical suspects like NetPIPE, the NPBs, etc. There was a modest performance improvement (I don't remember the numbers offhand); it was a smaller improvement than I had hoped for -- particularly in point-to-point message passing latency (e.g., via NetPIPE).
Jeff, I would turn the question the other way around: are there any penalties when using KNEM? We have a couple of Really Big Nodes (128 cores) with non-huge memory bandwidth (because each is built by coupling four standalone 4-socket nodes). So cutting the memory-bandwidth usage in half on these nodes sounds like a Very Good Thing.
But otherwise we have 1500+ nodes with only 2 sockets and 24 GB of memory each, and we do not want to disturb production on those nodes... (and maintaining different MPI versions for different nodes is clumsy).
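For what it's worth, a single knem-capable Open MPI build may be able to serve both node types, since knem use can typically be toggled at run time via an MCA parameter rather than requiring separate installations. A sketch, assuming an Open MPI 1.5/1.6-series build configured `--with-knem`; the parameter name `btl_sm_use_knem` and the application name `./app` are assumptions here, so check `ompi_info` on your own installation first:

```shell
# Sketch, assuming an Open MPI build configured --with-knem.
# One installation for all nodes; knem is toggled per job at run time.

# Check whether this build knows about knem at all (parameter name may
# differ between Open MPI versions):
ompi_info --param btl sm | grep -i knem

# On the big 128-core nodes: enable knem (use it if the module is loaded)
mpirun --mca btl_sm_use_knem 1 -np 128 ./app

# On the ordinary 2-socket production nodes: force it off, leaving
# behavior identical to a knem-less build
mpirun --mca btl_sm_use_knem 0 -np 24 ./app
```

If that toggle works on your version, the 1500+ production nodes would never touch the knem code path even though the build supports it, which sidesteps the multiple-versions problem.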
Best,
Paul

--
Dipl.-Inform. Paul Kapinos - High Performance Computing,
RWTH Aachen University, Center for Computing and Communication
Seffenter Weg 23, D 52074 Aachen (Germany)
Tel: +49 241/80-24915