To keep this thread updated:

After I posted to the developers list, the community was able to guide
me to a solution to the problem:
http://www.open-mpi.org/community/lists/devel/2010/04/7698.php

To sum up: the problem was the extended communication times while using
the shared memory communication of Open MPI; the details of the fix are
in the linked post.
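Since the sm BTL moves messages through shared-memory copies, raw copy
bandwidth on a node is a useful sanity check when on-node communication
looks slow. A minimal sketch of such a measurement (my own illustration,
not a tool used in this thread; the 64 MiB buffer and 10 repetitions are
arbitrary assumptions):

```python
import time

# Copy a large buffer repeatedly and report the effective bandwidth.
# This approximates what a "memcpy"-style benchmark (e.g. NetPIPE's
# memcpy mode) measures, at Python-level granularity.
SIZE = 64 * 1024 * 1024  # 64 MiB buffer (arbitrary choice)
REPS = 10

src = bytearray(SIZE)
start = time.perf_counter()
for _ in range(REPS):
    dst = bytes(src)  # one full copy of the buffer
elapsed = time.perf_counter() - start

bandwidth = SIZE * REPS / elapsed / 1e9  # GB/s
print(f"copied {REPS} x {SIZE // (1024 * 1024)} MiB in {elapsed:.3f} s "
      f"-> {bandwidth:.2f} GB/s")
```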
On 4/6/2010 2:53 PM, Jeff Squyres wrote:
>
> Try NetPIPE -- it has both MPI communication benchmarking and TCP
> benchmarking. Then you can see if there is a noticeable difference between
> TCP and MPI (there shouldn't be). There's also a "memcpy" mode in netpipe,
> but it's not quite the
> However, reading through your initial description on Tuesday, none of these
> fit: You want to actually measure the kernel time on TCP communication costs.
>
Since the problem also occurs in a single-node configuration, and the MCA
option "btl = self,sm,tcp" is used (so on-node traffic should go through
the sm BTL rather than TCP), I doubt it has to do with TCP.
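To illustrate the kind of raw TCP round-trip measurement a tool like
NetPIPE performs, here is a minimal loopback ping-pong sketch (my own
illustration using only the Python standard library; the 1000-iteration
count and 1-byte payload are arbitrary assumptions):

```python
import socket
import threading
import time

# Measure TCP round-trip latency over loopback by ping-ponging a
# 1-byte payload between a client and a tiny echo server thread.

def echo_server(listener):
    conn, _ = listener.accept()
    conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    with conn:
        while True:
            data = conn.recv(1)
            if not data:
                break
            conn.sendall(data)  # echo the byte straight back

server = socket.socket()
server.bind(("127.0.0.1", 0))  # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

REPS = 1000
start = time.perf_counter()
for _ in range(REPS):
    client.sendall(b"x")
    assert client.recv(1) == b"x"
elapsed = time.perf_counter() - start
client.close()

rtt_us = elapsed / REPS * 1e6
print(f"average loopback TCP round trip: {rtt_us:.1f} us")
```

Comparing such a raw TCP number against MPI-level latency (as NetPIPE
does with its TCP and MPI modes) is what shows whether the overhead is
in the network stack or in the MPI layer.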
Hello,
On Thursday 01 April 2010 12:16:25 pm Oliver Geisler wrote:
> Does anyone know a benchmark program I could use for testing?
There's an abundance of benchmarks (IMB, NetPIPE, SKaMPI, ...) and
performance analysis tools (Scalasca, Vampir, Paraver, Opt, Jumpshot).
However, reading through
You could try the MPI Testing Tool (MTT, http://www.open-mpi.org/projects/mtt/).
2010/4/2 Oliver Geisler
> Does anyone know a benchmark program I could use for testing?
>
Does anyone know a benchmark program I could use for testing?
--
This message has been scanned for viruses and
dangerous content by MailScanner, and is
believed to be clean.
I have tried kernels up to 2.6.33.1 on both architectures (Core2 Duo and
i5) with the same results. The "slow" timings also appear when the
processes are distributed across the 4 cores of one single node.
We use
btl = self,sm,tcp
in
/etc/openmpi/openmpi-mca-params.conf
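For reference, the same BTL selection can also be passed per run on the
command line instead of via the system-wide parameter file; a sketch (the
process count and the ./app binary name are placeholders of mine):

```shell
# One-off selection of the self, shared-memory and TCP BTLs
mpirun --mca btl self,sm,tcp -np 4 ./app
```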
Distributing several processes to
I have a very dim recollection of some kernel TCP issues back in some older
kernel versions -- such issues affected all TCP communications, not just MPI.
Can you try a newer kernel, perchance?
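For anyone reproducing this, the running kernel release can be checked
quickly (a trivial sketch, nothing specific to this thread):

```shell
# Print the running kernel release, e.g. 2.6.33.1
uname -r
```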
On Mar 30, 2010, at 1:26 PM, wrote:
> Hello List,
>
> I
Hello List,
I hope you can help us out on this one, as we have been trying to figure
it out for weeks.

The situation: We have a program capable of splitting into several
processes that are distributed across the nodes of a cluster network
using Open MPI. We were running that system on "older" cluster hardware