To sum up and give an update:
The extended communication times when Open MPI processes use shared-memory
communication are caused by the Open MPI session directory residing on
the network via NFS.
The problem is resolved by setting up a ramdisk on each diskless node
or by mounting a tmpfs. By settin
> Ah, that could do it. Open MPI's shared memory files are under /tmp. So if
> /tmp is NFS, you could get extremely high latencies because of dirty page
> writes out through NFS.
>
> You don't necessarily have to make /tmp disk-full -- if you just make OMPI's
> session directories go into a
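A minimal sketch of the fix described above, assuming Open MPI's `orte_tmpdir_base` MCA parameter is available (present in the 1.3/1.4 series); the mount point, size, and application name are illustrative:

```shell
# Create a mount point and mount a tmpfs on each diskless node,
# so the Open MPI session directory lives in RAM instead of on
# the NFS-mounted /tmp.
mkdir -p /mnt/ompi-tmp
mount -t tmpfs -o size=64m tmpfs /mnt/ompi-tmp

# Point the session directory at the tmpfs when launching.
mpirun --mca orte_tmpdir_base /mnt/ompi-tmp -np 4 ./my_app
```

Setting the environment variable TMPDIR on each node to the tmpfs mount point would have the same effect.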
Quoting Ashley Pittman:
On 10 Apr 2010, at 04:51, Eugene Loh wrote:
Why is shared-memory performance about four orders of magnitude
slower than it should be? The processes are communicating via
memory that's shared by having the processes all mmap the same file
into their address spac
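Since the shared-memory files are mmap'ed out of the session directory, a quick generic diagnostic (not from the thread itself) is to check which filesystem type backs /tmp; "nfs" here means the backing files incur NFS dirty-page writeback:

```shell
# Print the filesystem type backing /tmp (GNU coreutils stat).
# "tmpfs" or a local disk type is fine; "nfs" reproduces the
# pathological shared-memory latencies discussed in this thread.
stat -f -c %T /tmp
```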
Sorry for the late reply. Unfortunately I am not a full-time
administrator, and I will be at a conference next week, so please
be patient with my replies.
On 4/7/2010 6:56 PM, Eugene Loh wrote:
> Oliver Geisler wrote:
>
>> Using netpipe and comparing tcp and mpi c
On 4/6/2010 5:09 PM, Jeff Squyres wrote:
> On Apr 6, 2010, at 6:04 PM, Oliver Geisler wrote:
>
>> Further our benchmark started with "--mca btl tcp,self" runs with short
>> communication times, even using kernel 2.6.33.1
>
> I'm not sure what this statem
uration?
Thanks, Jeff, for insisting upon testing network performance.
Thanks all others, too ;-)
oli
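The `--mca btl` comparison mentioned above can be run explicitly; a sketch assuming NetPIPE's MPI binary is built as `NPmpi` (binary name and process placement are illustrative):

```shell
# Shared-memory path: two ranks on one node via the sm BTL.
mpirun -np 2 --mca btl sm,self ./NPmpi

# TCP path for comparison, same placement; in this thread the
# TCP runs were fast while the sm runs were pathologically slow.
mpirun -np 2 --mca btl tcp,self ./NPmpi
```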
> On Apr 6, 2010, at 11:51 AM, Oliver Geisler wrote:
>
>> On 4/6/2010 10:11 AM, Rainer Keller wrote:
>>> Hello Oliver,
>>> Hmm, this is really a teaser...
>>>
also be bad.
>
That could make sense. With kernel 2.6.24, a major change to the
driver modules for Intel PCI Express network cards seems to have been
introduced.
Does Open MPI use TCP communication even if all processes are on the
same local node?
>
> On Apr 6, 2010, at 11:51 AM, Oliver Geisler wrote:
>
On 4/1/2010 12:49 PM, Rainer Keller wrote:
> On Thursday 01 April 2010 12:16:25 pm Oliver Geisler wrote:
>> Does anyone know a benchmark program, I could use for testing?
> There's an abundance of benchmarks (IMB, netpipe, SkaMPI...) and performance
> analysis tools (Scala
one. All with the same result:
~20% CPU wait and much longer overall computation times.
Thanks for the idea ...
Every input is helpful.
Oli
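Of the benchmarks Rainer lists above, the Intel MPI Benchmarks ping-pong test is a common starting point; a sketch assuming IMB is built as `IMB-MPI1`:

```shell
# Latency/bandwidth between two ranks on the same node; with the
# session directory on NFS, this is where the slow shared-memory
# numbers described in this thread would show up.
mpirun -np 2 ./IMB-MPI1 PingPong
```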
> Just an idea...
>
> Regards,
> Rainer
>
> On Tuesday 06 April 2010 10:07:35 am Oliver Geisler wrote:
>> Hello Devel-List,
>
Hello Devel-List,
I am somewhat at a loss about this matter. I already posted to the
users list; in case you don't read the users list, I am posting here as well.
This is the original posting:
http://www.open-mpi.org/community/lists/users/2010/03/12474.php
Short:
Switching from kernel 2.6.23 to 2.6.24 (