Dear teachers:
In every MPI implementation there is a command named mpirun, and
correspondingly there is a source file called mpirun.c (at least in
LAM/MPI), but I cannot find this file in Open MPI. Can you tell me how
this command is produced in Open MPI?
Hi Yaohui,
can you tell me the versions of your gcc and g++ compilers?
It seems to me that your g++ compiler is older (< 4.2) than your gcc compiler.
If that's true, then we have to enhance the VT configure script so that the
availability of '-fopenmp' for g++ is tested as well.
Matthias
On Monday 05 Apri
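For reference, a configure probe of this kind usually just tries to compile and
run a trivial OpenMP program with the flag in question and treats any failure as
"flag not supported". A minimal C sketch of such a test program (the file name is
only a placeholder, not VT's actual conftest; the g++ check would build the
analogous C++ source with g++ -fopenmp) might look like:

  /* openmp-probe.c: trivial program a configure check might try to
   * build with "-fopenmp"; if it fails to compile, link, or run,
   * the flag is assumed to be unsupported by that compiler. */
  #include <omp.h>
  #include <stdio.h>

  int main(void)
  {
      int threads = 0;
      #pragma omp parallel
      {
          #pragma omp single
          threads = omp_get_num_threads();
      }
      printf("OpenMP ok, %d thread(s)\n", threads);
      return 0;
  }

Building this once with "gcc -fopenmp" and once (as C++) with "g++ -fopenmp"
would show whether only one of the two compilers rejects the flag.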
On Apr 6 2010, luyang dong wrote:
In every MPI implementation there
is a command named mpirun, and correspondingly there is a source
file called mpirun.c (at least in LAM/MPI), but I cannot find this file
in Open MPI. Can you tell me how this command is produced in Open MPI?
N.M. Maclaren wrote:
On Apr 6 2010, luyang dong wrote:
In every MPI implementation there is a command
named mpirun, and correspondingly there is a source file called
mpirun.c (at least in LAM/MPI), but I cannot find this file in
Open MPI. Can you tell me how thi
Hello Devel-List,
I am a bit at a loss with this matter. I already posted to the users
list; in case you don't read the users list, I am posting here as well.
This is the original posting:
http://www.open-mpi.org/community/lists/users/2010/03/12474.php
In short:
Switching from kernel 2.6.23 to 2.6.24 (
Hello Oliver,
Hmm, this is really a teaser...
I haven't seen such drastic behavior, and haven't read of any on the list.
One thing, however, that might interfere is process binding.
Could you make sure that processes are not bound to cores (the default in 1.4.1),
e.g. with mpirun --bind-to-none?
Just an
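For reference, whether ranks actually end up pinned to cores can also be checked
directly on the nodes. The following Linux-specific sketch (not part of Open MPI;
the file name is only a placeholder) prints the set of CPUs the calling process
is allowed to run on, and can be started under mpirun next to the real job:

  /* affinity-check.c: print the CPU affinity mask of the calling
   * process (Linux-specific diagnostic sketch). */
  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
      cpu_set_t set;
      int cpu;

      CPU_ZERO(&set);
      if (sched_getaffinity(0, sizeof(set), &set) != 0) {
          perror("sched_getaffinity");
          return 1;
      }
      printf("pid %d may run on CPUs:", (int)getpid());
      for (cpu = 0; cpu < CPU_SETSIZE; cpu++)
          if (CPU_ISSET(cpu, &set))
              printf(" %d", cpu);
      printf("\n");
      return 0;
  }

Run as e.g. "mpirun -np 4 ./affinity-check": if every process reports the full
set of CPUs, no binding is in effect; one CPU per process indicates pinning.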
On 4/6/2010 10:11 AM, Rainer Keller wrote:
> Hello Oliver,
> Hmm, this is really a teaser...
> I haven't seen such drastic behavior, and haven't read of any on the list.
>
> One thing, however, that might interfere is process binding.
> Could you make sure that processes are not bound to cores (
On 4/1/2010 12:49 PM, Rainer Keller wrote:
> On Thursday 01 April 2010 12:16:25 pm Oliver Geisler wrote:
>> Does anyone know a benchmark program I could use for testing?
> There's an abundance of benchmarks (IMB, netpipe, SkaMPI...) and performance
> analysis tools (Scalasca, Vampir, Paraver, Op
Sorry for the delay -- I just replied on the user list -- I think the first
thing to do is to establish baseline networking performance and see if that is
out of whack. If the underlying network is bad, then MPI performance will also
be bad.
On Apr 6, 2010, at 11:51 AM, Oliver Geisler wrote:
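For reference, besides NetPIPE, a crude two-rank ping-pong such as the sketch
below (names and sizes are only placeholders, and it is no substitute for a real
benchmark) gives a quick MPI latency figure to hold against the raw TCP path:

  /* pingpong.c: rough two-rank ping-pong; run with "mpirun -np 2 ./pingpong". */
  #include <mpi.h>
  #include <stdio.h>
  #include <string.h>

  #define NITER   1000
  #define MSGSIZE 4096

  int main(int argc, char **argv)
  {
      char buf[MSGSIZE];
      int rank, i;
      double t0, t1;

      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      memset(buf, 0, sizeof(buf));

      MPI_Barrier(MPI_COMM_WORLD);
      t0 = MPI_Wtime();
      for (i = 0; i < NITER; i++) {
          if (rank == 0) {
              MPI_Send(buf, MSGSIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
              MPI_Recv(buf, MSGSIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                       MPI_STATUS_IGNORE);
          } else if (rank == 1) {
              MPI_Recv(buf, MSGSIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                       MPI_STATUS_IGNORE);
              MPI_Send(buf, MSGSIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
          }
      }
      t1 = MPI_Wtime();

      if (rank == 0)
          printf("avg one-way time for %d-byte messages: %.2f usec\n",
                 MSGSIZE, (t1 - t0) / (2.0 * NITER) * 1e6);

      MPI_Finalize();
      return 0;
  }

Comparing its one-way time for 4096-byte messages with the NPtcp number for the
same size shows quickly whether the slowdown is in MPI or in the network itself.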
On 4/6/2010 2:54 PM, Jeff Squyres wrote:
> Sorry for the delay -- I just replied on the user list -- I think the first
> thing to do is to establish baseline networking performance and see if that
> is out of whack. If the underlying network is bad, then MPI performance will
> also be bad.
>
On Apr 6, 2010, at 4:29 PM, Oliver Geisler wrote:
> > Sorry for the delay -- I just replied on the user list -- I think the first
> > thing to do is to establish baseline networking performance and see if that
> > is out of whack. If the underlying network is bad, then MPI performance
> > will
On 4/6/2010 2:54 PM, Jeff Squyres wrote:
> Sorry for the delay -- I just replied on the user list -- I think the first
> thing to do is to establish baseline networking performance and see if that
> is out of whack. If the underlying network is bad, then MPI performance will
> also be bad.
>
>
On Apr 6, 2010, at 6:04 PM, Oliver Geisler wrote:
> Using netpipe and comparing tcp and mpi communication I get the
> following results:
>
> TCP is much faster than MPI, approx. by a factor of 12;
> e.g. a packet size of 4096 bytes is delivered in
> 97.11 usec with NPtcp and
> 15338.98 usec with NPmpi
> or
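For reference, taking the quoted 4096-byte figures at face value, the effective
throughput works out to roughly 4096 bytes / 97.11 usec, i.e. about 42 MB/s over
raw TCP, versus 4096 bytes / 15338.98 usec, i.e. about 0.27 MB/s over MPI, so at
this particular message size the gap is even larger than a factor of 12.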
I see your point, but the cons sink the idea for me.
How about a compromise -- write up a scripty-foo to automatically download and
build some of the more common benchmarks? This still makes it a trivial
exercise for the user, but it avoids us needing to bundle already-popular
benchmarks