I have been using termios.h to detect a keypress and then deal with it
inside of a loop. When porting it over to MPI and running under mpirun,
the loop now pauses, waiting for a carriage return, while checking for a
keypress.
I then tried ncurses with the nodelay() function, and the loop
Dear OpenMPI team,
I was trying to pull down the latest nightly tarball from the Open MPI
web site.
Clicking on "Download" and then "Nightly snapshots" points to the page
http://www.open-mpi.org/nightly/
which gives 3 links:
"1.0.x series"
"1.1.x series"
"Trunk"
which all point to the main
Hello all,
I've read the thread "OpenMPI debugging support"
(http://www.open-mpi.org/community/lists/users/2005/11/0370.php) and it
looks like there is improved debugging support for debuggers other than
TV in the 1.1 series.
I'd like to use Portland Group's pgdbg. It's a parallel debugger,
On Tue, 2006-06-13 at 10:51 -0700, Ken Mighell wrote:
> On May 6, 2006, Dries Kimpe reported a solution to getting
> pnetcdf to compile correctly with OpenMPI.
> A patch was given for the file
> mca/io/romio/romio/adio/common/flatten.c
> Has this fix been implemented in the nightly series?
Yes,
Hi, I am not sure if it's a real issue or not, but I ran a user program
that calls MPI_Comm_spawn to launch on a remote node. The parent
processes launched via ssh with no problem, but when the child
processes tried to launch, I got a message saying that orted is not
found. I've only set my
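For what it's worth, an "orted not found" error when spawning on a remote node usually means the remote non-interactive shell's PATH does not include the Open MPI bin directory. A hedged sketch of two common workarounds (the install prefix /opt/openmpi and the program name ./spawner are placeholders, not taken from the original message):

```shell
# Option 1: tell mpirun where Open MPI lives on the remote nodes,
# so it can find orted without relying on the remote PATH:
mpirun --prefix /opt/openmpi -np 1 ./spawner

# Option 2: make the remote non-interactive shell find orted itself,
# e.g. in the remote node's ~/.bashrc:
export PATH=/opt/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH
```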
On May 6, 2006, Dries Kimpe reported a solution to getting
pnetcdf to compile correctly with OpenMPI.
A patch was given for the file
mca/io/romio/romio/adio/common/flatten.c
Has this fix been implemented in the nightly series?
-Ken Mighell
Well, if there is no reuse in the application buffers, then the two
approaches will give the same results. Because of our pipelined
protocol, it might happen that we reach even better performance
for large messages. If there is buffer reuse, the mpich-gm approach
will lead to better
I'll provide new numbers soon with the --mca mpi_leave_pinned 1 option.
I'm curious: how does this affect real application performance? This,
of course, is a synthetic test using NetPipe. What about regular apps
that move decent amounts of data but care more about low latency?
Will that be affected?
Brock Palen
Unlike mpich-gm, Open MPI does not keep the memory pinned by default.
You can force this by adding the "--mca mpi_leave_pinned 1" to your
mpirun command or by adding it into the Open MPI configuration file
as specified on the FAQ (section performance). I think that should be
the main reason
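As a sketch of both routes just described (the application name ./my_app is a placeholder; the per-user parameter file path follows the Open MPI FAQ, but may vary by installation):

```shell
# One-off, on the mpirun command line:
mpirun --mca mpi_leave_pinned 1 -np 2 ./my_app

# Or persistently, via the per-user MCA parameter file:
echo "mpi_leave_pinned = 1" >> $HOME/.openmpi/mca-params.conf
```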
Good to know -- thanks!
> -----Original Message-----
> From: users-boun...@open-mpi.org
> [mailto:users-boun...@open-mpi.org] On Behalf Of Brock Palen
> Sent: Tuesday, June 13, 2006 10:18 AM
> To: Open MPI Users
> Subject: Re: [OMPI users] Errors with MPI_Cart_create
>
> After a lot of work,
After a lot of work, the same problem occurred with lam-7.1.1. I
have passed this on to the vasp devs as best I could. It does not
appear to be an OMPI problem.
Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985
On Jun 13, 2006, at 10:11 AM, Jeff Squyres
Hi Brock,
You may wish to try running with the runtime option:
-mca mpi_leave_pinned 1
This turns on registration caching and such.
- Galen
On Jun 13, 2006, at 8:01 AM, Brock Palen wrote:
I ran a test using openmpi-1.0.2 on OSX vs mpich-1.2.6 from
Myricom, and I get lacking results from
This type of error *usually* indicates a programming error, but in this
case, it's so non-specific that it's not entirely clear that this is the
case.
The VASP code does not appear to be entirely open, so I can't try this
myself. Can you try running VASP through a debugger and putting a
breakpoint in
I ran a test using openmpi-1.0.2 on OSX vs mpich-1.2.6 from Myricom,
and I get lacking results from OMPI:
at one point there is a small drop in bandwidth for both MPI libs,
but Open MPI does not recover like MPICH, and further on you see a
decrease in bandwidth for OMPI on gm.
I have
Hi Brian,
Thanks, that helps!
Imran
Brian Barrett wrote: On Sun, 2006-06-11 at 04:26 -0700,
imran shaik wrote:
> Hi,
> I sometimes get this error message:
> "2 additional processes aborted, possibly by Open MPI"
>
> Sometimes 2 processes, sometimes even more. Is it