[OMPI users] OpenMPI C++ examples of user defined MPI types (inherited classes)?

2006-06-13 Thread Jose Quiroga

Hi everybody,

Can anyone point me to some small C++ examples that show the main idea
of sending/receiving messages containing user-defined MPI datatypes
(for inherited classes)?

Thanks a lot.

JLQ.
MPI and Open MPI newbie.
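
For readers looking for a starting point, here is a minimal sketch of the
usual technique (not from this thread; all names are illustrative). MPI
cannot send C++ objects or inherited classes directly; the standard
approach is to describe the class's plain data members with
MPI_Type_create_struct and transmit that flattened layout. Anything
polymorphic (vtables, pointers) has to be serialized by hand.

#include <mpi.h>
#include <cstddef>   // offsetof

// Illustrative type: a standard-layout struct whose members we describe to MPI.
struct Particle {
    double pos[3];
    int    id;
};

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Build an MPI datatype matching Particle's in-memory layout.
    int          blocklens[2] = { 3, 1 };
    MPI_Aint     displs[2]    = { offsetof(Particle, pos), offsetof(Particle, id) };
    MPI_Datatype types[2]     = { MPI_DOUBLE, MPI_INT };
    MPI_Datatype particle_type;
    MPI_Type_create_struct(2, blocklens, displs, types, &particle_type);
    MPI_Type_commit(&particle_type);

    Particle p;
    if (rank == 0) {                      // run with at least 2 processes
        p.pos[0] = 1.0; p.pos[1] = 2.0; p.pos[2] = 3.0; p.id = 42;
        MPI_Send(&p, 1, particle_type, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&p, 1, particle_type, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Type_free(&particle_type);
    MPI_Finalize();
    return 0;
}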




[OMPI users] mpirun and ncurses

2006-06-13 Thread Ross Lance

I have been using termios.h to detect a keypress and then handle it
inside of a loop. After porting the code to MPI and running it under
mpirun, the loop now pauses, waiting for a carriage return before it
checks for a keypress.

I then tried ncurses with the nodelay() function; the loop continues, but
under mpirun it still requires a return before the input is handled,
whereas standalone it would respond to a keypress without a return.

Both methods above are contained within if( rank == 0 ) { } and in a for(;;)
loop.

And I am using an svn checkout of Open MPI.

I would like to be able to press a key within a loop in main to change
values and exit the loop. This is very easy with both methods above, but
mpirun seems to alter the behavior.

Do any of you know of a method to accomplish this within an MPI application?
I want to loop forever and respond to keyboard input when it comes in, or
shortly thereafter.

Ross
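
For reference, a minimal sketch of the termios approach described above
(an assumption, not a verified fix: mpirun forwards stdin to rank 0 and,
depending on the Open MPI version, may line-buffer that forwarding, in
which case no tty setting inside the application will help). With
VMIN = 0 and VTIME = 0, read() returns immediately whether or not a key
has arrived:

#include <mpi.h>
#include <termios.h>
#include <unistd.h>

// Put stdin into non-canonical, non-blocking mode: no line buffering,
// no echo, and read() returns at once even if no byte is available.
static void set_raw_stdin(termios &saved) {
    tcgetattr(STDIN_FILENO, &saved);
    termios raw = saved;
    raw.c_lflag &= ~(ICANON | ECHO);
    raw.c_cc[VMIN]  = 0;
    raw.c_cc[VTIME] = 0;
    tcsetattr(STDIN_FILENO, TCSANOW, &raw);
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    termios saved;
    if (rank == 0) set_raw_stdin(saved);

    int done = 0;
    while (!done) {
        if (rank == 0) {
            char c;
            if (read(STDIN_FILENO, &c, 1) == 1 && c == 'q')
                done = 1;                     // a key arrived; leave the loop
        }
        MPI_Bcast(&done, 1, MPI_INT, 0, MPI_COMM_WORLD);  // tell all ranks
        /* ... one iteration of real work here ... */
    }

    if (rank == 0) tcsetattr(STDIN_FILENO, TCSANOW, &saved);
    MPI_Finalize();
    return 0;
}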


Re: [OMPI users] Where are the nightly tarballs?

2006-06-13 Thread Jeff Squyres (jsquyres)
Whoops!  A config change on our server accidentally resulted in anyone
going to the snapshot directories being bounced back to the main web
page.  The technical details aren't interesting; it's fixed now.
 
Sorry about that -- thanks for reporting it to us!




From: users-boun...@open-mpi.org
[mailto:users-boun...@open-mpi.org] On Behalf Of Ken Mighell
Sent: Tuesday, June 13, 2006 1:46 PM
To: us...@open-mpi.org
Subject: [OMPI users] Where are the nightly tarballs?


Dear OpenMPI team, 

I was trying to pull over the latest nightly tarball from the
OpenMPI web site. 
Clicking on "Download" and then "Nightly snapshots" points to
the page
http://www.open-mpi.org/nightly/
which gives 3 links:
"1.0.x series"
"1.1.x series"
"Trunk"
which all point to the main OpenMPI page
(http://www.open-mpi.org/). 
So where is last night's tarball and what is its filename?

Best regards,

-Ken Mighell





[OMPI users] Where are the nightly tarballs?

2006-06-13 Thread Ken Mighell

Dear OpenMPI team,

I was trying to pull over the latest nightly tarball from the OpenMPI  
web site.

Clicking on "Download" and then "Nightly snapshots" points to the page
http://www.open-mpi.org/nightly/
which gives 3 links:
"1.0.x series"
"1.1.x series"
"Trunk"
which all point to the main OpenMPI page
(http://www.open-mpi.org/).
So where is last night's tarball and what is its filename?

Best regards,

-Ken Mighell




[OMPI users] OpenMPI, debugging, and Portland Group's pgdbg

2006-06-13 Thread Caird, Andrew J
Hello all,

I've read the thread "OpenMPI debugging support"
(http://www.open-mpi.org/community/lists/users/2005/11/0370.php) and it
looks like there is improved debugging support for debuggers other than
TV in the 1.1 series.

I'd like to use Portland Group's pgdbg.  It's a parallel debugger;
there's more information at http://www.pgroup.com/resources/docs.htm.

From the previous thread on this topic, it looks to me like the plan for
1.1 and forward is to support the ability to launch the debugger
alongside the application.  I don't know enough about either pgdbg or
OpenMPI to know if this is the best plan, but assuming that it is, is
there a way to see whether it is happening?

I've tried this two ways.  The first doesn't seem to attach to
anything:


[acaird@nyx-login ~]$ ompi_info | head -2
Open MPI: 1.1a9r10177
   Open MPI SVN revision: r10177
[acaird@nyx-login ~]$ mpirun --debugger pgdbg --debug  -np 2 cpi
PGDBG 6.1-3 x86-64 (Cluster, 64 CPU)
Copyright 1989-2000, The Portland Group, Inc. All Rights Reserved.
Copyright 2000-2005, STMicroelectronics, Inc. All Rights Reserved.
PGDBG cannot open a window; check the DISPLAY environment variable.
Entering text mode.

pgdbg> list
ERROR: No current thread.

pgdbg> quit



and I've tried running the whole thing under pgdbg:


[acaird@nyx-login ~]$ pgdbg mpirun -np 2 cpi -s pgdbgscript
  { lots of mca_* loaded by ld-linux messages }
pgserv 8726: attach : attach 8720 fails
ERROR: New Process (PID 8720, HOST localhost) ATTACH FAILED.
ERROR: New Process (PID 8720, HOST localhost) IGNORED.
ERROR: cannot read value at address 0x59BFE8.
ERROR: cannot read value at address 0x59BFF0.
ERROR: cannot read value at address 0x59BFF8.
ERROR: New Process (PID 0, HOST unknown) IGNORED.
ERROR: cannot read value at address 0x2A959BBEC8.


and it hangs right there until I kill it.  The two variables in this
scenario are PGRSH=ssh and the contents of pgdbgscript, which are:


pgienv exe force
pgienv mode process
ignore 12
run



So, the short list of questions is:

1. Has anyone done this successfully before?
2. Am I making the right assumptions about how the debugger attaches to
the processes?
3. Is this the expected behavior for this set of options to mpirun?
4. Does anyone have any suggestions for other things I might try?

Thanks a lot.
--andy
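
One more thing that may be worth trying, based on how the 1.1
debugger-launch mechanism was described in the thread linked above (an
untested assumption, so check the mpirun man page): the --debugger value
is a command template into which mpirun substitutes tokens such as
@mpirun@, @mpirun_args@, @np@, and @executable@ before launching.
Passing just "pgdbg" would then start the debugger without telling it
what to run, which matches the "No current thread" symptom. Something
like the following might do better:

shell$ mpirun --debugger "pgdbg @mpirun@ @mpirun_args@" --debug -np 2 cpi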



Re: [OMPI users] pnetcdf and OpenMPI

2006-06-13 Thread Brian Barrett
On Tue, 2006-06-13 at 10:51 -0700, Ken Mighell wrote:
> On May 6, 2006, Dries Kimpe reported a solution to getting 
> pnetcdf to compile correctly with OpenMPI.
> A patch was given for the file
> mca/io/romio/romio/adio/common/flatten.c
> Has this fix been implemented in the nightly series?

Yes, this has been fixed in the v1.1 and trunk nightly builds and in the
1.1b1 beta release.  It looks like we never followed up on the user's
mailing list with that information, but we fixed it around the time he
posted that fix.

Brian




[OMPI users] orted not found in spawn job on remote node

2006-06-13 Thread Pak Lui
Hi, I am not sure if it's a real issue or not. I run a user program
that calls MPI_Comm_spawn to launch on a remote node; the parent
processes get launched via ssh with no problem, but when the child
processes are launched, I get a message saying that orted is not
found. I've only set my --prefix when I run my spawn job.


If I do the spawn job on the local node only, it runs fine. And I can
work around the issue by manually setting -mca pls_rsh_orted to the
appropriate path.


But the FAQ says I don't need to set my PATH or LD_LIBRARY_PATH if I
have a prefix, so shouldn't the prefix_dir be stashed away for the
spawned child?


==
on node1, I run mpirun against the remote host:

% ./mpirun -np 4 -host node2 -prefix /ompi/trunk/builds/sparc32-g 
./mspawn2 -n 1 -C 5 -P 5

Password:
0: ./mspawn2: MPI_APPNUM = 1
2: ./mspawn2: MPI_APPNUM = 1
3: ./mspawn2: MPI_APPNUM = 1
1: ./mspawn2: MPI_APPNUM = 1
Password:
orted: Command not found.
^C^\Quit

--

Thanks,

- Pak Lui
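
For context, a minimal sketch of the kind of spawn call involved
(illustrative, not the actual mspawn2 code; "node2" and ./mspawn2 are
taken from the transcript above). The standard "host" info key places
the children; until the prefix is propagated to spawned children, the
-mca pls_rsh_orted workaround mentioned above appears to be the
practical fix.

#include <mpi.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    // Ask for two children on the remote node via the standard "host" key.
    // (Casts are for the old non-const MPI prototypes.)
    MPI_Info info;
    MPI_Info_create(&info);
    MPI_Info_set(info, (char *)"host", (char *)"node2");

    MPI_Comm intercomm;
    int errcodes[2];
    MPI_Comm_spawn((char *)"./mspawn2", MPI_ARGV_NULL, 2, info,
                   0 /* root rank */, MPI_COMM_WORLD, &intercomm, errcodes);

    MPI_Info_free(&info);
    MPI_Finalize();
    return 0;
}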


[OMPI users] pnetcdf and OpenMPI

2006-06-13 Thread Ken Mighell

On May 6, 2006, Dries Kimpe reported a solution to getting
pnetcdf to compile correctly with OpenMPI.
A patch was given for the file
mca/io/romio/romio/adio/common/flatten.c
Has this fix been implemented in the nightly series?

-Ken Mighell




Re: [OMPI users] gm bandwidth results disappointing

2006-06-13 Thread George Bosilca

On Jun 13, 2006, at 11:07 AM, Brock Palen wrote:


Here are the results with -mca mpi_leave_pinned.
The results are as expected; they are essentially identical to the mpich
results.  Thank you for the help.  I have attached a plot with all
three and the raw data for anyone's viewing pleasure.


We never doubted that :)



I'm still curious whether mpi_leave_pinned affects real jobs when it's
not included.  For the most part, large messages of the size
used in this test will never be seen, so the effect should be
negligible?  Clarification?
Note I am an admin, not an MPI programmer, and I have very little
experience with real code.


If you want to (and can) run some real applications with and without
this flag and compare them, that would be more than useful. We never
went deeper than some benchmarks on this topic. Additional information
is welcome ...


  Thanks,
george.







Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985


On Jun 13, 2006, at 10:38 AM, Brock Palen wrote:


I'll provide new numbers soon with --mca mpi_leave_pinned 1.
I'm curious how this affects real application performance.  This
of course is a synthetic test using NetPIPE.  What about regular apps
that move decent amounts of data but mostly want low latency?
Will that be affected?

Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985


On Jun 13, 2006, at 10:26 AM, George Bosilca wrote:

Unlike mpich-gm, Open MPI does not keep the memory pinned by
default.

You can force this by adding "--mca mpi_leave_pinned 1" to your
mpirun command or by adding it to the Open MPI configuration file
as described in the FAQ (performance section). I think that should be
the main reason why you're seeing such a degradation of
performance.

If this does not solve your problem, can you please provide the new
performance numbers as well as the output of the command "ompi_info
--param all all".

   Thanks,
 george.

On Jun 13, 2006, at 10:01 AM, Brock Palen wrote:


I ran a test using openmpi-1.0.2 on OS X vs. mpich-1.2.6 from
Myricom, and I get disappointing results from OMPI:
at one point there is a small drop in bandwidth for both MPI
libs, but Open MPI does not recover like mpich, and further on you
see a decrease in bandwidth for OMPI on gm.

I have attached a PNG and the outputs from the test (there are
two for OMPI).





Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985






Re: [OMPI users] gm bandwidth results disappointing

2006-06-13 Thread Brock Palen

Here are the results with -mca mpi_leave_pinned.
The results are as expected; they are essentially identical to the mpich
results.  Thank you for the help.  I have attached a plot with all
three and the raw data for anyone's viewing pleasure.


I'm still curious whether mpi_leave_pinned affects real jobs when it's
not included.  For the most part, large messages of the size used
in this test will never be seen, so the effect should be
negligible?  Clarification?
Note I am an admin, not an MPI programmer, and I have very little
experience with real code.


bwMCA.o1985
Description: Binary data


bwMPICH.o1978
Description: Binary data


bwOMPI.o1979
Description: Binary data


Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985


On Jun 13, 2006, at 10:38 AM, Brock Palen wrote:


I'll provide new numbers soon with --mca mpi_leave_pinned 1.
I'm curious how this affects real application performance.  This
of course is a synthetic test using NetPIPE.  What about regular apps
that move decent amounts of data but mostly want low latency?
Will that be affected?

Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985


On Jun 13, 2006, at 10:26 AM, George Bosilca wrote:


Unlike mpich-gm, Open MPI does not keep the memory pinned by default.
You can force this by adding "--mca mpi_leave_pinned 1" to your
mpirun command or by adding it to the Open MPI configuration file
as described in the FAQ (performance section). I think that should be
the main reason why you're seeing such a degradation of
performance.

If this does not solve your problem, can you please provide the new
performance numbers as well as the output of the command "ompi_info
--param all all".

   Thanks,
 george.

On Jun 13, 2006, at 10:01 AM, Brock Palen wrote:


I ran a test using openmpi-1.0.2 on OS X vs. mpich-1.2.6 from
Myricom, and I get disappointing results from OMPI:
at one point there is a small drop in bandwidth for both MPI
libs, but Open MPI does not recover like mpich, and further on you
see a decrease in bandwidth for OMPI on gm.

I have attached a PNG and the outputs from the test (there are
two for OMPI).





Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985








Re: [OMPI users] gm bandwidth results disappointing

2006-06-13 Thread George Bosilca
Well, if there is no reuse of the application buffers, then the two
approaches will give the same results. Because of our pipelined
protocol, it may even happen that we reach better performance
for large messages. If there is buffer reuse, the mpich-gm approach
will lead to better performance, but also to more pinned memory. In
fact, the main problem is not the memory but the memory hooks (like
the ones in libc) that the MPI library needs to take care of in
order to notice when one of the already-registered memory locations is
released (freed) by the user.

At least with our approach the user has the choice. By default we
turn it off, but it is really easy for any user to turn on
pinned-memory registration if he/she thinks their MPI application
requires it. There is a paper to be published this year at Euro PVM/
MPI which shows that for some *real* applications (like the NAS
benchmarks) there is no real difference. But there definitely is for
the ping-pong benchmark ...


  Thanks,
george.

On Jun 13, 2006, at 10:38 AM, Brock Palen wrote:


I'll provide new numbers soon with --mca mpi_leave_pinned 1.
I'm curious how this affects real application performance.  This
of course is a synthetic test using NetPIPE.  What about regular apps
that move decent amounts of data but mostly want low latency?
Will that be affected?

Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985


On Jun 13, 2006, at 10:26 AM, George Bosilca wrote:


Unlike mpich-gm, Open MPI does not keep the memory pinned by default.
You can force this by adding "--mca mpi_leave_pinned 1" to your
mpirun command or by adding it to the Open MPI configuration file
as described in the FAQ (performance section). I think that should be
the main reason why you're seeing such a degradation of
performance.

If this does not solve your problem, can you please provide the new
performance numbers as well as the output of the command "ompi_info
--param all all".

   Thanks,
 george.

On Jun 13, 2006, at 10:01 AM, Brock Palen wrote:


I ran a test using openmpi-1.0.2 on OS X vs. mpich-1.2.6 from
Myricom, and I get disappointing results from OMPI:
at one point there is a small drop in bandwidth for both MPI
libs, but Open MPI does not recover like mpich, and further on you
see a decrease in bandwidth for OMPI on gm.

I have attached a PNG and the outputs from the test (there are
two for OMPI).





Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985




"Half of what I say is meaningless; but I say it so that the other  
half may reach you"

  Kahlil Gibran
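
To make the buffer-reuse point concrete, here is a sketch of the pattern
where mpi_leave_pinned should pay off (illustrative; timing code
omitted). The same large buffer crosses the wire every iteration, so
caching its registration avoids re-pinning the pages each time:

#include <mpi.h>
#include <vector>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // One large buffer (8 MB of doubles), reused on every iteration.
    const int n = 1 << 20;
    std::vector<double> buf(n, 1.0);

    for (int iter = 0; iter < 100; ++iter) {
        if (rank == 0) {
            MPI_Send(&buf[0], n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&buf[0], n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&buf[0], n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&buf[0], n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    return 0;
}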




Re: [OMPI users] gm bandwidth results disappointing

2006-06-13 Thread Brock Palen

I'll provide new numbers soon with --mca mpi_leave_pinned 1.
I'm curious how this affects real application performance.  This
of course is a synthetic test using NetPIPE.  What about regular apps
that move decent amounts of data but mostly want low latency?

Will that be affected?

Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985


On Jun 13, 2006, at 10:26 AM, George Bosilca wrote:


Unlike mpich-gm, Open MPI does not keep the memory pinned by default.
You can force this by adding "--mca mpi_leave_pinned 1" to your
mpirun command or by adding it to the Open MPI configuration file
as described in the FAQ (performance section). I think that should be
the main reason why you're seeing such a degradation of performance.

If this does not solve your problem, can you please provide the new
performance numbers as well as the output of the command "ompi_info
--param all all".

   Thanks,
 george.

On Jun 13, 2006, at 10:01 AM, Brock Palen wrote:


I ran a test using openmpi-1.0.2 on OS X vs. mpich-1.2.6 from
Myricom, and I get disappointing results from OMPI:
at one point there is a small drop in bandwidth for both MPI
libs, but Open MPI does not recover like mpich, and further on you
see a decrease in bandwidth for OMPI on gm.

I have attached a PNG and the outputs from the test (there are
two for OMPI).





Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985








Re: [OMPI users] gm bandwidth results disappointing

2006-06-13 Thread George Bosilca
Unlike mpich-gm, Open MPI does not keep the memory pinned by default.
You can force this by adding "--mca mpi_leave_pinned 1" to your
mpirun command or by adding it to the Open MPI configuration file
as described in the FAQ (performance section). I think that should be
the main reason why you're seeing such a degradation of performance.

If this does not solve your problem, can you please provide the new
performance numbers as well as the output of the command "ompi_info
--param all all".


  Thanks,
george.

On Jun 13, 2006, at 10:01 AM, Brock Palen wrote:

I ran a test using openmpi-1.0.2 on OS X vs. mpich-1.2.6 from
Myricom, and I get disappointing results from OMPI:
at one point there is a small drop in bandwidth for both MPI
libs, but Open MPI does not recover like mpich, and further on you
see a decrease in bandwidth for OMPI on gm.

I have attached a PNG and the outputs from the test (there are
two for OMPI).






Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985


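
For reference, the two ways to set the flag that George describes. The
command-line form is from his message; the configuration-file locations
are the usual Open MPI 1.x defaults and should be checked against the
FAQ:

shell$ mpirun --mca mpi_leave_pinned 1 -np 2 ./my_app

or, persistently, one line in $HOME/.openmpi/mca-params.conf
(or <prefix>/etc/openmpi-mca-params.conf):

mpi_leave_pinned = 1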




Re: [OMPI users] Errors with MPI_Cart_create

2006-06-13 Thread Jeff Squyres (jsquyres)
Good to know -- thanks! 

> -Original Message-
> From: users-boun...@open-mpi.org 
> [mailto:users-boun...@open-mpi.org] On Behalf Of Brock Palen
> Sent: Tuesday, June 13, 2006 10:18 AM
> To: Open MPI Users
> Subject: Re: [OMPI users] Errors with MPI_Cart_create
> 
> After a lot of work, the same problem occurred with lam-7.1.1; I
> have passed this on to the vasp devs as best I could.  It does not
> appear to be an OMPI problem.
> 
> Brock Palen
> Center for Advanced Computing
> bro...@umich.edu
> (734)936-1985
> 
> 
> On Jun 13, 2006, at 10:11 AM, Jeff Squyres (jsquyres) wrote:
> 
> > This type of error *usually* indicates a programming error, but in  
> > this
> > case, it's so non-specific that it's not entirely clear that this  
> > is the
> > case.
> >
> > The Vasp code seems to be not entirely open, so I can't try this  
> > myself.
> > Can you try running vasp through a debugger and putting a  
> > breakpoint in
> > MPI_Cart_create?  The routine itself is not very long -- 
> you should be
> > able to step through it and see where it is generating this error.
> > Having a back trace of where this error is occurring would be  
> > extremely
> > helpful in diagnosing the real problem.
> >
> >
> >> -Original Message-
> >> From: users-boun...@open-mpi.org
> >> [mailto:users-boun...@open-mpi.org] On Behalf Of Brock Palen
> >> Sent: Thursday, June 08, 2006 2:38 PM
> >> To: Open MPI Users
> >> Subject: [OMPI users] Errors with MPI_Cart_create
> >>
> >> I have built vasp (http://cms.mpi.univie.ac.at/vasp/) for a user
> >> using openMPI-1.0.2 with the PGI 6.1 compilers.  At runtime I am
> >> getting the following OMPI errors:
> >>
> >> bash-3.00$ mpirun -np 2  vasp
> >> running on2 nodes
> >> [nyx.engin.umich.edu:16167] *** An error occurred in 
> MPI_Cart_create
> >> [nyx.engin.umich.edu:16167] *** on communicator MPI_COMM_WORLD
> >> [nyx.engin.umich.edu:16167] *** MPI_ERR_OTHER: known error not in list
> >> [nyx.engin.umich.edu:16167] *** MPI_ERRORS_ARE_FATAL (goodbye)
> >>
> >>
> >> These are regular OMPI error messages.  Is this a problem with the
> >> build (it's very complicated), with vasp as written, or with OMPI?
> >> Direction is very much appreciated.
> >>
> >> Brock



Re: [OMPI users] Errors with MPI_Cart_create

2006-06-13 Thread Brock Palen
After a lot of work, the same problem occurred with lam-7.1.1; I
have passed this on to the vasp devs as best I could.  It does not
appear to be an OMPI problem.


Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985


On Jun 13, 2006, at 10:11 AM, Jeff Squyres (jsquyres) wrote:

This type of error *usually* indicates a programming error, but in this
case, it's so non-specific that it's not entirely clear that this is the
case.

The Vasp code seems to be not entirely open, so I can't try this myself.
Can you try running vasp through a debugger and putting a breakpoint in
MPI_Cart_create?  The routine itself is not very long -- you should be
able to step through it and see where it is generating this error.
Having a back trace of where this error is occurring would be extremely
helpful in diagnosing the real problem.



-Original Message-
From: users-boun...@open-mpi.org
[mailto:users-boun...@open-mpi.org] On Behalf Of Brock Palen
Sent: Thursday, June 08, 2006 2:38 PM
To: Open MPI Users
Subject: [OMPI users] Errors with MPI_Cart_create

I have built vasp (http://cms.mpi.univie.ac.at/vasp/) for a user
using openMPI-1.0.2 with the PGI 6.1 compilers.  At runtime I am
getting the following OMPI errors:

bash-3.00$ mpirun -np 2  vasp
running on2 nodes
[nyx.engin.umich.edu:16167] *** An error occurred in MPI_Cart_create
[nyx.engin.umich.edu:16167] *** on communicator MPI_COMM_WORLD
[nyx.engin.umich.edu:16167] *** MPI_ERR_OTHER: known error not in list
[nyx.engin.umich.edu:16167] *** MPI_ERRORS_ARE_FATAL (goodbye)


These are regular OMPI error messages.  Is this a problem with the
build (it's very complicated), with vasp as written, or with OMPI?
Direction is very much appreciated.

Brock






Re: [OMPI users] gm bandwidth results disappointing

2006-06-13 Thread Galen M. Shipman

Hi Brock,

You may wish to try running with the runtime option:

-mca mpi_leave_pinned 1

This turns on registration caching and such.

- Galen

On Jun 13, 2006, at 8:01 AM, Brock Palen wrote:

I ran a test using openmpi-1.0.2 on OS X vs. mpich-1.2.6 from
Myricom, and I get disappointing results from OMPI:
at one point there is a small drop in bandwidth for both MPI
libs, but Open MPI does not recover like mpich, and further on you
see a decrease in bandwidth for OMPI on gm.

I have attached a PNG and the outputs from the test (there are
two for OMPI).






Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985






Re: [OMPI users] Errors with MPI_Cart_create

2006-06-13 Thread Jeff Squyres (jsquyres)
This type of error *usually* indicates a programming error, but in this
case, it's so non-specific that it's not entirely clear that this is the
case.

The Vasp code seems to be not entirely open, so I can't try this myself.
Can you try running vasp through a debugger and putting a breakpoint in
MPI_Cart_create?  The routine itself is not very long -- you should be
able to step through it and see where it is generating this error.
Having a back trace of where this error is occurring would be extremely
helpful in diagnosing the real problem.


> -Original Message-
> From: users-boun...@open-mpi.org 
> [mailto:users-boun...@open-mpi.org] On Behalf Of Brock Palen
> Sent: Thursday, June 08, 2006 2:38 PM
> To: Open MPI Users
> Subject: [OMPI users] Errors with MPI_Cart_create
> 
> I have built vasp (http://cms.mpi.univie.ac.at/vasp/) for a user
> using openMPI-1.0.2 with the PGI 6.1 compilers.  At runtime I am
> getting the following OMPI errors:
> 
> bash-3.00$ mpirun -np 2  vasp
> running on2 nodes
> [nyx.engin.umich.edu:16167] *** An error occurred in MPI_Cart_create
> [nyx.engin.umich.edu:16167] *** on communicator MPI_COMM_WORLD
> [nyx.engin.umich.edu:16167] *** MPI_ERR_OTHER: known error not in list
> [nyx.engin.umich.edu:16167] *** MPI_ERRORS_ARE_FATAL (goodbye)
> 
> 
> These are regular OMPI error messages.  Is this a problem with the
> build (it's very complicated), with vasp as written, or with OMPI?
> Direction is very much appreciated.
> 
> Brock
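
One common way to get the breakpoint Jeff suggests (a generic recipe,
not from this thread; it assumes gdb and a working X display): start
each rank in its own xterm under the debugger, set the breakpoint, and
grab a backtrace when the error fires.

shell$ mpirun -np 2 xterm -e gdb ./vasp
(gdb) break MPI_Cart_create
(gdb) run
...
(gdb) backtrace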



[OMPI users] gm bandwidth results disappointing

2006-06-13 Thread Brock Palen
I ran a test using openmpi-1.0.2 on OS X vs. mpich-1.2.6 from
Myricom, and I get disappointing results from OMPI:
at one point there is a small drop in bandwidth for both MPI libs,
but Open MPI does not recover like mpich, and further on you see a
decrease in bandwidth for OMPI on gm.

I have attached a PNG and the outputs from the test (there are two
for OMPI).


bwOMPI.o1969
Description: Binary data


bwOMPI.o1979
Description: Binary data


bwMPICH.o1978
Description: Binary data


Brock Palen
Center for Advanced Computing
bro...@umich.edu
(734)936-1985




Re: [OMPI users] Why does openMPI abort processes?

2006-06-13 Thread imran shaik
Hi Brian,
Thanks, that helps!

Imran

Brian Barrett wrote:
On Sun, 2006-06-11 at 04:26 -0700, imran shaik wrote:
> Hi,
> I sometimes get this error message:
> "2 additional processes aborted, possibly by openMPI"
>  
> Sometimes 2 processes, sometimes even more. Is it due to overload or
> a program error?
>  
> Why does openMPI actually abort few processes?
>  
> Can anyone explain?

Generally, this is because multiple processes in your job aborted
(exited with a signal or before MPI_FINALIZE) and mpirun only prints the
first abort message.  You can modify how many abort status messages you
want to receive with the -aborted X option to mpirun, where X is the
number of process abort messages you want to see.  The message generally
includes some information on what happened to your process.


Hope this helps,

Brian




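
For reference, a usage sketch of the option Brian describes (./my_app is
a placeholder): this asks mpirun to report up to 8 aborted processes
instead of only the first.

shell$ mpirun -np 8 -aborted 8 ./my_app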
