Re: [OMPI users] [OT] : Programming on PS3 Cell BE chip ?

2009-09-02 Thread Elvedin Trnjanin
I am not a game developer, but I don't think any games actually use MPI 
at all, let alone to communicate with the SPEs (if that isn't what you 
were implying, my mistake - I apologize). If you're interested in 
learning about programming on the Cell BE, take a look at IBM's 
"Redbook" on it. The PDF is free and is a wonderful resource that I 
wish I had when starting to work on the CBE.


http://www.redbooks.ibm.com/abstracts/sg247575.html

Keeping in mind that I'm not a game developer: the SPUs are used for 
game physics and AI, so there could be some shared data worked on in a 
"parallel way". Not all games use all of the SPUs, so those games could 
be multi-core sequential like on the Xbox. I found a post that 
specifies the SPU usage per game -


http://www.neogaf.com/forum/showpost.php?p=7598043=1

Ashika Umanga Umagiliya wrote:
Are all the commercial PS3 games developed in a "parallel way" (unlike 
the sequential style of Xbox development)?
Do the developers have to *think* in a parallel way and use MPI_*-like 
commands to communicate with the SPEs?






Re: [OMPI users] VMware and OpenMPI

2009-08-27 Thread Elvedin Trnjanin
This e-mail probably will not help you too much, but I'm pretty sure 
1.2.6 worked just fine: I ran a simple MPI program with OpenMPI 1.2.6 
and Debian Etch on top of ESX 4 without issue.


Lenny Verkhovsky wrote:

Hi all,
Does OpenMPI support VMware?
I am trying to run OpenMPI 1.3.3 on VMware and it got stuck during 
the OSU benchmarks and IMB.

Looks like a random deadlock; I wonder if anyone has ever tried it?
Thanks,
Lenny.






Re: [OMPI users] SHARED Memory----------------

2009-04-23 Thread Elvedin Trnjanin
Shared memory is used for send-to-self scenarios, such as when you're 
making use of multiple slots on the same machine; OpenMPI selects it 
automatically for processes on the same node.
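
If you want to verify (or force) this, the transport selection can be 
controlled on the mpirun command line. A minimal sketch, assuming a 
1.2-era build where the shared-memory component is called "sm":

mpirun --mca btl self,sm -np 4 ./your_program

Leaving the btl parameter unset lets OpenMPI pick shared memory 
automatically for on-node peers.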


shan axida wrote:

Hi,

Does anybody know how to make use of shared memory in the OpenMPI implementation?

Thanks








Re: [OMPI users] Open MPI 2009 released

2009-04-02 Thread Elvedin Trnjanin

Eugene,

This is a joke, right?


OpenMPI has had that since the 1.2 line.

Eugene Loh wrote:
Ah.  George, you should have thought about that.  I understand your 
eagerness to share this exciting news, but perhaps an April-1st 
announcement detracted from the seriousness of this grand development.


Here's another desirable MPI feature.  People talk about "error 
detection/correction".  We should do that in MPI.  Decide what the 
user *meant* to do, and do that "under the hood" rather than what they 
actually programmed.  Now, *that* would be a competitive advantage!  
(E.g., I'm spending a lot of time trying to get good latency 
measurements... why can't OMPI just detect that and give me good 
latency measurements!)


Anthony Thevenin wrote:



does anyone think of an April Fool's Day???

George Bosilca wrote:


The Open MPI Team, representing a consortium of bailed-out banks, car
manufacturers, and insurance companies, is pleased to announce the
release of the "unbreakable" / bug-free version Open MPI 2009,
(expected to be available by mid-2011).  This release is essentially a
complete rewrite of Open MPI based on new technologies such as C#,
Java, and object-oriented Cobol (so say we all!).  Buffer overflows
and memory leaks are now things of the past.  We strongly recommend
that all users upgrade to Windows 7 to fully take advantage of the new
powers embedded in Open MPI.







Re: [OMPI users] WRF, OpenMPI and PGI 7.2

2009-02-19 Thread Elvedin Trnjanin
That would be one way it dies, but we kept getting errors during 
compilation without the compilation process exiting, which is arguably 
worse than the behavior you saw.


OpenMPI's mpicc doesn't support the -cc flag, so it just passes it to 
pgcc, which doesn't support it either. The easy way to fix it is 
recompiling OpenMPI with gcc for the C/C++ side and the PGI Fortran 
compilers:


./configure CC=gcc CXX=g++ F77=pgf... FC=pgf... ...
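
For reference, a complete invocation might look something like this (a 
sketch only; pgf77/pgf90 are the usual PGI driver names, and the prefix 
is just an example):

./configure CC=gcc CXX=g++ F77=pgf77 FC=pgf90 --prefix=/opt/openmpi-gcc-pgi
make all install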

After that, edit the WRF configure.wrf and remove every instance of 
-cc=${SOMEVAR}. I think there should be two, but I don't have access to 
mine at the moment to tell you the exact variable names.
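
If you'd rather do it mechanically than by hand, something like the 
following should work (an untested sketch; it blindly strips every 
-cc= flag from configure.wrf, so check the result afterwards):

sed -i.bak -e 's/-cc=[^ ]*//g' configure.wrf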


Gerry Creager wrote:

Elvedin,

Yeah, I thought about that after finding a reference to this in the 
archives, so I redirected the path to MPI toward the gnu-compiled 
version.  It died in THIS manner:

make[3]: Entering directory `/home/gerry/WRFv3/WRFV3/external/RSL_LITE'
mpicc -cc=gcc -DFSEEKO64_OK  -w -O3 -DDM_PARALLEL   -c c_code.c
pgcc-Error-Unknown switch: -cc=gcc
make[3]: [c_code.o] Error 1 (ignored)

Methinks the WRF configuration script and makefile will need some 
tweaks.


Interesting thing: I have another system (alas, with MPICH) where it 
compiles just fine.  I'm trying to sort this out, as on two systems 
with OpenMPI it does odd dances before dying.


I'm still trying things.  I've gotta get this up both for MY research 
and to support other users.


Thanks, Gerry

Elvedin Trnjanin wrote:
WRF almost requires that you use gcc for the C/C++ part and the PGI 
Fortran compilers, if you choose that option. I'd suggest compiling 
OpenMPI in the same way, as that has resolved our various issues. Have 
you tried that with the same result?


Gerry Creager wrote:

Howdy,

I'm new to this list.  I've done a little review but likely missed 
something specific to what I'm asking.  I'll keep looking but need 
to resolve this soon.


I'm running a Rocks cluster (CentOS 5), with PGI 7.2-3 compilers, 
Myricom MX2 hardware and drivers, and OpenMPI 1.3.


I installed the Myricom roll, which has OpenMPI compiled with gcc.  I 
recently compiled the OpenMPI code with PGI.


I have MPICH_F90 pointing to the right place, and we're looking 
for the right includes and libs by means of LD_LIBRARY_PATH, etc.


When I tried to run, I got the following error:
make[3]: Entering directory `/home/gerry/WRFv3/WRFV3/external/RSL_LITE'
mpicc  -DFSEEKO64_OK  -w -O3 -DDM_PARALLEL   -c c_code.c
PGC/x86-64 Linux 7.2-3: compilation completed with warnings
mpicc  -DFSEEKO64_OK  -w -O3 -DDM_PARALLEL   -c buf_for_proc.c
PGC-S-0036-Syntax error: Recovery attempted by inserting identifier 
.Z before '(' (/share/apps/openmpi-1.3-pgi/include/mpi.h: 889)
PGC-S-0082-Function returning array not allowed 
(/share/apps/openmpi-1.3-pgi/include/mpi.h: 889)
PGC-S-0043-Redefinition of symbol, MPI_Comm 
(/share/apps/openmpi-1.3-pgi/include/mpi.h: 903)

PGC/x86-64 Linux 7.2-3: compilation completed with severe errors
make[3]: [buf_for_proc.o] Error 2 (ignored)

Note that I had modified the makefile to use PGI in place of gcc, 
and thus the PGI-compiled OpenMPI.


Thanks, Gerry








[OMPI users] Name Mangling

2008-12-04 Thread Elvedin Trnjanin
I'm using OpenMPI 1.2.5 and the PGI 7.1.5 compiler suite to get CLM 3.5 
working correctly. When compiling for OpenMPI, I encounter the following 
snippet of errors -


areaMod.o(.text+0x98a0): In function `areamod_map_checkmap_':
: undefined reference to `mpi_reduce_'
areaMod.o(.text+0x9b6c): In function `areamod_map_checkmap_':
: undefined reference to `mpi_reduce_'
areaMod.o(.text+0x9c39): In function `areamod_map_checkmap_':
: undefined reference to `mpi_reduce_'
areaMod.o(.text+0x9ea2): more undefined references to `mpi_reduce_'


When compiling for MPICH2, it works just fine. I assume this is going to lead 
to recompiling OpenMPI, so I am wondering which PGI name-mangling options to 
pass to either the OpenMPI build or the CLM build to get the names in order?
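
(For diagnosis, one way to see which mangling convention a given build 
actually exports is to dump the symbols from the Fortran wrapper 
library; a sketch, assuming a default /usr/local install path:

nm /usr/local/lib/libmpi_f77.so | grep -i mpi_reduce

If the symbols there don't carry the single trailing underscore seen in 
the undefined references above, the library and the application were 
built with mismatched Fortran compilers.)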

Thanks,
Elvedin



Re: [OMPI users] setup of a basic system on RHEL or Fedora

2008-04-04 Thread Elvedin Trnjanin
I neglected to consider that. My apologies to Terry Frankcombe 
 as he was correct. Now I have a follow-up 
question: how do we set the non-interactive PATH on a per-user basis? 
Although the shell used is my default bash, it seems my .bashrc or 
.bash_login are not read in non-interactive mode.
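
(One common workaround, sketched here on the assumption that your 
distribution's bash sources ~/.bashrc for remote ssh commands: put the 
exports at the very top of ~/.bashrc, before any "return if not 
interactive" guard, e.g.

export PATH=/usr/local/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/openmpi/lib:$LD_LIBRARY_PATH

The /usr/local/openmpi prefix is just an example; adjust it to wherever 
OpenMPI is actually installed.)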


George Bosilca wrote:
Looks like an interactive vs. non-interactive PATH problem. Please do 
a "ssh node02 printenv" and see if you get what you expect in the PATH.


  george.





Re: [OMPI users] setup of a basic system on RHEL or Fedora

2008-04-03 Thread Elvedin Trnjanin

http://www.open-mpi.org/software/ompi/v1.2/

Download either the gzip or bzip2 tarball, extract it, then run 
"./configure" and "make all install"; it's pretty simple. The library 
will go into /usr/local/lib, so you might need to add that path to your 
linker configuration. You can do this on all three systems. OpenMPI 
will handle everything else, as all you need is gigabit Ethernet.
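
The whole sequence is short; a sketch, where the 1.2.6 tarball name is 
just an example (use whichever 1.2 release you downloaded):

tar xjf openmpi-1.2.6.tar.bz2
cd openmpi-1.2.6
./configure
make all install   # run the install step as root if /usr/local isn't writable
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH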


Running programs is done with mpirun, and you can look at the manual 
page to get its arguments. The next important option is the 
"-machinefile" option for mpirun, where you can specify all the hosts 
you want to connect to. I suggest setting up SSH authorized keys/auto 
login on each system, as mpirun will go over SSH and will ask you to 
log in whenever you run a program.
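
A minimal sketch with hypothetical host names: one host per line in the 
machinefile, optionally with slot counts, then a key pushed to each node:

cat > machines <<EOF
node01 slots=2
node02 slots=2
node03 slots=2
EOF
ssh-keygen -t rsa
ssh-copy-id node02   # repeat for each host
mpirun -np 6 -machinefile machines ./my_mpi_program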


One thing about running programs is that the binaries need to be in the 
same absolute path on all systems. This means if you're running the 
program from /home/me on system1, the program you're running must also 
be in /home/me on all the other systems; OpenMPI will not transfer those 
binaries for you. An easy way to do this is to have an NFS mount for 
your MPI programs that all of the systems can access, and run from 
there. The system specs make no difference as long as you're not going 
to switch to a high-speed interconnect soon.
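
A sketch of the NFS approach, with a hypothetical file server exporting 
/export/mpi: add a line like this to /etc/fstab on each node,

fileserver:/export/mpi  /home/me/mpi  nfs  defaults  0 0

and build and run everything from /home/me/mpi so the absolute path 
matches everywhere.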


That should get you started; the documents on the OpenMPI web site 
should give you the answers for everything else.  
http://www.open-mpi.org/projects/user-docs/


clark...@clarktx.com wrote:
I am looking for the basic steps for setting up an MPI cluster on a RHEL 
or Fedora system with mpi-1.1.

IBM used to have a tutorial on this, but I cannot find a complete one now.
I have 3 white-box computers which I would like to set up, run basic 
programs on, and start working with MPI.
I currently plan to just set them up on a gigabit network.  All three 
are dual core, if that makes a difference, and I might get another one 
which is quad core in the near future.
I have found quite a few books on programming but not very much on the 
setup.  Judging by the size of the tutorial that was on the IBM site, 
someone has done a good job of making this simple and easy, but I 
haven't found the basic information on where to set up information on 
nodes and the like.




[OMPI users] Loopback Communication

2008-02-29 Thread Elvedin Trnjanin
I'm using a "ping pong" program to approximate the bandwidth and latency 
of various message sizes, and I notice when doing various transfer types 
(e.g. async) that the maximum bandwidth isn't the system's maximum 
bandwidth. I've looked through the FAQ and haven't noticed this being 
covered, but how does OpenMPI handle loopback communication? Is it still 
over a network interconnect, or some sort of shared-memory copy?
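
(For what it's worth, you can pin the transport explicitly and compare; 
a sketch, assuming the shared-memory component is called "sm" as in the 
1.2 series, and "pingpong" is the benchmark binary:

mpirun --mca btl self,sm -np 2 ./pingpong
mpirun --mca btl self,tcp -np 2 ./pingpong

If the first run is much faster, the default on-node path was already a 
shared-memory copy rather than the network.)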


- Elvedin