Re: [OMPI users] Question about compatibility issues

2009-01-26 Thread Ted Yu
Hi: I'm new to this group. I'm trying to implement a parallel quantum code called "Seqquest", and I'm trying to figure out why running it produces the following error: This job has allocated 2 cpus Signal:11 info.si_errno:0(Success) si_code:1(SEGV_

Re: [OMPI users] Cannot compile on Linux Itanium system

2009-01-26 Thread Joe Griffin
Tony, I don't know what iac is. I use ias for my ASM code: ia64b <82> cd /opt/intel ia64b <83> find . -name 'iac' ia64b <84> find . -name 'ias' ./fc/10.1.012/bin/ias ./cc/10.1.012/bin/ias Anyway, if you want another data point and see if my compilers work I will gladly try to compile if you sen

Re: [OMPI users] open-mpi_1.3, intel ompi_info compiling errors

2009-01-26 Thread Ralph Castain
Strange. We have successfully built 1.3 using Intel 11.0 and earlier versions on RHEL5 and Fedora 9 (only 11.0, of course). Can you send your configure? Perhaps there is something different there. On Jan 26, 2009, at 1:44 PM, Scot Breitenfeld wrote: Hi, I'm trying to compile from source open

Re: [OMPI users] Newbie needs help! MPI_Wait/MPI_Start/MPI_Issend

2009-01-26 Thread Hartzman, Leslie D (MS)
>> In the original process 'A' code, prior to sending out a command, >> 'A' will issue an MPI_Wait to make sure that the command request >> instance is free. >> > I'm not quite sure I understand that statement. Can't you just > compare the request to MPI_REQUEST_NULL? From your description, it
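
For readers following this thread: after MPI_Wait returns on a non-persistent request, the handle is reset to MPI_REQUEST_NULL, which is what makes the suggested comparison work. A minimal sketch of that pattern, not the poster's actual code (the send_command helper and cmd_buf buffer are hypothetical names):

    #include <mpi.h>

    static MPI_Request req = MPI_REQUEST_NULL;
    static int cmd_buf;   /* must stay valid until the nonblocking send completes */

    void send_command(int value, int dest)
    {
        if (req != MPI_REQUEST_NULL)             /* previous command still pending? */
            MPI_Wait(&req, MPI_STATUS_IGNORE);   /* completes it and nulls the handle */
        cmd_buf = value;
        MPI_Issend(&cmd_buf, 1, MPI_INT, dest, 0, MPI_COMM_WORLD, &req);
    }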

Re: [OMPI users] Cannot compile on Linux Itanium system

2009-01-26 Thread Iannetti, Anthony C. (GRC-RTB0)
Jeff, I could successfully compile OpenMPI versions 1.2.X on Itanium Linux with the same compilers. I was never able to compile the 1.3 beta versions on IA64 Linux. Joe, I am using whatever assembler that ./configure provides. I believe it is icc. Should I set AS (I think) to iac?

[OMPI users] open-mpi_1.3, intel ompi_info compiling errors

2009-01-26 Thread Scot Breitenfeld
Hi, I'm trying to compile from source open-mpi-1.3r20295 on a SUSE Linux 64-bit system (I also tried a 32-bit Linux system, same problem). I'm using Intel compilers version 11.0 (and 10.1) for Fortran, C/C++ (ifort, icc, icpc). The configure script completes with no errors, but when I do make it f

Re: [OMPI users] Handling output of processes

2009-01-26 Thread jody
That's cool then - I have written a shell script which automatically does the xhost stuff for all nodes in my hostfile :) On Mon, Jan 26, 2009 at 9:25 PM, Ralph Castain wrote: > > On Jan 26, 2009, at 1:20 PM, jody wrote: > >> Hi Brian >> >>> >>> I would rather not have mpirun doing an xhost comman

[OMPI users] Open MPI 1.3 segfault on amd64 with Rmpi

2009-01-26 Thread Dirk Eddelbuettel
I am chasing a segfault when I use Open MPI (1.3) with Rmpi (0.5.6), the MPI add-on package for R that is written and maintained by Prof Hao Yu (CC'ed). I should preface this by saying that the code runs just fine on a 32-bit Debian system at home. However, on amd64 running Ubuntu 8.10, I am seeing segfaults upon

Re: [OMPI users] Handling output of processes

2009-01-26 Thread Ralph Castain
On Jan 26, 2009, at 1:20 PM, jody wrote: Hi Brian I would rather not have mpirun doing an xhost command - I think that is beyond our comfort zone. Frankly, if someone wants to do this, it is up to them to have things properly setup on their machine - as a rule, we don't mess with your

Re: [OMPI users] Handling output of processes

2009-01-26 Thread jody
Typo there: "xceren" stands for "screen" - sorry :) On Mon, Jan 26, 2009 at 9:20 PM, jody wrote: > Hi Brian > >> >> I would rather not have mpirun doing an xhost command - I think that is >> beyond our comfort zone. Frankly, if someone wants to do this, it is up to >> them to have things properly

Re: [OMPI users] Handling output of processes

2009-01-26 Thread jody
Hi Brian > > I would rather not have mpirun doing an xhost command - I think that is > beyond our comfort zone. Frankly, if someone wants to do this, it is up to > them to have things properly setup on their machine - as a rule, we don't > mess with your machine's configuration. Makes sys admins u

Re: [OMPI users] Cannot compile on Linux Itanium system (Jeff Squyres)

2009-01-26 Thread Iannetti, Anthony C. (GRC-RTB0)
Jeff, I could compile OpenMPI versions 1.2.X on Itanium Linux with the same compilers. Thanks, Tony

Re: [OMPI users] Handling output of processes

2009-01-26 Thread Ralph Castain
Hi Jody I would rather not have mpirun doing an xhost command - I think that is beyond our comfort zone. Frankly, if someone wants to do this, it is up to them to have things properly setup on their machine - as a rule, we don't mess with your machine's configuration. Makes sys admins ups

Re: [OMPI users] Heterogeneous OpenFabrics hardware

2009-01-26 Thread Jeff Squyres
This scenario was not mentioned, but I'll bet it falls into the same general category. If an HCA has different run-time characteristics, regardless of whether they are caused by the OEM or the reseller, that's probably "heterogeneous enough" for this discussion. On Jan 26, 2009, at 2:41 P

Re: [OMPI users] Cannot compile on Linux Itanium system

2009-01-26 Thread Joe Griffin
Tony, I have a couple questions ... 1. It looks like you are creating atomic-asm.o with icc and not "ias". Is that correct? libtool: compile: icc -DHAVE_CONFIG_H -I. -I../../opal/include -I../../orte/include -I../../ompi/include -I../../opal/mca/paffinity/linux/plpa/src/libplpa -

Re: [OMPI users] Heterogeneous OpenFabrics hardware

2009-01-26 Thread Don Kerr
Jeff, Did the IWG say anything about there being a chip set issue? For example, what if a vendor, say Sun, wraps Mellanox chips on its own HCAs - would a Mellanox HCA and a Sun HCA work together? -DON On 01/26/09 14:19, Jeff Squyres wrote: The Interop Working Group (IWG) of the OpenFabrics Allianc

Re: [OMPI users] MPI_THREAD_MULTIPLE not provided

2009-01-26 Thread Jeff Squyres
MPI_THREAD_MULTIPLE support in the 1.2 series is unfortunately pretty broken/non-existent. The v1.3 series has MPI point-to-point support for several networks with MPI_THREAD_MULTIPLE; check the README file. On Jan 26, 2009, at 9:21 AM, Ali Copey wrote: Hi, I'm trying to get multiple th

Re: [OMPI users] Cannot compile on Linux Itanium system

2009-01-26 Thread Jeff Squyres
That's fairly strange; were you able to build Open MPI v1.2.x? I ask because the IA64 assembly hasn't changed between the two at all. On Jan 23, 2009, at 8:33 PM, Iannetti, Anthony C. (GRC-RTB0) wrote: Dear OpenMPI Users: I cannot compile OpenMPI 1.3 on my Itanium 2 system. Attached is

[OMPI users] Heterogeneous OpenFabrics hardware

2009-01-26 Thread Jeff Squyres
The Interop Working Group (IWG) of the OpenFabrics Alliance asked me to bring a question to the Open MPI user and developer communities: is anyone interested in having a single MPI job span HCAs or RNICs from multiple vendors? (pardon the cross-posting, but I did want to ask each group sep

Re: [OMPI users] Handling output of processes

2009-01-26 Thread Ralph Castain
Yes. The --tag-output option will prepend [job,rank] (or stderr, whichever is appropriate) to each line. I don't insert a colon, though I suppose that would easily be done for grep purposes. I just finished implementing the --output-filename option that will split the output from each rank

Re: [OMPI users] Handling output of processes

2009-01-26 Thread Douglas Guptill
Hello Ralph: Please forgive if this has already been covered... Have you considered prefixing each line of output from each process with something like "process_number" and a colon? That is what IBM's poe does. Separating the output is then easy: cat file | grep 0: > file.0 cat file | grep

Re: [OMPI users] Asynchronous behaviour of MPI Collectives

2009-01-26 Thread Jeff Squyres
Actually, I found out that the help message I pasted lies a little: the "number of buffers" parameter for both PP and SRQ types is mandatory, not optional. On Jan 23, 2009, at 2:59 PM, Jeff Squyres wrote: Here's a copy-n-paste of our help file describing the format of each: Per-peer receiv

Re: [OMPI users] Newbie needs help! MPI_Wait/MPI_Start/MPI_Issend

2009-01-26 Thread Jeff Squyres
On Jan 23, 2009, at 2:36 PM, Hartzman, Leslie D (MS) wrote: I’m trying to modify some code that is involved in point-to-point communications. Process A has a one-way mode of communication with Process B. ‘A’ checks to see if its rank is zero and, if so, will send a “command” to ‘B’ (MPI_Issen
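
Since the subject line also mentions MPI_Start: when the same send is issued over and over, a persistent request is an alternative to creating a fresh MPI_Issend request each time. A generic sketch of that pattern, not taken from the poster's code (the ranks, counts, and loop are illustrative only):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, command;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {                               /* the 'A' side */
            /* create the synchronous-send request once ... */
            MPI_Ssend_init(&command, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
            for (int i = 0; i < 10; ++i) {
                command = i;                           /* fill the buffer */
                MPI_Start(&req);                       /* ... then reuse it */
                MPI_Wait(&req, MPI_STATUS_IGNORE);     /* request stays allocated */
            }
            MPI_Request_free(&req);
        } else if (rank == 1) {                        /* the 'B' side */
            for (int i = 0; i < 10; ++i)
                MPI_Recv(&command, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
        }
        MPI_Finalize();
        return 0;
    }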

Re: [OMPI users] compile crash with pathscale and openmpi-1.3

2009-01-26 Thread Ralph Castain
FWIW: we build OMPI 1.3 under pathscale with -O3 without problem. However, we do not build the VT code, so it may only be a problem there. If you don't need VT, you might just configure to exclude that from the build. Ralph On Jan 26, 2009, at 10:16 AM, Jeff Squyres wrote: Yowza! Bumme

Re: [OMPI users] compile crash with pathscale and openmpi-1.3

2009-01-26 Thread Jeff Squyres
Yowza! Bummer. Please let us know what Pathscale says. On Jan 23, 2009, at 8:53 PM, Alain Miniussi wrote: FYI: I get the following problem when compiling openmpi-1.3 at -O2 and beyond: [alainm@rossini vtfilter]$pwd /misc/nice1/alainm/openmpi-1.3/ompi/contrib/vt/vt/tools/vtfilter [alain

Re: [OMPI users] Handling output of processes

2009-01-26 Thread Allen Barnett
On Sun, 2009-01-25 at 05:20 -0700, Ralph Castain wrote: > 2. redirect output of specified processes to files using the provided > filename appended with ".rank". You can do this for all ranks, or a > specified subset of them. A filename extension including both the comm size and the rank is h

Re: [OMPI users] Error compiling v1.3 with icc 10.1.021: PATH_MAX not defined

2009-01-26 Thread Jeff Squyres
Great; thanks! On Jan 26, 2009, at 4:11 AM, Andrea Iob wrote: Could you confirm that changing the last 3 files to use OMPI_PATH_MAX instead of PATH_MAX (without adding the #include) also fixes the problem? Yes, with OMPI_PATH_MAX the problem is also fixed. Andrea _

Re: [OMPI users] dead lock in MPI_Finalize

2009-01-26 Thread Bernard Secher - SFME/LGLS
Hi Jody, I don't think this is a problem of MPI_Sends that don't match corresponding MPI_Recvs, because all processes reach MPI_Finalize(). Otherwise, at least one process would be blocked before reaching MPI_Finalize. Bernard jody wrote: Hi Bernard The structure looks as far as I can s

[OMPI users] MPI_THREAD_MULTIPLE not provided

2009-01-26 Thread Ali Copey
Hi, I'm trying to get multiple threads running, and have Open MPI 1.2.8 compiled with threading enabled: xxx@xxx:/usr/lib$ ompi_info | grep Thread   Thread support: posix (mpi: yes, progress: no) However, when I attempt to get MPI_THREAD_MULTIPLE, ...FUNNELED or ...SERIALIZED I am return
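
For reference, the thread level the library actually grants can be checked at run time. A minimal sketch, assuming only the standard MPI C bindings (nothing Open-MPI-specific); the thread-level constants are ordered, so a simple comparison works:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;
        /* request the highest level and see what the library grants */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
        if (provided < MPI_THREAD_MULTIPLE)
            printf("requested MPI_THREAD_MULTIPLE, got level %d\n", provided);
        MPI_Finalize();
        return 0;
    }

As Jeff notes earlier in this thread, with the 1.2 series the granted level is typically lower than requested.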

Re: [OMPI users] Ompi runs thru cmd line but fails when run thru SGE

2009-01-26 Thread Reuti
On 25.01.2009 at 06:16, Sangamesh B wrote: Thanks Reuti for the reply. On Sun, Jan 25, 2009 at 2:22 AM, Reuti wrote: On 24.01.2009 at 17:12, Jeremy Stout wrote: The RLIMIT error is very common when using OpenMPI + OFED + Sun Grid Engine. You can find more information and several remedie

Re: [OMPI users] Error compiling v1.3 with icc 10.1.021: PATH_MAX not defined

2009-01-26 Thread Andrea Iob
> Could you confirm that changing the last 3 files to use OMPI_PATH_MAX instead of PATH_MAX (without adding the #include) also fixes the problem? Yes, with OMPI_PATH_MAX the problem is also fixed. Andrea

Re: [OMPI users] dead lock in MPI_Finalize

2009-01-26 Thread Bernard Secher - SFME/LGLS
Hello George, Thanks for your messages. Yes, I disconnect my different worlds before calling MPI_Finalize(). Bernard George Bosilca wrote: I was somewhat confused when I wrote my last email and mixed up the MPI versions (thanks to Dick Treumann for gently pointing me to the truth). Befor
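
"Disconnecting the worlds" here refers to calling MPI_Comm_disconnect on the inter-communicators that join the separate worlds before MPI_Finalize. A generic sketch of that pattern, not Bernard's code (the "worker" executable name and process count are hypothetical):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm child;

        MPI_Init(&argc, &argv);
        /* spawn a hypothetical child world; MPI_Comm_connect/accept or
           MPI_Comm_join produce the same kind of inter-communicator */
        MPI_Comm_spawn("worker", MPI_ARGV_NULL, 2, MPI_INFO_NULL, 0,
                       MPI_COMM_SELF, &child, MPI_ERRCODES_IGNORE);

        /* ... exchange data over 'child' ... */

        MPI_Comm_disconnect(&child);   /* collective: both worlds must call it */
        MPI_Finalize();                /* should no longer wait on the other world */
        return 0;
    }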

Re: [OMPI users] Handling output of processes

2009-01-26 Thread jody
Hi I have written some shell scripts which make it easy to direct the output to a separate xterm for each processor, for normal execution (run_sh.sh), gdb (run_gdb.sh), and valgrind (run_vg.sh). In order for the xterms to be shown on your machine, you have to set the DISPLAY variable on every host (if this is not done by ssh