[OMPI users] orterun, orted, and chroot

2008-07-31 Thread Adam C Powell IV
Greetings, I can't get OpenMPI programs to run in a chroot environment on Debian. If I run the program, it dies as follows: # ./ex0 [workhorse:23752] [0,0,0] ORTE_ERROR_LOG: Error in file runtime/orte_init_stage1.c at line 312 …
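
A common first check in this situation is whether the chroot contains the pseudo-filesystems and temporary directory the ORTE runtime expects. The sketch below is only that, a sketch: /srv/chroot is an illustrative stand-in for the actual jail, and the guess that orte_init is failing on its session directory is an assumption, not a diagnosis.

    # run as root on the host, before entering the chroot
    mount --bind /proc /srv/chroot/proc
    mount --bind /sys  /srv/chroot/sys
    mount --bind /dev  /srv/chroot/dev
    mount --bind /tmp  /srv/chroot/tmp   # ORTE creates its session directory under /tmp

If ./ex0 then starts, the chroot was simply missing a piece of the runtime environment rather than hitting anything OpenMPI-specific.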

Re: [OMPI users] Pathscale compiler and C++ bindings

2008-07-31 Thread Scott Beardsley
> we might be running different OS's. I'm running RHEL 4U4. CentOS 5.2 here.

Re: [OMPI users] Pathscale compiler and C++ bindings

2008-07-31 Thread Jeff Squyres
Lenny mentioned that it worked for him as well, but it definitely does not work for me. I'm not doing anything special as far as I know -- the only difference that I can think of is that we might be running different OS's. I'm running RHEL 4U4 (fairly ancient, but still fairly common).

[OMPI users] Pathscale compiler and C++ bindings

2008-07-31 Thread Scott Beardsley
I saw your comment regarding Pathscale-compiled OMPI and thought I'd bring the discussion over here. I'm attempting to reproduce the bug described in ticket 1326[1], using 1.2.6 (plus the MPI_CART_GET patch) with the 3.2 compiler. I'm using a hello++.cc actually written by Jeff and co. It seems …
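
A minimal sequence to reproduce this setup might look like the following (a sketch assuming the PathScale 3.2 drivers are installed as pathcc/pathCC/pathf90; the install prefix is illustrative):

    ./configure CC=pathcc CXX=pathCC F77=pathf90 FC=pathf90 \
        --prefix=/opt/openmpi-1.2.6-pathscale
    make all install
    # compile the C++-bindings test case and run it on two processes
    /opt/openmpi-1.2.6-pathscale/bin/mpiCC hello++.cc -o hello++
    /opt/openmpi-1.2.6-pathscale/bin/mpirun -np 2 ./hello++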

Re: [OMPI users] OpenMPI 1.4 nightly

2008-07-31 Thread Jeff Squyres
Oh yes, definitely -- there was a problem interacting with openib and self until a day or two ago (much angst on the devel list and ticket 1378 until it was fixed ;-) ). On Jul 31, 2008, at 11:38 AM, Gabriele Fatigati wrote: > I'm using 9005. I'll try the last version. Thanks. > 2008/7/31 Lenny V…

Re: [OMPI users] OpenMPI 1.4 nightly

2008-07-31 Thread Gabriele Fatigati
I'm using 9005. I'll try the last version. Thanks. 2008/7/31 Lenny Verkhovsky > try to use only openib > > make sure you use a nightly after r19092 > > On 7/31/08, Gabriele Fatigati wrote: >> >> Mm, I've tried to disable shared memory but the problem remains. Is it >> normal? >> >> 2008/7/31 Jeff Squ…

Re: [OMPI users] OpenMPI 1.4 nightly

2008-07-31 Thread Lenny Verkhovsky
Try to use only openib; make sure you use a nightly after r19092. On 7/31/08, Gabriele Fatigati wrote: > > Mm, I've tried to disable shared memory but the problem remains. Is it > normal? > > 2008/7/31 Jeff Squyres > >> There is very definitely a shared memory bug on the trunk at the moment >> that…
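
Spelled out, the suggestion is to restrict the BTL list so that only the InfiniBand transport is eligible, plus self, which handles a process sending to itself (the process count and executable name below are placeholders):

    mpirun --mca btl openib,self -np 4 ./hello_world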

Re: [OMPI users] MPI_BCast problem on multiple networks.

2008-07-31 Thread David Robson
Sorry, I should have given the version number. I'm running openmpi-1.2.4 on Fedora Core 6. Dave. Adrian Knoth wrote: On Thu, Jul 31, 2008 at 03:26:09PM +0100, David Robson wrote: It also works if I disable the private interface. Otherwise there are no network problems. I can ping any host…

Re: [OMPI users] OpenMPI 1.4 nightly

2008-07-31 Thread Gabriele Fatigati
Mm, I've tried to disable shared memory but the problem remains. Is it normal? 2008/7/31 Jeff Squyres > There is very definitely a shared memory bug on the trunk at the moment > that can cause hangs like this: > > https://svn.open-mpi.org/trac/ompi/ticket/1378 > > That being said, the v1.4 ni…

Re: [OMPI users] OpenMPI 1.4 nightly

2008-07-31 Thread Gabriele Fatigati
Thanks Jeff, very quick reply :) 2008/7/31 Jeff Squyres > There is very definitely a shared memory bug on the trunk at the moment > that can cause hangs like this: > > https://svn.open-mpi.org/trac/ompi/ticket/1378 > > That being said, the v1.4 nightly is our normal development head, so all…

Re: [OMPI users] MPI_BCast problem on multiple networks.

2008-07-31 Thread Adrian Knoth
On Thu, Jul 31, 2008 at 03:26:09PM +0100, David Robson wrote: > It also works if I disable the private interface. Otherwise there > are no network problems. I can ping any host from any other. > openmpi programs without MPI_BCast work OK. Weird. > Has anyone seen anything like this, or have any i…

Re: [OMPI users] OpenMPI 1.4 nightly

2008-07-31 Thread Jeff Squyres
There is very definitely a shared memory bug on the trunk at the moment that can cause hangs like this: https://svn.open-mpi.org/trac/ompi/ticket/1378 That being said, the v1.4 nightly is our normal development head, so all the normal rules and disclaimers apply (it's *generally* stable, …
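
For reference, the usual way to take the shared-memory BTL out of the picture is the ^ (exclusion) syntax, which leaves every other transport eligible; the executable below is a placeholder:

    mpirun --mca btl ^sm -np 4 ./hello_world   # use everything except the sm BTL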

[OMPI users] OpenMPI 1.4 nightly

2008-07-31 Thread Gabriele Fatigati
Dear OpenMPI users, I have installed an OpenMPI 1.4 nightly on an IBM blade system with InfiniBand, and I have some problems with MPI applications. A simple MPI hello world doesn't work: after dispatch, every CPU runs at over 100% but does nothing. The jobs appear locked. I compiled with --enable-mp…
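
When a hello world hangs with every CPU spinning, one quick sanity check (assuming the nightly's ompi_info is first in PATH) is to confirm which BTL components the build actually contains:

    ompi_info | grep btl   # expect openib, self, sm, and tcp among the listed components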

[OMPI users] MPI_BCast problem on multiple networks.

2008-07-31 Thread David Robson
Dear OpenMPI users, I have a problem with OpenMPI codes hanging in MPI_BCast ... All our nodes are connected to one LAN. However, half of them also have an interface to a second, private LAN. If the first OpenMPI process of a job starts on one of the dual-homed nodes, and a second process fr…
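
One classic cause of exactly this symptom is the TCP BTL on a dual-homed node choosing the private interface for connections that the single-homed nodes cannot reach, so the collective blocks. A hedged workaround (the interface name and executable are illustrative for this cluster) is to pin the TCP BTL to the shared LAN:

    mpirun --mca btl_tcp_if_include eth0 -np 8 ./bcast_test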

Re: [OMPI users] Missing F90 modules

2008-07-31 Thread Scott Beardsley
Ashley Pittman wrote: > Nothing to do with Fortran, but I think I'm right in saying a lot of these command-line options aren't needed; you simply set --prefix and the rest of the options default to be relative to that. Ya, I stole it from the OFED rpmbuild log. I wanted to reproduce exactly what…
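
In other words, the long OFED-derived invocation can usually collapse to something like the following (a sketch; the prefix and Fortran compilers are illustrative), after which the F90 module file should turn up under the prefix:

    ./configure --prefix=/opt/openmpi-1.2.6 F77=pathf90 FC=pathf90
    make all install
    find /opt/openmpi-1.2.6 -name 'mpi*.mod'   # locate the installed F90 module file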

Re: [OMPI users] Missing F90 modules

2008-07-31 Thread Ashley Pittman
On Wed, 2008-07-30 at 10:45 -0700, Scott Beardsley wrote: > I'm attempting to move to OpenMPI from another MPICH-derived > implementation. I compiled openmpi 1.2.6 using the following configure: > > ./configure --build=x86_64-redhat-linux-gnu > --host=x86_64-redhat-linux-gnu --target=x86_64-redh…