Re: [OMPI users] mpiblast only run in one node

2010-04-01 Thread longbow leo
Thanks a lot for your reply. Now mpiBLAST runs on only one node, both inside and outside a Torque job. How can I set up a hostlist for Open MPI? I haven't found this in the Open MPI FAQ. Thanks. The "ompi_info | grep tm" output is: MCA memory: ptmalloc2 (MCA v2.0, API v2.0, Co
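For runs outside a Torque job, a hostfile is the usual way to give Open MPI a hostlist. A minimal sketch (the node names and slot counts below are placeholders for your cluster, and `mpiblast` stands in for the actual command line):

```shell
# A hostfile is a plain text file listing node names, optionally with
# per-node slot counts (names below are placeholders):
cat > myhosts <<'EOF'
node1 slots=4
node2 slots=4
EOF

# Then pass it to mpirun; without any hostlist (and outside a Torque job),
# Open MPI schedules every rank on the local host:
# mpirun --hostfile myhosts -np 8 mpiblast ...
```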

Re: [OMPI users] Hide Abort output

2010-04-01 Thread Yves Caniou
For information, I use the Debian-packaged Open MPI 1.4.1. Cheers. .Yves. On Wednesday 31 March 2010 12:41:34, Jeff Squyres (jsquyres) wrote: > At present there is no such feature, but it should not be hard to add. > > Can you guys be a little more specific about exactly what you are s

Re: [OMPI users] Hide Abort output

2010-04-01 Thread Yves Caniou
Indeed, the small paragraph can be misunderstood, but that wasn't the goal of my question. The fact is that the message can appear in the middle of the logs, which forces post-processing of the outputs even when the program ends normally (the workflow does not end with a join node). I just want to be
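Until such a feature exists, one workaround for this post-processing is to filter the abort banner out of the captured logs after the run. A sketch, assuming the banner contains the text "MPI_ABORT was invoked" (the exact wording of Open MPI's message may differ by version):

```shell
# Simulate a captured run log containing Open MPI's abort banner
# (the banner text here is an assumption about the message wording):
printf 'result: 42\nMPI_ABORT was invoked on rank 0\n' > raw.log

# Post-process: drop the banner line before further treatment of the output
grep -v "MPI_ABORT was invoked" raw.log > clean.log
cat clean.log
```

In a real workflow, `raw.log` would come from something like `mpirun -np 4 ./my_app 2>&1 | tee raw.log`.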

Re: [OMPI users] kernel 2.6.23 vs 2.6.24 - communication/wait times

2010-04-01 Thread Oliver Geisler
> However, reading through your initial description on Tuesday, none of these > fit: you want to actually measure the kernel time spent on TCP communication. > Since the problem also occurs in a single-node configuration and the MCA option btl = self,sm,tcp is used, I doubt it has to do with TCP communi

Re: [OMPI users] kernel 2.6.23 vs 2.6.24 - communication/wait times

2010-04-01 Thread Rainer Keller
Hello, On Thursday 01 April 2010 12:16:25 pm Oliver Geisler wrote: > Does anyone know a benchmark program I could use for testing? There's an abundance of benchmarks (IMB, NetPIPE, SKaMPI...) and performance analysis tools (Scalasca, Vampir, Paraver, Opt, Jumpshot). However, reading through you
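A typical latency/bandwidth run with one of these benchmarks can be sketched as follows (the binary names and the hostfile path are assumptions about your particular install, so the commands are shown without execution):

```shell
# Two-process ping-pong across two nodes via a hostfile (names are placeholders).
# NetPIPE's MPI binary is commonly built as NPmpi:
# mpirun --hostfile myhosts -np 2 ./NPmpi
#
# Or the Intel MPI Benchmarks ping-pong test:
# mpirun --hostfile myhosts -np 2 ./IMB-MPI1 PingPong
```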

Re: [OMPI users] Number of processes and spawn

2010-04-01 Thread Ralph Castain
Hi there! It will be in the 1.5.0 release, but not 1.4.2 (couldn't backport the fix). I understand that will come out sometime soon, but no firm date has been set. On Apr 1, 2010, at 4:05 AM, Federico Golfrè Andreasi wrote: > Hi Ralph, > > > I've downloaded and tested the openmpi-1.

Re: [OMPI users] kernel 2.6.23 vs 2.6.24 - communication/wait times

2010-04-01 Thread longbow leo
You could try the MPI Testing Tool (MTT, http://www.open-mpi.org/projects/mtt/). 2010/4/2 Oliver Geisler > Does anyone know a benchmark program I could use for testing?

Re: [OMPI users] kernel 2.6.23 vs 2.6.24 - communication/wait times

2010-04-01 Thread Oliver Geisler
Does anyone know a benchmark program I could use for testing?

[OMPI users] openMPI-1.4.1 on Windows

2010-04-01 Thread NovA
Dear developers, I'm attempting to use Open MPI 1.4.1 on Windows XP x64. Almost everything is working fine now, but in the process I've faced several problems, and some of them remain... (1) There were problems configuring Open MPI using the latest CMake 2.8.1. Fortunately this was described in mail-l

Re: [OMPI users] mpiblast only run in one node

2010-04-01 Thread Jeff Squyres (jsquyres)
Are you running your job inside a Torque job? If not, Open MPI will not have a hostlist and will assume that it should launch everything on the localhost. If you are launching inside a Torque job, ensure that OMPI was properly built with Torque support: run ompi_info | grep tm If you
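The check above can be sketched as follows (the exact component lines vary by build, so the expected output is an assumption and the command is shown without execution):

```shell
# If Open MPI was built with --with-tm, Torque (tm) components appear here:
# ompi_info | grep tm
#
# Roughly expected: lines such as "MCA ras: tm (MCA v2.0, ...)" and
# "MCA plm: tm (MCA v2.0, ...)". If the only match is the unrelated
# "MCA memory: ptmalloc2 ..." line, the build has no Torque support.
```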

Re: [OMPI users] Number of processes and spawn

2010-04-01 Thread Federico Golfrè Andreasi
Hi Ralph, I've downloaded and tested the openmpi-1.7a1r22817 snapshot, and it works fine for (multiple) spawning of more than 128 processes. That fix will be included in the next release of Open MPI, right? Do you know when it will be released? Or where I can find that info? Thank you,

[OMPI users] mpiblast only run in one node

2010-04-01 Thread longbow leo
Hi, I've installed Torque/Maui, Open MPI 1.4.1 and mpiBLAST 1.6.0-beta1 on a Linux Beowulf cluster with 15 nodes (node1~15). Open MPI and mpiBLAST were installed, and my home directory lies, in a directory "/data" which is shared by all nodes. Open MPI was compiled with "--with-tm" to support Torque
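A build along these lines can be sketched as follows (the install prefix and the Torque location are placeholders for your system, so the commands are shown without execution):

```shell
# Configure Open MPI with Torque/tm support; paths are placeholders:
# ./configure --prefix=/data/openmpi --with-tm=/usr/local/torque
# make && make install
#
# Afterwards, "ompi_info | grep tm" should list tm components,
# and mpirun inside a Torque job will take its hostlist from Torque.
```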