Re: [OMPI devel] configure broken

2009-10-22 Thread Ralph Castain
Ah, yes - I see what you mean. Thanks for catching it! On Oct 22, 2009, at 7:22 PM, George Bosilca wrote: If you go to the top of the configure output, basically just after the detection of the revision, you will see this warning. Somehow, it doesn't stop configure from working, but it …

Re: [OMPI devel] configure broken

2009-10-22 Thread George Bosilca
If you go to the top of the configure output, basically just after the detection of the revision, you will see this warning. Somehow, it doesn't stop configure from working, but it prevents it from doing what it was supposed to do, as the two conditionals have the wrong values. …

Re: [OMPI devel] configure broken

2009-10-22 Thread Tim Mattox
I just fixed it in r22130. On Thu, Oct 22, 2009 at 9:16 PM, Ralph Castain wrote: > Most interesting - I have been building on Mac OSX both yesterday and today > with those changes without problem. I am on Snow Leopard, but I checked and > "true" is indeed in /usr/bin. > > I'm not seeing any …

Re: [OMPI devel] configure broken

2009-10-22 Thread Ralph Castain
Most interesting - I have been building on Mac OSX both yesterday and today with those changes without problem. I am on Snow Leopard, but I checked and "true" is indeed in /usr/bin. I'm not seeing any warnings or problems. Perhaps a difference in configuration? Though I did also test it …

[OMPI devel] configure broken

2009-10-22 Thread George Bosilca
There seems to be an issue with the latest changes to the configure scripts. A careful reading of the output of configure on Mac OS X shows a lot of errors/warnings, which lead to undefined AM_CONDITIONALs (PROJECT_OMPI_*). This apparently comes from configure.ac line 62, where the path to …
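George's report can be illustrated with a minimal, hypothetical shell sketch (this is not the actual configure.ac code, and the variable name is invented): if the probe for the `true` executable comes back empty, every conditional derived from it silently takes the wrong value while configure itself keeps going.

```shell
# Hypothetical sketch of the failure mode, NOT the real configure.ac code.
# If the probe for "true" fails, TRUE_PATH is empty and the conditional
# below takes the wrong branch -- yet configure still appears to succeed.
TRUE_PATH=`which true`
if test -n "$TRUE_PATH"; then
    echo "project_ompi_enabled=yes"
else
    echo "project_ompi_enabled=no"
fi
```

On a normal system `which true` finds /usr/bin/true (or /bin/true) and the first branch is taken; the bug report describes the probe coming back empty despite the binary being present.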

Re: [OMPI devel] 0.9.1rc2 is available

2009-10-22 Thread David Singleton
Chris Samuel wrote: - "Ashley Pittman" wrote: $ grep Cpus_allowed_list /proc/$$/status Useful, ta! Does this imply the default is to report on processes in the current cpuset rather than the entire system? Does anyone else feel that violates the principle of least surprise? Not really …

Re: [OMPI devel] 0.9.1rc2 is available

2009-10-22 Thread Chris Samuel
- "Ashley Pittman" wrote: > $ grep Cpus_allowed_list /proc/$$/status Useful, ta! > Does this imply the default is to report on processes > in the current cpuset rather than the entire system? > Does anyone else feel that violates the principle of > least surprise? Not really, I feel that …
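The command quoted above reads the kernel's per-process CPU mask; inside a cpuset (e.g. under a Torque job) it shows only the CPUs the process is allowed on, not the whole machine, which is the behaviour being debated. A minimal check, assuming a Linux /proc filesystem:

```shell
# Which CPUs may the current shell run on?  Inside a cpuset/cgroup this
# is the restricted list, not the full set of CPUs in the system.
grep Cpus_allowed_list /proc/$$/status
```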

[OMPI devel] SVN/trac server needs to reboot, Oct 23, 8am

2009-10-22 Thread Jeff Squyres
FYI. Begin forwarded message: From: "Kim, DongInn" Date: October 22, 2009 1:46:47 PM EDT Subject: SVN/trac server needs to reboot, Oct 23, 8am We need to reboot the SVN/trac server (sourcehaven) to replace RAM on Oct 23, Friday. Date: Friday Oct 23, 2009. Time: - 5:00am-5:30am Pacific US time - …

Re: [OMPI devel] Stack traces and message queues in MTT results.

2009-10-22 Thread Ashley Pittman
Thanks! Ethan has committed the code to svn if other sites want to enable it. It might be worth considering having one of the test sites run debug code, to be able to see variable values inside the MPI library. It also appears that message queue support is broken on the head; the DLL is …

Re: [OMPI devel] Stack traces and message queues in MTT results.

2009-10-22 Thread Jeff Squyres
That rocks!! On Oct 22, 2009, at 6:19 AM, Ashley Pittman wrote: Thanks to Sun, who have integrated padb with MTT, there are now padb traces in the automated testing logs of OpenMPI for cases where the tests have timed out. These traces show the contents of MPI message queues, stack traces and …

Re: [OMPI devel] processor affinity -- OpenMPI / batch system integration

2009-10-22 Thread Rayson Ho
The code for the Job to Core Binding (a.k.a. thread binding, or CPU binding) feature was checked into the Grid Engine project CVS. It uses OpenMPI's Portable Linux Processor Affinity (PLPA) library, and is topology- and NUMA-aware. The presentation from HPC Software Workshop '09: http://wikis.sun.com
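As an illustration of core binding only (this uses the util-linux `taskset` tool from the shell, not the PLPA API that the Grid Engine feature uses internally), pinning a command to one core and inspecting the result looks like:

```shell
# Run a command pinned to CPU 0 and show the resulting allowed mask.
# taskset is from util-linux; core "0" is just an example choice.
taskset -c 0 grep Cpus_allowed_list /proc/self/status
```

The PLPA library wraps the same underlying sched_setaffinity(2) mechanism in a portable way across Linux kernel versions.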

Re: [OMPI devel] 0.9.1rc2 is available

2009-10-22 Thread Jeff Squyres
I added tickets #21 and #22 about these features. https://svn.open-mpi.org/trac/hwloc/ticket/21 https://svn.open-mpi.org/trac/hwloc/ticket/22 Thanks! On Oct 22, 2009, at 5:54 AM, Ashley Pittman wrote: On Thu, 2009-10-22 at 11:05 +0200, Brice Goglin wrote: > Ashley Pittman wrote: > >

Re: [OMPI devel] A minor nit in the mpirun manpage

2009-10-22 Thread Terry Dontje
Paul H. Hargrove wrote: Sorry if this has been fixed for 1.3.4, but in the manpage for mpirun in 1.3.3 I noticed the following in the "MCA" section of the manpage: Note: The -mca switch is simply a shortcut for setting environment variables. The same effect may be …

[OMPI devel] Stack traces and message queues in MTT results.

2009-10-22 Thread Ashley Pittman
Thanks to Sun, who have integrated padb with MTT, there are now padb traces in the automated testing logs of OpenMPI for cases where the tests have timed out. These traces show the contents of MPI message queues, stack traces and, where available, local variables and parameters to functions. An …

Re: [OMPI devel] 0.9.1rc2 is available

2009-10-22 Thread Ashley Pittman
On Thu, 2009-10-22 at 11:05 +0200, Brice Goglin wrote: > Ashley Pittman wrote: > > Does this imply the default is to report on processes in the current > > cpuset rather than the entire system? Does anyone else feel that > > violates the principle of least surprise? > Yes, by default, it's the …

Re: [OMPI devel] 0.9.1rc2 is available

2009-10-22 Thread Brice Goglin
Ashley Pittman wrote: >> [csamuel@tango069 ~]$ ~/local/hwloc/0.9.1rc2/bin/lstopo >> System(31GB) >> Node#0(15GB) + Socket#0 + L3(6144KB) + L2(512KB) + L1(64KB) + Core#0 + P#0 >> Node#1(16GB) + Socket#1 + L3(6144KB) >> L2(512KB) + L1(64KB) + Core#0 + P#4 >> L2(512KB) + L1(64KB) + Core#1

Re: [OMPI devel] 0.9.1rc2 is available

2009-10-22 Thread Ashley Pittman
On Thu, 2009-10-22 at 10:37 +1100, Chris Samuel wrote: > - "Chris Samuel" wrote: > > > Some sample results below for configs not represented > > on the current website. > > A final example of a more convoluted configuration with > a Torque job requesting 5 CPUs on a dual Shanghai node …

Re: [OMPI devel] 0.9.1rc2 is available

2009-10-22 Thread Chris Samuel
- "Tony Breeds" wrote: > Powerpc kernels that old do not have the topology information needed > (in /sys or /proc/cpuinfo), so for the short term that's the best we > can do. That's fine, I quite understand. I'm trying to get that cluster replaced anyway.. ;-) > FWIW I'm looking at how we …
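The "topology information" Tony refers to lives in sysfs on newer kernels. A quick way to check whether a machine exposes it (paths assume a standard Linux /sys layout):

```shell
# Newer kernels expose per-CPU topology here; old PowerPC kernels do not,
# which is why hwloc cannot build a full topology on them.
cat /sys/devices/system/cpu/cpu0/topology/physical_package_id
cat /sys/devices/system/cpu/cpu0/topology/core_id
```

Each file holds a small integer; hwloc (and /proc/cpuinfo parsing as a fallback) assembles the socket/core/thread tree from these values.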

[OMPI devel] A minor nit in the mpirun manpage

2009-10-22 Thread Paul H. Hargrove
Sorry if this has been fixed for 1.3.4, but in the manpage for mpirun in 1.3.3 I noticed the following in the "MCA" section of the manpage: Note: The -mca switch is simply a shortcut for setting environment variables. The same effect may be accomplished by setting …
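The equivalence the manpage describes: `-mca <param> <value>` on the mpirun command line is shorthand for exporting `OMPI_MCA_<param>=<value>` in the environment before launch. A sketch (the parameter name `btl` and value `tcp,self` are just illustrative):

```shell
# Equivalent to "mpirun -mca btl tcp,self ./a.out":
param=btl
value="tcp,self"
export OMPI_MCA_${param}="${value}"
env | grep '^OMPI_MCA_btl='
```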