On May 19, 2010, at 2:19 PM, Michael E. Thomadakis wrote:

> I would like to build OMPI V1.4.2 and make it available to our users at the 
> Supercomputing Center at TAMU. Our system is a 2-socket, 4-core Nehalem 
> @2.8GHz, 24GiB DRAM / node, 324 nodes connected to 4xQDR Voltaire fabric, 
> CentOS/RHEL 5.4.

Sorry for the delay in replying...

> 1) high-resolution timers: how do I specify the HRT linux timers in the
>       --with-timer=TYPE
>  line of ./configure ?

You shouldn't need to do anything; the "linux" timer component of Open MPI 
should get automatically selected.  You should be able to see this in the 
stdout of Open MPI's "configure", and/or if you run ompi_info | grep timer -- 
there should only be one entry: linux.
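
For example (the exact version strings will differ with your build):

   shell$ ompi_info | grep timer
                 MCA timer: linux (MCA v2.0, API v2.0, Component v1.4.2)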

> 2) I have installed BLCR v0.8.2, but when I try to build OMPI and point it
> at the full installation, it complains that it cannot find it.  Note that I
> built BLCR with GCC but am building OMPI with the Intel compilers (v11.1).

Can you be more specific here?
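
In the meantime, the usual way to point configure at an external BLCR install
is something like this (the paths are placeholders for your actual BLCR
prefix):

   shell$ ./configure --with-ft=cr --enable-ft-thread --enable-mpi-threads \
       --with-blcr=/path/to/blcr --with-blcr-libdir=/path/to/blcr/lib \
       CC=icc CXX=icpc F77=ifort FC=ifort

Mixing a GCC-built BLCR with an Intel-built Open MPI should generally be fine
(BLCR exposes a plain C interface), but double check that the libdir you pass
actually contains libcr.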

> 3) Does OMPI by default use SHM for intra-node message IPC but revert to IB
> for inter-node?

Yes.  You can force this, but it's usually unnecessary:

   mpirun --mca btl sm,self,openib

sm: shared memory transport
self: process loopback transport (i.e., send to self; not send to others on the 
same host)
openib: OpenFabrics transport

> 4) How could I select the high-speed transport, say DAPL or OFED IB verbs?
> Is there any preference as to the specific high-speed transport over QDR IB?

openib is the preferred Open MPI plugin (the name is somewhat outdated, but 
it's modern OpenFabrics verbs -- see 
http://www.open-mpi.org/faq/?category=openfabrics#why-openib-name).
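
You can sanity check that the openib BTL actually got built into your
installation with something like (again, version strings will vary):

   shell$ ompi_info | grep openib
                 MCA btl: openib (MCA v2.0, API v2.0, Component v1.4.2)

If that line is missing, configure probably didn't find your OFED headers and
libraries.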

> 5) When we launch MPI jobs via PBS/TORQUE do we have control on the task and 
> thread placement on nodes/cores ?

Yes.  Check out the man page for mpirun(1).
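
For example, with the 1.4 series, something like the following lays out 8
processes per node round-robin by core and binds each one to its core (option
names have changed across versions, so check mpirun(1) for your exact
release):

   shell$ mpirun -npernode 8 --bycore --bind-to-core --report-bindings ./a.out

When Open MPI is built with Torque/PBS (TM) support, mpirun discovers the
allocated nodes from the job itself, so no hostfile is needed.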

> 6) Can we suspend/restart cleanly OMPI jobs with the above scheduler ? Any 
> caveats on suspension / resumption of OMPI jobs ?

I'll let Josh handle this -- he's the checkpoint/restart guy.
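
(For reference, assuming a build with --with-ft=cr and BLCR, the basic
workflow is something like:

   shell$ ompi-checkpoint <PID of mpirun>
   shell$ ompi-restart <global snapshot handle>

where ompi-checkpoint prints the snapshot handle.  Josh can fill in the
details and caveats.)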

> 7) Do you have any performance data comparing OMPI vs, say, MVAPICH2 and
> Intel MPI?  This is not a political issue, since I am going to be providing
> all these MPI stacks to our users.

Heh; that's a loaded question no matter how you ask it.  ;-)

The truth is that every MPI will claim to be the greatest (you should see the 
marketing charts that Intel MPI puts out at the Sonoma OpenFabrics workshop 
every year!).  We're all on par with each other for all the major metrics.  
Some MPIs choose to optimize certain metrics that others do not -- so you can 
always find a benchmark that shows "this MPI is great and the others suck!!" 
(which is what the marketing guys capitalize on).  Each MPI has its benefits 
and drawbacks; we think Open MPI has great performance *and* a very large 
feature set that the other MPIs do not have.  These are among the reasons we 
continue to develop and extend Open MPI.

That's a non-answer way of saying that we don't really want to get into a 
benchmark war here on a google-able mailing list. :-)  It is probably best to 
do a little benchmarking yourself with apps that you know, understand, and 
control, in your environment.  See what works best for you.  Be careful to 
run apples-to-apples comparisons: if you're running optimized variants of 
MPI X, also run optimized variants of MPIs Y and Z.  And so on.
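
As a concrete starting point (paths here are placeholders for wherever you
install each stack): build the same benchmark, e.g. NetPIPE or the OSU
micro-benchmarks, once per stack with that stack's own wrapper compiler, and
run each with identical process counts and placement:

   shell$ /opt/openmpi/bin/mpicc -O2 -o osu_latency.ompi osu_latency.c
   shell$ /opt/openmpi/bin/mpirun -np 2 --host node1,node2 ./osu_latency.ompi

Then repeat with MVAPICH2's and Intel MPI's wrappers and launchers.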

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/

