Patch looks good.  Please also update the CHANGES file (that file
contains bullets for changes made since the core testers branch).

On Sep 15, 2008, at 6:15 PM, Tim Mattox wrote:

Hello,
Attached is a patchfile for the mtt trunk that adds a
--local-scratch <dir_name>
option to client/mtt.  You can also specify something like
this in your [MTT] ini section:
local_scratch = &shell("echo /tmp/`whoami`_mtt")
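
For example, you could invoke it like this (the paths are just
illustrative, and I've left out the other mtt options):

  client/mtt --scratch $HOME/mtt-scratch --local-scratch /tmp/`whoami`_mtt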

This local-scratch directory is then used for part of the --mpi-install
phase to speed up your run.  Specifically, the MPI source code is
untarred there, and configure, make all, and make check are run there.
Then, when make install is invoked, the MPI is installed into the
usual place, just as if you hadn't used --local-scratch.  If you don't
use --local-scratch, the builds occur in the usual place, as before.
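
Conceptually, the steps with --local-scratch look something like the
following (just a sketch; the tarball name and paths are made up):

  # unpack and build in the fast local scratch area
  cd /tmp/`whoami`_mtt
  tar xzf openmpi-X.Y.Z.tar.gz
  cd openmpi-X.Y.Z
  ./configure --prefix=/path/to/usual/scratch/install
  make all
  make check
  # the install still lands in the usual --scratch location
  make install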

For the clusters at IU that seem to have slow NFS home directories,
this cuts the --mpi-install phase time in half.

The downside is that if the MPI build fails, your build directory is out
on some compile-node's /tmp and is harder to go debug.  But, since
MPI build failures are now rare, this should make for quicker turnaround
in the general case.

I think I adjusted the code properly for the vpath build case, but I've
never used that, so I haven't tested it.  Also, I adjusted the free disk
space check code.  Right now it only checks the free space on --scratch,
and won't detect if --local-scratch is full.  If people really care, I
could make it check both later.  But for now, if your /tmp is full
you probably have other problems to worry about.
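
In the meantime, it's easy enough to eyeball both areas by hand (again,
the paths here are just illustrative):

  # check free space on the regular scratch and the local scratch
  df -h $HOME/mtt-scratch /tmp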

Comments?  Can you try it out?  If I get no objections, I'd like to
put this into the MTT trunk this week.
--
Tim Mattox, Ph.D. - http://homepage.mac.com/tmattox/
tmat...@gmail.com || timat...@open-mpi.org
I'm a bright... http://www.the-brights.net/
<mtt-local-scratch.patch>


--
Jeff Squyres
Cisco Systems
