This is the site you want (it's MPICH2, not MPICH1, but it's the same people):

    http://www.mcs.anl.gov/research/projects/mpich2/

Good luck!


On Apr 6, 2008, at 3:02 PM, brian janus wrote:
Brock,

Thanks much for the quick reply and information. I thought I might have been in the wrong place. :) After a Google search for that list, I came up with several options. Do you happen to have a site URL or other link for the list you're talking about? I want to make sure I'm on the right list.

Thanks very much! :)

Brian.

On Sun, Apr 6, 2008 at 1:53 PM, Brock Palen <bro...@umich.edu> wrote:
This is for MPICH's mpirun, not Open MPI's mpirun. You will need to direct questions to the MPICH team and mailing list.

Also be aware that if this is for MPICH-1.x, it is no longer developed; you should move to MPICH-2.x, or switch to another MPI stack such as Open MPI.

Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
bro...@umich.edu
(734)936-1985



On Apr 6, 2008, at 2:50 PM, brian janus wrote:
I'm new here, so forgive me if I ask any dumb questions. The first question I have concerns the mpirun script below.

My question is: what does the SYNCLOC=/bin/sync line in the script below do, and what complications could arise (if any) from disabling it by commenting out the line as #SYNCLOC=/bin/sync? In some code we are running, we have found that disabling this option allows high-priority jobs to complete in only a couple of seconds, whereas before they would take several minutes.

What does SYNCLOC do, and if it's disabled, what kind of problems might that cause?
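For reference, here is a rough sketch of what a guarded sync call behaves like. This is hypothetical illustration, not the actual mpirun.ch_p4 code: /bin/sync asks the kernel to flush dirty filesystem buffers to disk, which on a loaded machine or a slow/NFS filesystem can take minutes, consistent with the delay we saw. With the variable commented out, a guard like this would simply skip the call.

```shell
#!/bin/sh
# Hypothetical sketch: SYNCLOC points at /bin/sync, which flushes dirty
# filesystem buffers to disk before the job is launched (e.g., so a
# freshly built executable is visible to other NFS nodes). Commenting
# the assignment out leaves SYNCLOC empty, and the guard skips the call.
#SYNCLOC=/bin/sync

if [ -n "$SYNCLOC" ] && [ -x "$SYNCLOC" ] ; then
    "$SYNCLOC"     # flush buffers; can block for a long time on a busy FS
fi
echo "starting job"
```

The guard means disabling the line cannot itself cause an error at launch; the risk would be correctness on shared filesystems, not a script failure.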

#! /bin/sh

# MPIRUN for MPICH
# (Don't change the above line - we use it to identify MPICH mpirun as
# opposed to others (e.g., SGI's mpirun)
#
# This script tries to start jobs on whatever kind of machine you're on.
# Strategy - This program is built with a default device it uses in
# certain ways. The user can override this default from the command line.



#
# This sh script is designed to use other scripts to provide the commands
# to run each system, using the . filename.sh mechanism
#
# Debuggers should be handled by running
# mpirun.db.<debugger_name>
# e.g., mpirun.db.gdb or mpirun.db.xxgdb.
# This will allow users to add their own debuggers
# (with -debug=<debugger_name>)
#
# Set default variables
AUTOMOUNTFIX="sed -e s@/tmp_mnt/@/@g"
DEFAULT_DEVICE=ch_p4
RSHCOMMAND="/usr/bin/ssh"
SYNCLOC=/bin/sync     # <---------- WE DISABLED THIS WITH A #COMMENT#
CC="cc"
COMM=

GLOBUSDIR=@GLOBUSDIR@

CLINKER="cc"
prefix=/cluster/cairo/software/mpich-1.2.5.2
bindir=/cluster/cairo/software/mpich-1.2.5.2/bin
# This value for datadir is the default value setup by configure
datadir=/cluster/cairo/software/mpich-1.2.5.2/share

DEFAULT_MACHINE=ch_p4
DEFAULT_ARCH=LINUX

# Derived variables
MPIRUN_BIN=$bindir
MPIRUN_HOME=$MPIRUN_BIN
MPIVERSION="1.2.5 (release) of : 2003/01/13 16:21:53"

#set verbose
#
# Local routines

#
# End of routine

#
#
# Special, system specific values
#
# polling_mode is for systems that can select between polling and
# interrupt-driven operation. Currently, only IBM POE is so supported
# (TMC CMMD has some support for this choice of mode)
polling_mode=1

# Parse command line arguments
# The ultimate goal is to determine what kind of parallel machine this
# is we are running on. Then we know how to start jobs...

#
# Process common arguments (currently does ALL, but should pass unrecognized
# ones to called files)
#
hasprinthelp=1
. $MPIRUN_HOME/mpirun.args
argsset=1

#
# Jump to the correct code for the device (by pseudo machine)
#
mpirun_version=""
case $machine in
    ch_cmmd)
        mpirun_version=$MPIRUN_HOME/mpirun.ch_cmmd
        ;;
    ibmspx|ch_eui|ch_mpl)
        mpirun_version=$MPIRUN_HOME/mpirun.ch_mpl
        ;;
    anlspx)
        mpirun_version=$MPIRUN_HOME/mpirun.anlspx
        ;;
    ch_meiko|meiko)
        mpirun_version=$MPIRUN_HOME/mpirun.meiko
        ;;
    cray_t3d|t3d)
        mpirun_version=$MPIRUN_HOME/mpirun.t3d
        ;;
    ch_nc)
        mpirun_version=$MPIRUN_HOME/mpirun.ch_nc
        ;;
    paragon|ch_nx|nx)
        mpirun_version=$MPIRUN_HOME/mpirun.paragon
        ;;
    inteldelta)
        mpirun_version=$MPIRUN_HOME/mpirun.delta
        ;;
    i860|ipsc860)
        mpirun_version=$MPIRUN_HOME/mpirun.i860
        ;;
    p4|ch_p4|sgi_mp)
        mpirun_version=$MPIRUN_HOME/mpirun.ch_p4
        ;;
    gm|ch_gm|myrinet)
        mpirun_version=$MPIRUN_HOME/mpirun.ch_gm
        ;;
    execer)
        mpirun_version=$MPIRUN_HOME/mpirun.execer
        ;;
    ch_shmem|ch_spp|smp|convex_spp)
        # sgi_mp is reserved for the p4 version
        mpirun_version=$MPIRUN_HOME/mpirun.ch_shmem
        ;;
    ksr|symm_ptx)
        mpirun_version=$MPIRUN_HOME/mpirun.p4shmem
        ;;
    ch_tcp|tcp)
        mpirun_version=$MPIRUN_HOME/mpirun.ch_tcp
        ;;
    globus)
        mpirun_version=$MPIRUN_HOME/mpirun.globus
        ;;
    *)
        #
        # This allows us to add a device without changing the base mpirun
        # code
        if [ -x $MPIRUN_HOME/mpirun.$device ] ; then
            mpirun_version=$MPIRUN_HOME/mpirun.$device
        elif [ -x $MPIRUN_HOME/mpirun.$default_device ] ; then
            mpirun_version=$MPIRUN_HOME/mpirun.$default_device
            device=$default_device
        else
            echo "Cannot find MPIRUN machine file for machine $machine"
            echo "and architecture $arch ."
            if [ -n "$device" ] ; then
                echo "(Looking for $MPIRUN_HOME/mpirun.$device)"
            else
                echo "(No device specified.)"
            fi
            # . $MPIRUN_HOME/mpirun.default
            exit 1
        fi
        ;;
esac
exitstatus=1
if [ -n "$mpirun_version" ] ; then
    if [ -x $mpirun_version ] ; then
        # The mpirun script *must* set exitstatus (or exit itself)
        . $mpirun_version
    else
        echo "$mpirun_version is not available."
        exit 1
    fi
else
    echo "No mpirun script for this configuration!"
    exit 1
fi
exit $exitstatus
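The dispatch above works because each per-device script is sourced (`. $mpirun_version`) rather than executed as a child process, so it runs in the caller's shell and can set exitstatus for the parent. A minimal sketch of that pattern, with an illustrative file name rather than a real MPICH one:

```shell
#!/bin/sh
# Minimal sketch of the "source a sub-script" dispatch pattern used above.
# The sub-script name is hypothetical; the point is that sourcing with "."
# lets the sub-script assign variables in this shell, not a child shell.
exitstatus=1
sub=./mpirun.demo_device

# Create a toy per-device script that sets exitstatus in the caller.
cat > "$sub" <<'EOF'
# Because this file is sourced, this assignment affects the calling shell.
exitstatus=0
EOF

if [ -r "$sub" ] ; then
    . "$sub"          # sourced, not executed: shares the caller's variables
fi
rm -f "$sub"
echo "exitstatus=$exitstatus"     # prints "exitstatus=0"
```

If the sub-script were run as `sh $sub` instead, the assignment would happen in a child process and exitstatus here would still be 1.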
_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users

--
Jeff Squyres
Cisco Systems
