"path_to_slurm_lib" is the name of the directory which holds the
 SLURM libraries. Try something like this to figure out where to find
 the SLURM libraries:
 
 $ locate libslurm.so
 /opt/slurm/lib64/libslurm.so
 /opt/slurm/lib64/libslurm.so.26
 /opt/slurm/lib64/libslurm.so.26.0.0
 $
 
 On my system, path_to_slurm_lib is /opt/slurm/lib64, so the compile
 command would be
 
 $ mpicc -L/opt/slurm/lib64 -lpmi ...
 
  (You might also check that directory to ensure that libpmi.so is
 there.)
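
A complete compile-and-run sequence, assuming the /opt/slurm/lib64 path
found above (the output name "hello" and the task count are just
examples, not from the thread):

```shell
# Link against SLURM's PMI library. The -lpmi comes after the source
# file so the linker can resolve the PMI symbols the object references.
mpicc HelloWorld.c -L/opt/slurm/lib64 -lpmi -o hello

# Launch directly with srun; PMI handles the process-management handshake.
srun -n4 --mpi=none ./hello
```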
 
 Andy
 
On 11/27/2013 12:22 AM, Arjun J Rao wrote, in "Re: [slurm-dev] Re:
Exact way to compile and run MVAPICH2 job with SLURM":
> How exactly do I link "SLURM's implementation of the PMI library"
> with my executable?
>
> Which path must I give?
>
> The documentation just mentions mpicc -L<path_to_slurm_lib> -lpmi ...
>
> I don't understand what exactly I should write in <path_to_slurm_lib>.
> On Tue, Nov 26, 2013 at 7:06 PM, Jonathan Perkins
> <[email protected]> wrote:
>> This depends on your cluster's setup.  If you built your MPI
>> application on a shared filesystem that is available on each node,
>> then you do not need to broadcast the executables around and should
>> be able to use the srun command directly.
>>
>> One other thing to keep in mind is that the supporting libraries
>> (such as MVAPICH2) also need to be installed or available on each
>> of the nodes (assuming you are using shared libraries).
>>
>> If you are having trouble getting MPI running with MVAPICH2, I
>> suggest that you build a debug version without slurm support
>> (initially) and use salloc --> scontrol show hostnames > hosts -->
>> mpirun_rsh.  This should provide more output on failures.
>>
>> http://mvapich.cse.ohio-state.edu/support/user_guide_mvapich2-2.0b.html#x1-1230009.1.11
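
That salloc --> scontrol --> mpirun_rsh sequence would look something
like this (the node and task counts and the "hosts" filename are
illustrative, not from the thread):

```shell
# Grab an interactive shell inside a 2-node allocation.
salloc -N2 bash

# Expand the allocation's compressed nodelist into one hostname per line.
scontrol show hostnames > hosts

# Launch with MVAPICH2's own launcher instead of srun.
mpirun_rsh -np 4 -hostfile hosts ./a.out
```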
>> On Tue, Nov 26, 2013 at 6:01 AM, Arjun J Rao
>> <[email protected]> wrote:
>>> To run a job with MVAPICH2 under SLURM, I configured SLURM with
>>> ./configure --with-pm=no --with-pmi=slurm
>>>
>>> Then, I have a doubt about compiling MPI jobs. The SLURM
>>> documentation mentions that we need to link the slurm lib with
>>> the executable during the compilation step:
>>> mpicc -L<path to slurm lib> -lpmi HelloWorld.c
>>>
>>> I used /usr/local/lib/slurm as the path to the SLURM lib and got
>>> the executable a.out.
>>>
>>> After compilation, can I run the executables directly using the
>>> command
>>> srun -n4 --mpi=none a.out
>>> or do I have to use
>>> salloc -N2 bash
>>> sbcast the executables around
>>> srun ...
>>>
>>> Which is the correct way?
>> --
>> Jonathan Perkins
>> http://www.cse.ohio-state.edu/~perkinjo
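
Per Jonathan's answer, either workflow works; which one you need
depends on whether the executable is visible on every node. A sketch
of the two (the /tmp staging path and counts are illustrative):

```shell
# Shared filesystem: every node already sees a.out, so run it in place.
srun -n4 --mpi=none ./a.out

# No shared filesystem: allocate, stage the binary to each node, then run.
salloc -N2 bash
sbcast a.out /tmp/a.out
srun -n4 --mpi=none /tmp/a.out
```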