Dear Professor Blaha,

Thanks so much for your comment! With these two simple changes, I got 
parallelization working on our cluster for the first time. 

Other users who want to go parallel can consult my compiling and linking 
options as well; they work well on Intel systems. 
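
For anyone wanting to reproduce this, here is a minimal sketch of the 
corrected job script (the node names and processor counts come from our 
.machines file quoted below; the bash/heredoc setup is only an illustration, 
adapt it to your own queueing system):

#!/bin/bash
# Write .machines with only the allowed lines: "1:host:N" k-point lines,
# one "lapw0:" line, and granularity/extrafine (no lapw1:/lapw2: lines).
cat > .machines <<EOF
1:r1i0n0:8
1:r1i0n1:8
lapw0: r1i0n0:8 r1i0n1:8
granularity:1
extrafine:1
EOF

# Call the WIEN2k driver script directly; it invokes mpirun itself for
# lapw0/1/2 according to .machines -- do not wrap the script in mpirun.
runsp_lapw -p -ec 0.0001 -cc 0.0001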

Cheers,
Wei
On Oct 11, 2010, at 12:45 AM, Peter Blaha wrote:

> At least two big mistakes:
> 
>> We wrote a submission script to create the .machines file and calculate the 
>> number of processors allocated ($nprocs) on the fly and start the calculation
>> with: /mpirun -np $nprocs runsp_lapw -p -ec 0.0001 -cc 0.0001/. We enabled 
>> hybrid parallelization (i.e., both k-point and MPI) in this case.
> 
> You cannot submit a shell-script (runsp_lapw) with mpi.
> 
> Your job must just call runsp ...... and the script will call mpirun when 
> starting lapw0/1/2 (depending on the .machines file).
> 
> 
> 
>> The .machines file created reads:
>> 1:r1i0n0:8
>> 1:r1i0n1:8
>> lapw0: r1i0n0:8 r1i0n1:8
>> lapw1: r1i0n0:8 r1i0n1:8
>> lapw2: r1i0n0:8 r1i0n1:8
>> granularity:1
>> extrafine:1
> 
> Please check the UG. The lines starting with lapw1 and lapw2 must be omitted. 
> Only the
> lapw0: line and the
> 1:... lines are allowed (besides the granularity, extrafine, or vectorsplit 
> lines; see the UG).
> 
> 
> -- 
> -----------------------------------------
> Peter Blaha
> Inst. Materials Chemistry, TU Vienna
> Getreidemarkt 9, A-1060 Vienna, Austria
> Tel: +43-1-5880115671
> Fax: +43-1-5880115698
> email: pblaha at theochem.tuwien.ac.at
> -----------------------------------------
