[Wien] .machines files and pbs system

2011-11-08 Thread Bin Shao
Dear all,

I use the PBS system to submit jobs to a cluster in k-point
parallelization mode. I intend to do a non-scf calculation (spin-orbit
coupling) after an scf calculation. For the scf calculation, the
.machines file was created by the script from the WIEN2k website. I then
copied the scf directory, including its old .machines file, to a new
directory for the non-scf calculation, using the same submission script
with the scf WIEN2k command replaced by the non-scf one. When I submit
the new job, the PBS system shows the new job running on new nodes, but
it actually runs on the old nodes.

The following is the job submission script:

# setting up local SCRATCH
#setenv SCRATCH /tmp/$PBS_JOBID

# creating .machines
cat $PBS_NODEFILE > .machines_current
set aa=`wc .machines_current`
echo '#' > .machines

# run lapw1/2 using k-point parallel
set i=1
while ($i <= $aa[1])
  echo -n '1:' >> .machines
  set nn = `head -$i .machines_current | tail -1`
  echo $nn >> .machines
#  rsh $nn mkdir -p $SCRATCH
  @ i++
end
echo 'granularity:1' >> .machines
echo 'extrafine:1' >> .machines

# setup $delay and $sleepy
setenv LAPW_DELAY  1
setenv LAPW_SLEEPY  1

# Wien2k command (scf job)
runsp_lapw -p -i 100 -ec 0.01 -NI
--------------------------------------------------
# Wien2k command (non-scf job)
x lapwso -up -p -c
x lapw2 -up -p -so -c
x lapw2 -dn -p -so -c
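
For reference, the loop above writes one 1:hostname line per entry in
$PBS_NODEFILE (i.e., one per allocated core), so with two hypothetical
two-core nodes the generated .machines file looks like:

#
1:node01
1:node01
1:node02
1:node02
granularity:1
extrafine:1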

So how can I submit the new (non-scf) job to different nodes?

Thank you in advance. Any suggestions will be appreciated!

Best regards,

-- 
Bin Shao, Ph.D. Candidate
College of Information Technical Science, Nankai University
94 Weijin Rd. Nankai Dist. Tianjin 300071, China
Email: bshao at mail.nankai.edu.cn



[Wien] .machines files and pbs system

2011-11-08 Thread Peter Blaha
The problem is the following:

When doing an scf calculation (run_lapw -p), the
x lapw1 -p step uses the .machines file for parallelization
(usually created on the fly by the PBS job), but at the same time
it automatically creates a .processes file, which is used for the
parallelization of the later steps of the scf cycle (lapwso, lapw2).
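
For illustration, a .processes file written by the lapw1para script
looks roughly like the following (node names are hypothetical, and the
exact layout may differ between WIEN2k versions, so check a file left
over from your own scf run, as suggested below):

init:node01:
init:node02:
1 : node01 :  1 : 24 : 1
2 : node02 :  25 : 24 : 2

Here the fields of the data lines should be the process number, the
host, the first k-point and the number of k-points handled by that
process, and the host index.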

This scheme allows you to modify the .machines file DURING a running
scf calculation.

Thus, what you have to do is modify your PBS script so that it creates
the .processes file instead of the .machines file.

Just examine an existing file; its content should be clear from that.
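
Since the non-scf steps must keep the same k-point split that lapw1
produced during the scf run (the case.energy_1, case.vector_1, ...
files already exist for that split), one way to do this is to reuse the
old .processes file and only substitute the old host names with the
nodes PBS assigned to the new job. A minimal csh sketch for the PBS
script, assuming the field layout shown above, that the new job
requests the same number of nodes, and that old and new node names do
not overlap:

# back up the .processes file left over from the scf run
cp .processes .processes_scf

# old hosts from the data lines, deduplicated in order of appearance
set oldnodes = (`grep -v '^init' .processes_scf | awk -F: '{print $2}' | awk '!seen[$1]++'`)

# hosts PBS assigned to this (non-scf) job, deduplicated the same way
set newnodes = (`awk '!seen[$1]++' $PBS_NODEFILE`)

# rewrite .processes, mapping the i-th old host to the i-th new host
set i = 1
while ($i <= $#oldnodes)
  sed "s/$oldnodes[$i]/$newnodes[$i]/g" .processes > .processes.tmp
  mv .processes.tmp .processes
  @ i++
end

After this substitution, x lapwso -up -p -c and the two lapw2 steps
should pick up the new nodes from .processes.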

Am 08.11.2011 12:07, schrieb Bin Shao:
> [quoted text trimmed]

-- 
-------------------------------------------
Peter Blaha
Inst. Materials Chemistry, TU Vienna
Getreidemarkt 9, A-1060 Vienna, Austria
Tel: +43-1-5880115671
Fax: +43-1-5880115698
email: pblaha at theochem.tuwien.ac.at
-------------------------------------------


[Wien] .machines files and pbs system

2011-11-09 Thread Bin Shao
Dear Prof. Peter Blaha,

Thank you for your reply!

Best,

On Wed, Nov 9, 2011 at 12:23 AM, Peter Blaha wrote:

> [quoted text trimmed]



-- 
Bin Shao, Ph.D. Candidate
College of Information Technical Science, Nankai University
94 Weijin Rd. Nankai Dist. Tianjin 300071, China
Email: bshao at mail.nankai.edu.cn