I should also add some clarifications, because there is no .process or .process2 file.

For parallel calculations the main file of interest is

.machines

This file is used as input by lapw0para and lapw1para.

However, lapw1para produces

.processes

because you may have more parallel jobs than specified in .machines (due to a non-integer k-point/core ratio), and in particular because we explicitly allow .machines to be modified during run_lapw (changing the parallelization by hand for the next iterations in case a machine on a cluster becomes overloaded).

The following parallel scripts (lapwso, lapw2) therefore read from .processes (and not from .machines). Also vec2old and vec2pratt need .processes, and this may eventually cause problems, although if you change the number of cores or use a non-local $SCRATCH, it cannot do -it at the very beginning anyway.

This is (at least to some extent) documented in the UG (search e.g. for "processes").

These are usually the only two files of interest for a user.
There are no .process or .process2 files; other .machineXX or .processesXX files are internal only.

I sometimes use PBS scripts which do generate .processes, but the easiest solution would be to add a flag to lapw1para so that it does not execute lapw1 but only generates the .processes file.



On 06/04/2013 02:42 PM, Laurence Marks wrote:
Let me add a couple of clarifications to what Peter suggested, since I
often get caught out by the ".process" and ".process2" files. I just did
a quick search of the UG and they do not appear to be described
(suggestion to Peter to add them). Their format is somewhat
self-explanatory (although they do not seem to be constructed to be
very human-readable, i.e. there are no comments on the right).

The shell scripts create these files during a -p run so that, for
instance, lapw2 can run in parallel on the same nodes that lapw1 used,
and therefore $SCRATCH and other files which might only be in temporary
storage remain consistent. When you run under PBS or other similar
job-control systems, the nodes you use are dynamically allocated and
are then stored in these files. If you then run a second job or an
interactive task to do some analysis, you are likely to be allocated a
different set of nodes, which can then create problems.

If you are running interactively, you can adjust these files for the
nodes that you have. Alternatively, you can do a single -p pass,
although unfortunately -it might switch to non-iterative mode because
the setup uses the .process file to cp the old vector file.
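A minimal sketch of such an adjustment, assuming it amounts to substituting a stale node name with one from the current allocation (the demo file below is a toy, not the real .processes format; under PBS/Torque the currently allocated nodes can be read from $PBS_NODEFILE):

```shell
# Hypothetical helper: replace one node name with another in a parallel
# control file. Word boundaries (\b) avoid turning n001 into part of n0011.
swap_node() {   # usage: swap_node <file> <old-node> <new-node>
    sed -i "s/\b$2\b/$3/g" "$1"
}

# Toy demonstration -- NOT the real .processes format:
printf 'lapw1: n001 n001\n' > demo.processes
swap_node demo.processes n001 n005
cat demo.processes   # lapw1: n005 n005
```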

I suspect that it may not be too hard to construct a shell (or csh)
script to set these files up properly for a given .machines file by
extracting the relevant section from lapw1para; perhaps a good
mini-project for someone.

On Tue, Jun 4, 2013 at 7:04 AM, Peter Blaha
<pbl...@theochem.tuwien.ac.at> wrote:
-it has no effect for lapw2, since there is no diagonalization.

The problem is most likely that you need a proper .processes file to
run x lapw2 -p. Most PBS scripts create only .machines; run_lapw will
then generate .processes later on.

My suggestion: first check whether you have the vector files in "case":

cd /home/my_username/wien2k/case
ls -alsrt *vector*

If they look OK (check size and date!),

use   join_vectorfiles

to combine them into a single "non-parallel" vector file, and then
submit   x lapw2 -qtl   (without -p).

Note: in "QTL mode", lapw2 always runs on a single core; the -p option
is only used to let the code know about the parallel vector files.


On 06/04/2013 12:06 PM, Yundi Quan wrote:
Thanks to Stefaan and Michael for the prompt replies.

My $SCRATCH variable is set to ./ in my .bashrc file. I forgot to
mention that I used x lapw2 -qtl -p -it rather than x lapw2 -qtl -p.
The -it option tells WIEN2k to use iterative diagonalization, which
should not be a problem. uplapw2_1.def contains the line
10,'./case.vectorup_1', 'unknown','unformatted',9000

I think this line tells lapw2 where to find the vector file. But
again, ./ seems to resolve to my home directory rather than my working
directory.
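As a side note, ./ is not intrinsically the home directory: it resolves against whatever directory the process starts in, and a fresh remote shell (as spawned by ssh/scp on another node) starts in $HOME. A small demonstration of the first half (the directory name is arbitrary):

```shell
# "./" resolves relative to the current working directory of the process,
# so SCRATCH=./ points at different places depending on where each
# command starts (e.g. $HOME for a fresh ssh/scp session on another node).
mkdir -p /tmp/scratch_demo
cd /tmp/scratch_demo
SCRATCH=./
resolved=$(cd "$SCRATCH" && pwd -P)
echo "$resolved"
```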

I tried making a new scratch directory and setting $SCRATCH to that
directory, as suggested by Michael. I hope it works.



Yundi



On Tue, Jun 4, 2013 at 2:34 AM, Yundi Quan <yq...@ucdavis.edu> wrote:


---------- Forwarded message ----------
From: Yundi Quan <quanyu...@gmail.com>
Date: Tue, Jun 4, 2013 at 2:19 AM
Subject: [Wien] scp error
To: A Mailing list for WIEN2k users <wien@zeus.theochem.tuwien.ac.at>


Hi,
I'm working on a cluster of 8 quad-core nodes. The nodes are called
n001, n002, n003, n004, ... I use the Torque PBS queue system. When I
submit a job using a bash file, the default directory is always my
home directory. Therefore, at the beginning of my bash file, I always
add a 'cd' line. This works for scf calculations.
However, when I use x lapw2 -qtl -band -p to calculate the band
structure, or x lapw2 -qtl -p to calculate the DOS, I always encounter
the following error message:
scp: .//case.vector_1 not found
scp: .//case.vector_2 not found
...

It seems scp is looking for case.vector_1 in my home directory rather
than my working directory, even though I've added the line 'cd
/home/my_username/case' in my bash file. This problem only occurs when
using x lapw2 -qtl -p. I used to do the scf calculation in serial and
then use x lapw2 -qtl, but I'm wondering whether there is a workaround.
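A hedged sketch of one workaround: set $SCRATCH to an absolute path in the job script instead of a relative ./ (the paths and variable names below are placeholders; under Torque, $PBS_O_WORKDIR holds the directory the job was submitted from):

```shell
# Sketch of a job-script prologue using an absolute scratch path.
# Under PBS/Torque one would typically start with:  cd "$PBS_O_WORKDIR"
CASEDIR=$PWD                 # placeholder for e.g. /home/my_username/case
export SCRATCH="$CASEDIR"    # absolute path: scp resolves it identically
                             # on every node, unlike "./"
```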


Thanks.
_______________________________________________
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html



--

                                        P.Blaha
--------------------------------------------------------------------------
Peter BLAHA, Inst.f. Materials Chemistry, TU Vienna, A-1060 Vienna
Phone: +43-1-58801-165300             FAX: +43-1-58801-165982
Email: bl...@theochem.tuwien.ac.at    WWW:
http://info.tuwien.ac.at/theochem/
--------------------------------------------------------------------------




