Re: [Wien] qtl: error reading parallel vectors

2020-10-24 Thread Peter Blaha
qtlpara is not yet able to use parallel vectors from scratch directories on 
different nodes; so far it requires that all vector files be directly 
accessible.


Both   x lapw2 -p -qtl   and also   x qtl -p   actually run in single 
mode; the -p directs them to read the .processes file and to use all 
the parallel vectors (case.vector_1, .._2, ...).


When using a local SCRATCH directory, the vectors are stored there and 
are ONLY ACCESSIBLE on the corresponding node. Thus it works when using a 
single node (all parallel vector files are accessible on that node), but 
not when using 2 or more nodes.


lapw2para can overcome this limitation, since it has a line calling 
vec2old_lapw, which uses scp to copy all vector files from the different 
nodes to the local machine:


qtl:
echo "calculating QTL's from parallel vectors"
vec2old_lapw -p -local $so -$updn  # <---
$exe $def.def $maxproc

In qtlpara, this line is missing:

echo "calculating QTL's from parallel vectors"
$exe $def.def $maxproc


Please insert the vec2old_lapw line into qtlpara just before the $exe line.
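
For reference, a minimal sketch of the patched block in qtlpara_lapw, 
assuming it defines $so and $updn the same way lapw2para_lapw does 
(please verify against your own $WIENROOT copy):

qtl:
echo "calculating QTL's from parallel vectors"
vec2old_lapw -p -local $so -$updn  # copy the parallel vector files from the remote scratch directories
$exe $def.def $maxproc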

On 24.10.2020 at 22:30, Christian Søndergaard Pedersen wrote:

[quoted message trimmed; the full text appears in Christian's own message later in this thread]

Re: [Wien] qtl: error reading parallel vectors

2020-10-24 Thread Laurence Marks
I think you are doing something wrong in your job submission. I suggest
that you talk to your sysadmin, as there are too many ways for your
calculations to have gone wrong; it could take people on the list weeks
or more of guessing.

It should be possible to assign nodes interactively and have them available
in .machines. Your response that the simple commands fail with "pbsssh:
command not found" is very odd. The command "x lapw0 -p" is a very basic
one, and if this fails for multiple cores something is very wrong.

---
Prof Laurence Marks
"Research is to see what everyone else has seen, and to think what nobody
else has thought", Albert Szent-Gyorgi
www.numis.northwestern.edu

On Sat, Oct 24, 2020, 15:30 Christian Søndergaard Pedersen wrote:

> [quoted message trimmed; the full text appears in Christian's own message below]

Re: [Wien] qtl: error reading parallel vectors

2020-10-24 Thread Christian Søndergaard Pedersen
Hello Gavin


Thanks for your reply, and apologies for my tardiness.


[1] All my calculations are run in MPI-parallel on our HPC cluster. I cannot 
execute any 'x lapw[0,1,2] -p' command in the terminal (on the cluster login 
node); this results in 'pbsssh: command not found'. However, submitting via the 
SLURM workload manager works fine. In all my submit scripts, I specify 'setenv 
SCRATCH /scratch/$USER', which is the proper location of scratch storage on our 
HPC cluster.


[2] Without having tried your example for diamond, I can report that 'run_lapw 
-p' followed by 'x qtl -p -telnes' works without problems for a single cell of 
Vanadium dioxide. However, for other systems I get the error I specified. The 
other systems (1) are larger, and (2) use two CPUs instead of a single CPU 
(the .machines files are modified accordingly).

Checking the qtl.def file for the calculation that _did_ work, I can see that 
the line specifying '/scratch/chrsop/VO2.vectordn' is _also_ present here, so 
this is not to blame. This leaves me baffled as to what the error can be - as 
far as I can tell, I am trying to perform the exact same calculation for 
different systems. I thought maybe insufficient scratch storage could be to 
blame, but this would most likely show up in the 'run_lapw' cycles (I believe).


[3] I am posting here the difference between qtlpara and lapw2para:

$ grep "single" $WIENROOT/qtlpara_lapw
testinput .processes single
$ grep "single" $WIENROOT/lapw2para_lapw
testinput .processes single
single:
echo "running in single mode"

... if this is wrong, I kindly request advice on how to fix it, so I can pass 
it on to our software maintenance guy. If there's anything else I can try, 
please let me know.

Best regards
Christian





From: Wien  on behalf of Gavin Abo 

Sent: 21 October 2020 07:02:01
To: wien@zeus.theochem.tuwien.ac.at
Subject: Re: [Wien] qtl: error reading parallel vectors


[quoted message trimmed; the full text appears in Gavin's 2020-10-20 message at the end of this thread]

Re: [Wien] qtl: error reading parallel vectors

2020-10-24 Thread Gavin Abo
Regarding [1], I did expect that you would have to submit the commands 
within your job script via the SLURM workload manager on your system, 
with something like [5,6]:



 sbatch my_job_script.job


 or by whatever method you have to use on your system, where the 
commands at [7] are placed in the job file, such as:



 my_job_script.job

 -

 #!/bin/bash

 # ...

 run_lapw -p
 x qtl -p -telnes
 x telnes3

 -
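
A fuller job file for a SLURM system would also need resource directives 
at the top. A minimal sketch, assuming a bash job script; the job name, 
node count, tasks per node, and time limit below are placeholder values:

 #!/bin/bash
 #SBATCH --job-name=wien2k_qtl
 #SBATCH --nodes=2
 #SBATCH --ntasks-per-node=1
 #SBATCH --time=01:00:00

 # scratch location as described in your submit scripts
 export SCRATCH=/scratch/$USER

 run_lapw -p
 x qtl -p -telnes
 x telnes3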


    In my case, I don't have SLURM, so I'm unable to do any testing in 
that environment.  Maybe someone else on the mailing list has a SLURM 
system and can check whether they encounter the same problem that you are 
having.



    [5] 
https://www.hpc2n.umu.se/documentation/batchsystem/basic-submit-example-scripts


    [6] https://doku.lrz.de/display/PUBLIC/WIEN2k

    [7] 
https://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/msg20597.html



Regarding [2], good to read that mpi parallel with "x qtl -p -telnes" 
works fine on your system with Vanadium Dioxide (VO2). If you have 
control over which nodes the calculation runs on, does the VO2 case run 
fine on your 1st node (e.g., x073 [8]) with multiple cores of a single 
CPU, and then on the 2nd node (e.g., x082) with multiple cores of a 
single CPU?  I have read at [9] that some schedule managers 
automatically assign the nodes on the fly, such that the user might have 
no control in some cases over which nodes the job runs on.  Does the 
VO2 case run fine with mpi parallel using 1 processor core on node 1 and 
1 processor core on node 2? If you're able to control that, it may help 
narrow down the problem (a possible .machines file for such a test is 
sketched after the links below).



    [8] 
https://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/msg20617.html


    [9] http://susi.theochem.tuwien.ac.at/reg_user/faq/pbs.html


Regarding [3], the output you posted looks as expected.  So nothing 
wrong with that.



    In the past, I posted in the mailing list some things that I found 
helpful for troubleshooting parallel issues, but you would have to 
search the mailing list to find them.  I believe a couple of them may 
have been at the following two links:



  [10] 
https://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/msg17973.html


  [11] 
http://zeus.theochem.tuwien.ac.at/pipermail/wien/2018-April/027944.html



Lastly, I have now tried a WIEN2k 19.2 calculation using mpi parallel on 
my system with the struct file at 
https://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/msg20645.html .



It looks like it ran fine when it was set to run on two of the four 
processors on my system:



username@computername:~/wiendata/diamond$ ls ~/wiendata/scratch
username@computername:~/wiendata/diamond$ ls
diamond.struct
username@computername:~/wiendata/diamond$ init_lapw -b
...
username@computername:~/wiendata/diamond$ cat $WIENROOT/parallel_options
setenv TASKSET "no"
if ( ! $?USE_REMOTE ) setenv USE_REMOTE 1
if ( ! $?MPI_REMOTE ) setenv MPI_REMOTE 1
setenv WIEN_GRANULARITY 1
setenv DELAY 0.1
setenv SLEEPY 1
username@computername:~/wiendata/diamond$ cat .machines
1:localhost:2
granularity:1
extrafine:1
username@computername:~/wiendata/diamond$ run_lapw -p
...
in cycle 11    ETEST: .000145755000   CTEST: .0033029
hup: Command not found.
STOP  LAPW0 END
STOP  LAPW1 END

real    0m6.744s
user    0m12.679s
sys    0m0.511s
STOP LAPW2 - FERMI; weights written
STOP  LAPW2 END

real    0m1.123s
user    0m1.785s
sys    0m0.190s
STOP  SUMPARA END
STOP  CORE  END
STOP  MIXER END
ec cc and fc_conv 1 1 1

>   stop
username@computername:~/wiendata/diamond$ cp 
$WIENROOT/SRC_templates/case.innes diamond.innes

username@computername:~/wiendata/diamond$ x qtl -p -telnes
running QTL in parallel mode
calculating QTL's from parallel vectors
STOP  QTL END
6.5u 0.0s 0:06.77 98.3% 0+0k 928+8080io 4pf+0w
username@computername:~/wiendata/diamond$ cat diamond.inq
0 2.2000
1
1 99 1 0
4 0 1 2 3
username@computername:~/wiendata/diamond$ x telnes3
STOP TELNES3 DONE
3.2u 0.0s 0:03.39 98.8% 0+0k 984+96io 3pf+0w
username@computername:~/wiendata/diamond$ ls -l ~/wiendata/scratch
total 624
-rw-rw-r-- 1 username username  0 Oct 24 15:40 diamond.vector
-rw-rw-r-- 1 username username 637094 Oct 24 15:43 diamond.vector_1
-rw-rw-r-- 1 username username  0 Oct 24 15:44 diamond.vectordn
-rw-rw-r-- 1 username username  0 Oct 24 15:44 diamond.vectordn_1


On 10/24/2020 2:30 PM, Christian Søndergaard Pedersen wrote:

[quoted message trimmed; the full text appears in Christian's message earlier in this thread]




Re: [Wien] qtl: error reading parallel vectors

2020-10-20 Thread Gavin Abo
I'm not sure about the physics of the following WIEN2k 19.2 parallel 
calculation (with all patches at [1] applied), but mechanically the "x 
qtl -p -telnes" seems to have run without error.



I typically have SCRATCH in my .bashrc set to "./" but used another 
location, "/home/username/wiendata/scratch", as seen below. Does a simple 
k-point parallel calculation like the one below work on your system?  I 
haven't tried mpi parallel yet.  On the other hand, I have noticed a 
possible issue: if one forgets to set up a .machines file and tries 
to run a parallel calculation, qtlpara_lapw seems to fail to switch 
over to the serial calculation mode, as shown under [2] below.  If one 
compares, for example, lapw2para_lapw and qtlpara_lapw, as illustrated by 
[3] below, qtlpara_lapw may be missing some additional code needed to 
get that to work.



username@computername:~/wiendata/diamond$ grep SCRATCH ~/.bashrc
export SCRATCH=/home/username/wiendata/scratch
username@computername:~/wiendata/diamond$ ls
diamond.struct
username@computername:~/wiendata/diamond$ init_lapw -b
...
  init_lapw finished ok
username@computername:~/wiendata/diamond$ cat .machines
1:localhost
1:localhost
granularity:1
extrafine:1
username@computername:~/wiendata/diamond$ run_lapw -p
...
in cycle 11    ETEST: .000145755000   CTEST: .0033029
hup: Command not found.
STOP  LAPW0 END
STOP  LAPW1 END
STOP  LAPW1 END
STOP LAPW2 - FERMI; weights written
STOP  LAPW2 END
STOP  LAPW2 END
STOP  SUMPARA END
STOP  CORE  END
STOP  MIXER END
ec cc and fc_conv 1 1 1

>   stop
username@computername:~/wiendata/diamond$ cp 
$WIENROOT/SRC_templates/case.innes diamond.innes

username@computername:~/wiendata/diamond$ x qtl -p -telnes
running QTL in parallel mode
calculating QTL's from parallel vectors
STOP  QTL END
6.4u 0.1s 0:06.59 100.0% 0+0k 0+8024io 0pf+0w
username@computername:~/wiendata/diamond$ cat diamond.inq
0 2.2000
1
1 99 1 0
4 0 1 2 3
username@computername:~/wiendata/diamond$ x telnes3
STOP TELNES3 DONE
3.3u 0.0s 0:03.39 99.7% 0+0k 0+96io 0pf+0w


[1] https://github.com/gsabo/WIEN2k-Patches/tree/master/19.2


[2] Error when qtlpara_lapw tries to switch to single mode during "x qtl 
-p -telnes":



username@computername:~/wiendata/diamond$ cat .machine
cat: .machine: No such file or directory
username@computername:~/wiendata/diamond$ run_lapw -p
...
in cycle 11    ETEST: .000145755000   CTEST: .0033029
hup: Command not found.
STOP  LAPW0 END
STOP  LAPW1 END
STOP  LAPW2 END
STOP  CORE  END
STOP  MIXER END
ec cc and fc_conv 1 1 1

>   stop
username@computername:~/wiendata/diamond$ cp 
$WIENROOT/SRC_templates/case.innes diamond.innes

username@computername:~/wiendata/diamond$ x qtl -p -telnes
single: label not found.
0.0u 0.0s 0:00.01 0.0% 0+0k 0+0io 0pf+0w
error: command   /home/username/WIEN2k/qtlpara qtl.def   failed


[3] Grep difference between qtlpara_lapw and lapw2para_lapw:


username@computername:~/wiendata/diamond$ grep "single" 
$WIENROOT/qtlpara_lapw

testinput .processes single
username@computername:~/wiendata/diamond$ grep "single" 
$WIENROOT/lapw2para_lapw

testinput .processes single
single:
echo "running in single mode"


On 10/20/2020 12:24 PM, Christian Søndergaard Pedersen wrote:


Greetings


I am trying to run qtl in order to calculate the partial charge 
densities for the telnes3 program. The following fails, generating the 
error in the subject line:



    run_lapw -p

    x qtl -p -telnes


Meanwhile, the following works:


    run_lapw -p

    x lapw2 -p -qtl


However, lapw2 does not generate the terms necessary for telnes3, 
which subsequently fails with the error message "isplit needs to be 99".


From reading this thread:

https://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/msg05792.html 




... I understand that it may be related to $SCRATCH. When I look 
inside lapw1.def and lapw2.def, I find:



10,'/scratch/chrsop/case.vector', 'unknown','unformatted',9000


... while qtl.def contains the following:


 9,'/scratch/chrsop/case.vector', 'unknown','unformatted',9000
10,'/scratch/chrsop/case.vectordn', 'unknown','unformatted',9000


Note that lapw1 and lapw2 run smoothly. Presumably, qtl goes looking 
for case.vectordn, which is not generated during run_lapw. I tried 
deleting the offending line in qtl.def and running:



    x lapw1 -p

    x qtl -p -telnes


... but this failed for the same reason as before, and when the job 
finished, qtl.def once again had line 10 as shown above. My case.inq 
file looks like:



0 8.73846153846153846153
1
1 99 1 0
4 0 1 2 3

Any help in solving this matter would be greatly appreciated.


Best regards

Christian

diamond
F   LATTICE,NONEQUIV.ATOMS:  1 227_Fd-3m   
MODE OF CALC=RELA unit=ang