Re: [Wien] Errors in mini

2014-04-04 Thread Peter Blaha
The message does not say anything about the reason. You have to search 
for other messages with more information.


My suggestion:  MSR1a is MUCH more stable for an inexperienced user than 
min_lapw, which has problems when its "history" contains a "bad point".


I suggest:

cp case.inm case.inm_msr1a
edit case.inm_msr1a and put MSR1a (instead of MSR1)

in your optimize.job script:

comment out the line containing   min_lapw
instead, activate again   runsp -orb -fc 1 -cc 0.001
and insert the following line just BEFORE the runsp line:
cp case.inm_msr1a case.inm
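For convenience, the two manual steps above can also be done from the shell (a sketch; it assumes the switch MSR1 is the first word on the first line of case.inm):

```shell
# copy the mixer input and switch the mixing scheme from MSR1 to MSR1a
# (assumes MSR1 starts the first line of case.inm; adjust if your file differs)
cp case.inm case.inm_msr1a
sed -i '1s/^MSR1/MSR1a/' case.inm_msr1a
```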


PS: from your dayfile I can see:
>   lapw1  -dn   -orb  -c (20:18:44)
> 1603.746u 24.052s 4:26.95 609.7%0+0k 0+382600io 0pf+0w

the 600% tells me that you are using   OMP_NUM_THREADS=6,
which is completely useless. Set it to 2 and do a
k-parallel run (study the UG section on parallelization) and test it on a 
small test case.

I suppose you installed WIEN2k for a "shared memory" machine?
Then you basically need to create a .machines file with 3 lines:
1:localhost
1:localhost
1:localhost

(Together with OMP_NUM_THREADS=2 you will then use all 6 cores, and your 
time will go down from 4:30 to 1:30 for an lapw1 step.)




On 04/04/2014 05:55 AM, shamik chakrabarti wrote:

Dear Prof. Blaha Sir,

Sorry to bother you again. But our calculation again
got stopped after the 4th iteration with the following message.

   stop error

error: command   /usr/local/WIEN2k/mini mini.def   failed
 >   mini(20:24:31) 0.004u 0.003s 0:00.03 0.0%0+0k 2472+48io 11pf+0w
 >   stop

:CHARGE convergence:  0 0. .0002465
:ENERGY convergence:  0 0 .07395000
 >   mixer  -orb(20:24:29) 7.820u 0.390s 0:01.65 497.5%0+0k 168+40568io
1pf+0w
 >   lcore -dn(20:24:29) 0.046u 0.003s 0:00.05 80.0%0+0k 0+656io 0pf+0w
 >   lcore -up(20:24:29) 0.045u 0.006s 0:00.05 80.0%0+0k 0+656io 0pf+0w
 >   lapwdm -dn  -c (20:24:27) 8.644u 0.378s 0:01.64 549.3%0+0k 0+48io
0pf+0w
 >   lapwdm -up  -c (20:24:25) 9.470u 0.403s 0:01.64 601.8%0+0k 0+48io
0pf+0w
 >   lapw2 -dn-c  -orb (20:23:35) 282.734u 8.097s 0:49.73 584.7%0+0k
0+9816io 0pf+0w
 >   lapw2 -up-c  -orb (20:22:53) 226.301u 6.095s 0:42.32 549.1%0+0k
0+9824io 0pf+0w
1524.395u 21.656s 4:09.36 620.0%0+0k 0+385000io 0pf+0w
  _nb in zhcgst.F 640 128   (line repeated 26 times)
 >   lapw1  -dn   -orb  -c (20:18:44)  _nb in zhcgst.F 640 128
1603.746u 24.052s 4:26.95 609.7%0+0k 0+382600io 0pf+0w
  _nb in zhcgst.F 640 128   (line repeated 26 times)
 >   lapw1  -up   -orb  -c (20:14:17)  _nb in zhcgst.F 640 128
 >   orb -dn (20:14:17) 0.002u 0.002s 0:00.00 0.0%0+0k 0+32io 0pf+0w
 >   orb -up (20:14:17) 0.002u 0.000s 0:00.00 0.0%0+0k 0+32io 0pf+0w
:FORCE convergence: 1 1.0 .010 XCO .009 XCO .037 XCO .013 XCO .031 XCO
.014 XCO
 >   lapw0 (20:14:08) 8.316u 0.047s 0:08.36 99.8%0+0k 0+16424io 0pf+0w

 cycle 4 (Thu Apr  3 20:14:08 IST 2014) (37/96 to go)

*We have checked case.scf

Re: [Wien] leaking core charge and ‘.lcore’

2014-04-04 Thread Peter Blaha



I am working on a structure containing Osmium.  In init, I got a warning
that Os 5s core charge (about 1%) leaked out of the sphere (RMT=2.08 set
by setrmt).


I don't know the energy of Os 5s. If it is just a little below -6 Ry, 
just lower E-core; if it is at lower energies, .lcore should be fine.

The solution with   .lcore should be perfectly ok in your case.
.lcore directs the code to use the spherical core-density (also beyond RMT)
and do a superposition of these densities using dstart (see the dayfile).

.lcore would NOT be ok if Os-O distances become so small that the Os-5s 
band gets a significant band width due to the interaction with O.


RMTs of 2.5 and 1.3 violate the RMT criteria, and together with a (too 
large?) RKmax you get numerical linear dependencies and ghostbands. Usually 
it is safe to change the suggestions of setrmt by a small amount (e.g. 
2.18 for Os), but not by that much.




I increased the Os RMT to 2.5 manually, decreasing the RMT of its O
neighbor to 1.3.  Then there were no warnings in lstart, but the
calculation crashed after a couple of iterations with QTL-B / “semicore
band-ranges too large” errors.  (The errors were about Os.)

So I tried instead to run the calculation with the original RMTs and the
‘.lcore’ file created by init.  This way, the SCF converged without any
errors or warnings; ‘lcore’ and ‘dstart -lcore’ were executed at every
step.  The band structure looks reasonable as well.

What does this lcore stuff imply for my calculation?  Should I consider
the results suspect?


Thanks,

 Elias



--

  P.Blaha
--
Peter BLAHA, Inst.f. Materials Chemistry, TU Vienna, A-1060 Vienna
Phone: +43-1-58801-165300 FAX: +43-1-58801-165982
Email: bl...@theochem.tuwien.ac.at    WWW: 
http://info.tuwien.ac.at/theochem/

--
___
Wien mailing list
Wien@zeus.theochem.tuwien.ac.at
http://zeus.theochem.tuwien.ac.at/mailman/listinfo/wien
SEARCH the MAILING-LIST at:  
http://www.mail-archive.com/wien@zeus.theochem.tuwien.ac.at/index.html


Re: [Wien] Error in job submission

2014-04-04 Thread Peter Blaha

it says:  lapw1 not found   and   fixerror_lapw   not found, but it
can run lapw0.
This indicates that the installation of WIEN2k is basically ok, but the 
parallelization does not work. See:


Warning: Permanently added 'c559-803,129.114.91.5' (RSA) to the list of 
known hosts.


I don't know what your .machines file looks like (k-parallel or 
mpi-parallel?), and we need to know how you configured the "parallel 
calculation" during siteconfig (ssh/rsh; did you set up passwordless 
login?; cat $WIENROOT/parallel_options).


It seems you cannot connect to another node,
or when you can connect, you do not have your environment on that node.
(Consult your sysadmin.)
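If the environment is the problem, one common fix (assuming a bash setup like yours; all paths below are placeholders) is to export the WIEN2k variables near the top of ~/.bashrc, before any early return for non-interactive shells, so that the commands the parallel scripts run over ssh can find lapw1 etc.:

```shell
# placeholder paths -- adjust to your installation; these lines must come
# before any "return if non-interactive" test in ~/.bashrc, because the
# WIEN2k parallel scripts run commands over ssh in non-interactive shells
export WIENROOT=/work/WIEN2k_12
export PATH=$WIENROOT:$PATH
export SCRATCH=/scratch/$USER
```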



On 04/03/2014 06:02 PM, Francisco Garcia wrote:


Dear users,

I am trying to run WIEN2k in a bash environment. My job script is shown
below.

#!/bin/bash -f
#SBATCH -J test1
#SBATCH -o test1.o%j
#SBATCH -N 2
#SBATCH -n 16
#SBATCH -p normal
#SBATCH -t 2:00:00

rm -fr .machines

scontrol show hostnames "$SLURM_JOB_NODELIST" | sort -u > machh
sed -e '1,$s/^/1:/' machh > .machines
echo 'granularity:1' >>.machines
echo 'extrafine:1' >>.machines
mkdir /scratch/"$SLURM_JOB_NAME"."$SLURM_JOB_ID"
export dir=/scratch/"$SLURM_JOB_NAME"."$SLURM_JOB_ID"
export SCRATCH=/scratch/"$SLURM_JOB_NAME"."$SLURM_JOB_ID"
runsp_lapw -ec 0.1 -cc 0.0001 -i 100 -p
rm -rf $dir
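For illustration, with two hypothetical node names the .machines generation above produces:

```shell
# demonstrate the .machines generation with two hypothetical host names
printf 'c559-801\nc559-803\n' > machh
sed -e '1,$s/^/1:/' machh > .machines
echo 'granularity:1' >> .machines
echo 'extrafine:1' >> .machines
# .machines now contains:
#   1:c559-801
#   1:c559-803
#   granularity:1
#   extrafine:1
```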



However, I always end up with the error below. I tried changing the
environment from bash to csh upon login but the problem still persists.
The .machines file looks fine.


LAPW0 END
bash: lapw1: command not found
bash: fixerror_lapw: command not found
Warning: Permanently added 'c559-803,129.114.91.5' (RSA) to the list of
known hosts.^M
bash: lapw1: command not found
bash: fixerror_lapw: command not found
bulk.scf1up_1: No such file or directory.
Illegal division by zero at /work/WIEN2k_12/bashtime2csh.pl_lapw line 42.
bash: lapw1: command not found
bash: fixerror_lapw: command not found
bulk.scf1dn_1: No such file or directory.
FERMI - Error
cp: cannot stat `.in.tmp': No such file or directory

 >   stop error


Thank you.




Re: [Wien] leaking core charge and ‘.lcore’

2014-04-04 Thread Elias Assmann

Dear Peter,

On 04/04/2014 09:30 AM, Peter Blaha wrote:

I don't know the energy of Os 5s. If it is just a little below -6 Ry,
just lower E-core; if it is at lower energies, .lcore should be fine.


From ‘outputst’ (Os):

  4D  -19.549263  -19.549263  3.00  3.00  1.0000  T
  5S   -6.687517   -6.687517  1.00  1.00  0.9941  T
  5P*  -4.363745   -4.363745  1.00  1.00  0.9824  F

For comparison, there is also a Barium atom (RMT=2.5) in the cell which has:

  4P  -12.947176  -12.947176  2.00  2.00  1.0000  T
  4D*  -6.697509   -6.697509  2.00  2.00  0.9998  T
  4D   -6.505422   -6.505422  3.00  3.00  0.9998  T
  5S   -2.471967   -2.471967  1.00  1.00  0.9295  F

So if I do this by changing the separation energy, at least the Ba-4D 
state would go into valence as well.  Would that be a problem?
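For a quick overview, the states in such a listing can be filtered by their in-sphere charge (a sketch; it assumes the charge fraction is the next-to-last field of each state line in outputst, as in the lines above):

```shell
# print states whose in-sphere charge fraction is below a 0.995 criterion;
# these are the candidates for semicore/valence rather than core treatment
# (assumes the fraction is the next-to-last whitespace-separated field)
awk 'NF>=3 && $(NF-1)+0 > 0 && $(NF-1)+0 < 0.995 { print $1, $(NF-1) }' outputst
```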



The solution with   .lcore should be perfectly ok in your case.
.lcore directs the code to use the spherical core-density (also beyond RMT)
and do a superposition of these densities using dstart (see the dayfile).

.lcore would NOT be ok if Os-O distances become so small, that the Os-5s
band gets a significant band-width due to the interaction with O.


Okay, good to know.


Many thanks,

Elias


--
Elias Assmann
Institute of Solid State Physics
Vienna University of Technology



Re: [Wien] Errors in mini

2014-04-04 Thread shamik chakrabarti
Dear Prof. Blaha Sir,

Thank you for your response. We have started the calculation
following your suggestions. But as we are using MSR1a instead of
min_lapw...it may now take more time than before.

We actually have 16 cpus in our system & I have set OMP_NUM_THREADS=8, & for
a single calculation we are getting up to 800% usage. However, if we start
another calculation it then takes the rest of the cpus, and we can get all the
cpus running with two simultaneous calculations divided among the 16 cpus, with
800% for each calculation. I have changed the OMP_NUM_THREADS value from 8
up to 16 with gradual increases, but the cpu usage remains the same in all
cases.

Yes, we should look at k-parallel runs for optimum use of our
resources...but at the moment I don't have any expertise with that kind of
installation. We will try to install k-parallelization with the help of
your suggestions and will surely try to improve the calculation speed.

Thank you Sir...thank you for all your suggestions.

with regards,




On Fri, Apr 4, 2014 at 12:48 PM, Peter Blaha wrote:


Re: [Wien] leaking core charge and ‘.lcore’

2014-04-04 Thread Peter Blaha
Aha. So it was the Ba which produced ghostbands when the O 
sphere was made very small.


One could "separate" the Ba and Os by a charge criterion (0.995), 
which would put only Os-5s as semicore.


Probably I would not do it and would go with the .lcore solution. For almost 
all properties the differences will be irrelevant.


On 04/04/2014 10:13 AM, Elias Assmann wrote:



Re: [Wien] Errors in mini

2014-04-04 Thread Peter Blaha
Of course it is using 600 or 800% of the cpu, BUT it does not run faster 
at all !!!


Compare the timings in case.dayfile. There you can see at what time a 
step starts and how long it really took.


On 04/04/2014 11:23 AM, shamik chakrabarti wrote:


Re: [Wien] Errors in mini

2014-04-04 Thread shamik chakrabarti
Dear Prof. Blaha Sir,

To continue from your last suggestions, we have noted the timing of a
single iteration. It is 10 minutes for a 16 atoms/unit cell calculation.
Sir, do you think that this speed is ok considering our system has 16
cpus? Yes, of course, we have to go for k-parallelization.

Looking forward to your comments and suggestions.

With regards,
 On 3 Apr 2014 18:04, "shamik chakrabarti"  wrote:

> Dear wien2k users,
>
>   We are working on a Li-based silicate material. We are trying to do
> simultaneous optimization of lattice parameters and atomic coordinates. For
> that we are using "Option 6" in the volume optimize program, while editing
> optimize.job to perform simultaneous force minimization. The calculation
> ran smoothly for 2 structures, i.e., case_abc_1.0 and case_abc_2.0.
> Then due to a power failure the calculation remained stopped for several
> hours. We restarted the calculation by putting # in front of the two lines
> corresponding to the 1st and 2nd structures in optimize.job...such that the
> calculation starts from the structure case_abc_3.0However, while
> running the 3rd structure the following display came up in the "show dayfile"
> option.
>
> ERROR status in case_abc___3.0
> mini   004035D9  Unknown   Unknown  Unknown
> libc.so.6  00349521ECDD  Unknown   Unknown  Unknown
> mini   004036E6  Unknown   Unknown  Unknown
> mini   00412AA6  MAIN__ 25  mini.f
> mini   0040C6AB  haupt_593  haupt.f
> mini   0041A39F  wrtscf_23  wrtscf.f
> mini   00451F5A  Unknown   Unknown  Unknown
> mini   00453C1E  Unknown   Unknown  Unknown
> Image  PCRoutineLineSource
>
> forrtl: severe (64): input conversion error, unit -5, file Internal
> Formatted Read
> 3.887u 0.013s 0:03.90 99.7% 0+0k 0+11912io 0pf+0w
> DSTART ENDS
> 3.879u 0.008s 0:03.88 99.7% 0+0k 0+11904io 0pf+0w
> DSTART ENDS
> 3.879u 0.011s 0:03.89 99.7% 0+0k 0+11904io 0pf+0w
> DSTART ENDS
> clmextrapol_lapw has generated a new case.clmdn
> 0.196u 0.007s 0:00.20 95.0% 0+0k 0+8032io 0pf+0w
> 3.888u 0.015s 0:03.90 99.7% 0+0k 0+13528io 0pf+0w
> DSTART ENDS
> running dstart in single mode
> clmextrapol_lapw has generated a new case.clmup
> 0.196u 0.010s 0:00.20 100.0% 0+0k 0+8032io 0pf+0w
> 3.929u 0.020s 0:03.94 100.0% 0+0k 0+13528io 0pf+0w
> DSTART ENDS
> running dstart in single mode
> clmextrapol_lapw has generated a new case.clmsum
> 0.195u 0.004s 0:00.20 95.0% 0+0k 0+8032io 0pf+0w
> 3.923u 0.017s 0:03.94 99.7% 0+0k 0+13528io 0pf+0w
> DSTART ENDS
> running dstart in single mode
> 3.888u 0.003s 0:03.89 99.7% 0+0k 0+11904io 0pf+0w
> DSTART ENDS
> 3.895u 0.010s 0:03.91 99.7% 0+0k 0+11904io 0pf+0w
> DSTART ENDS
> 3.934u 0.010s 0:03.94 100.0% 0+0k 0+11912io 0pf+0w
> DSTART ENDS
>
>
> we are using wien2k 13.1
>
> What could be the possible reasons for this error?.any response in this
> regard will be fruitful for us. Thanks in advance.
>
> with regards,
> --
> Shamik Chakrabarti
> Senior Research Fellow
> Dept. of Physics & Meteorology
> Material Processing & Solid State Ionics Lab
> IIT Kharagpur
> Kharagpur 721302
> INDIA
>


Re: [Wien] Errors in mini

2014-04-04 Thread Michael Sluydts

Hello,

Due to the lack of information on what you are simulating and with which 
settings, I think you will find it more useful to perform the tests yourself, 
varying the number of CPUs used; that way you can see the actual 
difference.



Kind regards,

Michael Sluydts

shamik chakrabarti wrote on 4/04/2014 17:02:




Re: [Wien] Errors in mini

2014-04-04 Thread shamik chakrabarti
Dear Michael,

Thank you for your reply. Yes, we are going to try to maximize the
performance of our system.

Thanks once again,

With regards,
 On 4 Apr 2014 20:38, "Michael Sluydts"  wrote:
