Re: [Pw_forum] xml format for dynamical matrix

2015-06-13 Thread Lorenzo Paulatto
On 14/06/2015 08:36, Umesh Roy wrote:
> Dear All,
>    I want to calculate phonons on a q-grid nq1=8, nq2=8, nq3=8
> for gold. When I run the phonon program, the dynamical matrices are
> written in XML format, so I am not able to extract the interatomic
> force constants (IFCs) from them. Why are the dynamical matrices
> written in XML format? How can I get the IFCs from them? Please help.

If I remember correctly, they are written in XML format if you use
spin-orbit coupling. But q2r.x can read them, so this does not prevent
you from generating the force constants.
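
For instance, a minimal q2r.x input along these lines should do (a
sketch, not tested: the file names are placeholders, and I am assuming
fildyn='Au.dyn' was used in the ph.x input; zasr='simple' is a
reasonable acoustic sum rule for a metal like gold):

   &input
      fildyn = 'Au.dyn'    ! same prefix given as fildyn in ph.x
      flfrc  = 'Au888.fc'  ! output file for the interatomic force constants
      zasr   = 'simple'    ! acoustic sum rule
   /

then run it as: q2r.x < q2r.in > q2r.out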

As always when asking for help, you should provide all the information
at your disposal, in order to get a meaningful answer. In particular:
1. what you did (i.e. input files, command lines)
2. what you got (i.e. output files, matdyn files)
3. what you expected to get
4. why you think 2 and 3 are different

kind regards



-- 
Dr. Lorenzo Paulatto
IdR @ IMPMC -- CNRS & Université Paris 6
+33 (0)1 44 275 084 / skype: paulatz
http://www.impmc.upmc.fr/~paulatto/
23-24/4e16 Boîte courrier 115, 4 place Jussieu 75252 Paris Cedex 05



[Pw_forum] xml format for dynamical matrix

2015-06-13 Thread Umesh Roy
Dear All,
   I want to calculate phonons on a q-grid nq1=8, nq2=8, nq3=8 for
gold. When I run the phonon program, the dynamical matrices are written
in XML format, so I am not able to extract the interatomic force
constants (IFCs) from them. Why are the dynamical matrices written in
XML format? How can I get the IFCs from them? Please help.

   Thank you in advance.

-Umesh Chandra Roy
Research Scholar, School of Physical Sciences
Jawaharlal Nehru University, New Delhi-110067, India
Email: umesh24...@gmail.com
Mobile: +919868022722

[Pw_forum] Bug (or not) with epsil = .false. in PH (trunk version)

2015-06-13 Thread Samuel Poncé
Dear all,

I found out that it is impossible to make a phonon calculation in a
semiconductor without calculating the Born effective charges, even if we
set epsil = .false.

This is due to the routine prepare_q.f90, which is called inside
do_phonon.f90. In that routine there is:
 IF ( lgamma ) THEN
    !
    IF ( .NOT. lgauss ) THEN
       !
       ! ... in the case of an insulator at q=0 one has to calculate
       ! ... the dielectric constant and the Born eff. charges
       ! ... the other flags depend on input
       !
       epsil = .TRUE.
       zeu = .TRUE.
       zue = .TRUE.


This means that if we compute q = Gamma and do not use Gaussian smearing
(i.e. for a semiconductor or insulator), then epsil is automatically set
to .TRUE.

I know that physically one should have such LO/TO splitting, but the
user should be able to choose.

Maybe this forcing could be documented with the input variable? Or
simply set the default value to .TRUE. instead of .FALSE. and not
enforce the rule?
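
A minimal sketch of what I mean (epsil_in_input is a hypothetical flag
recording whether the user set epsil explicitly in the input; it does
not exist in the current code):

   IF ( lgamma ) THEN
      IF ( .NOT. lgauss ) THEN
         ! hypothetical guard: force the dielectric constant and the
         ! Born effective charges only if the user did not set epsil
         IF ( .NOT. epsil_in_input ) THEN
            epsil = .TRUE.
            zeu   = .TRUE.
            zue   = .TRUE.
         END IF
      END IF
   END IF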

What do you think?

Best,

Samuel Poncé,
Department of Materials, University of Oxford

Re: [Pw_forum] [qe-gpu]

2015-06-13 Thread Filippo Spiga
Dear Anubhav,

Run in parallel with 2 MPI processes, and make sure CUDA_VISIBLE_DEVICES
is set such that:

MPI rank 0 -> GPU id 1 (K20)
MPI rank 1 -> GPU id 2 (K20)
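
With OpenMPI, for example, you can do this with a small wrapper script
(a sketch, not tested; the script and input file names are placeholders,
and other MPI libraries expose the local rank through a different
variable, e.g. MV2_COMM_WORLD_LOCAL_RANK for MVAPICH2):

   #!/bin/bash
   # gpu_bind.sh -- map MPI local rank 0 -> GPU 1, rank 1 -> GPU 2,
   # skipping the C2050 (GPU 0)
   export CUDA_VISIBLE_DEVICES=$(( OMPI_COMM_WORLD_LOCAL_RANK + 1 ))
   exec "$@"

   mpirun -np 2 ./gpu_bind.sh ./pw-gpu.x -input pw.in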

Those K20 GPUs are actively cooled cards; how many sockets does this
server (or workstation?) have?

F
 
> On Jun 13, 2015, at 11:08 AM, Anubhav Kumar  wrote:
> 
> Dear QE users
> 
> I have configured qe-gpu 14.10.0 with espresso-5.1.2. Parallel
> compilation was successful, but when I run ./pw-gpu.x it gives the
> following output:
> 
> ***WARNING: unbalanced configuration (1 MPI per node, 3 GPUs per node)
> 
> *******************************************************************
> 
>   GPU-accelerated Quantum ESPRESSO (svn rev. unknown)
>   (parallel: Y , MAGMA : N )
> 
> *******************************************************************
> 
> 
> Program PWSCF v.5.1.2 starts on 13Jun2015 at 15:23:59
> 
> This program is part of the open-source Quantum ESPRESSO suite
> for quantum simulation of materials; please cite
> "P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
>  URL http://www.quantum-espresso.org",
> in publications or presentations arising from this work. More details at
> http://www.quantum-espresso.org/quote
> 
> Parallel version (MPI & OpenMP), running on  24 processor cores
> Number of MPI processes: 1
> Threads/MPI process:24
> Waiting for input...
> 
> 
> However, when I run the same command again, it gives:
> 
> ***WARNING: unbalanced configuration (1 MPI per node, 3 GPUs per node)
> 
> Program received signal SIGSEGV: Segmentation fault - invalid memory
> reference.
> 
> Backtrace for this error:
> #0  0x7FB5001B57D7
> #1  0x7FB5001B5DDE
> #2  0x7FB4FF4C4D3F
> #3  0x7FB4F3391D40
> #4  0x7FB4F33666C3
> #5  0x7FB4F3364C80
> #6  0x7FB4F33759EF
> #7  0x7FB4F345CA1F
> #8  0x7FB4F345CD2F
> #9  0x7FB500B7DBCC
> #10  0x7FB500B7094F
> #11  0x7FB500B7CC56
> #12  0x7FB500B81410
> #13  0x7FB500B7507B
> #14  0x7FB500B6179D
> #15  0x7FB500B940A0
> #16  0x7FB5009BA047
> #17  0x8A4EA3 in phiGemmInit
> #18  0x76F55E in initcudaenv_
> #19  0x66AE90 in __mp_MOD_mp_start at mp.f90:184
> #20  0x66E192 in __mp_world_MOD_mp_world_start at mp_world.f90:58
> #21  0x66DCC0 in __mp_global_MOD_mp_startup at mp_global.f90:65
> #22  0x4082A0 in pwscf at pwscf.f90:23
> #23  0x7FB4FF4AFEC4
> Segmentation fault
> 
> Kindly help me out in solving this problem. My GPU details are:
> 
> +------------------------------------------------------+
> | NVIDIA-SMI 346.46     Driver Version: 346.46         |
> |-------------------------------+----------------------+----------------------+
> | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
> | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
> |===============================+======================+======================|
> |   0  Tesla C2050         Off  | 0000:02:00.0      On |                    0 |
> | 30%   62C   P12    N/A /  N/A |     87MiB /  2687MiB |      0%      Default |
> +-------------------------------+----------------------+----------------------+
> |   1  Tesla K20c          Off  | 0000:83:00.0     Off |                    0 |
> | 42%   55C    P0    46W / 225W |   4578MiB /  4799MiB |      0%      Default |
> +-------------------------------+----------------------+----------------------+
> |   2  Tesla K20c          Off  | 0000:84:00.0     Off |                    0 |
> | 34%   46C    P8    17W / 225W |     14MiB /  4799MiB |      0%      Default |
> +-------------------------------+----------------------+----------------------+
> 
> +-----------------------------------------------------------------------------+
> | Processes:                                                       GPU Memory |
> |  GPU       PID  Type  Process name                               Usage      |
> |=============================================================================|
> |    1     27680     C   ./pw-gpu.x                                   4563MiB |
> +-----------------------------------------------------------------------------+

--
Mr. Filippo SPIGA, M.Sc.
http://fspiga.github.io ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert


[Pw_forum] [qe-gpu]

2015-06-13 Thread Anubhav Kumar
Dear QE users

I have configured qe-gpu 14.10.0 with espresso-5.1.2. Parallel
compilation was successful, but when I run ./pw-gpu.x it gives the
following output:

***WARNING: unbalanced configuration (1 MPI per node, 3 GPUs per node)

 *******************************************************************

   GPU-accelerated Quantum ESPRESSO (svn rev. unknown)
   (parallel: Y , MAGMA : N )

 *******************************************************************


 Program PWSCF v.5.1.2 starts on 13Jun2015 at 15:23:59

 This program is part of the open-source Quantum ESPRESSO suite
 for quantum simulation of materials; please cite
 "P. Giannozzi et al., J. Phys.:Condens. Matter 21 395502 (2009);
  URL http://www.quantum-espresso.org",
 in publications or presentations arising from this work. More details at
 http://www.quantum-espresso.org/quote

 Parallel version (MPI & OpenMP), running on  24 processor cores
 Number of MPI processes: 1
 Threads/MPI process:24
 Waiting for input...


However, when I run the same command again, it gives:

***WARNING: unbalanced configuration (1 MPI per node, 3 GPUs per node)

Program received signal SIGSEGV: Segmentation fault - invalid memory
reference.

Backtrace for this error:
#0  0x7FB5001B57D7
#1  0x7FB5001B5DDE
#2  0x7FB4FF4C4D3F
#3  0x7FB4F3391D40
#4  0x7FB4F33666C3
#5  0x7FB4F3364C80
#6  0x7FB4F33759EF
#7  0x7FB4F345CA1F
#8  0x7FB4F345CD2F
#9  0x7FB500B7DBCC
#10  0x7FB500B7094F
#11  0x7FB500B7CC56
#12  0x7FB500B81410
#13  0x7FB500B7507B
#14  0x7FB500B6179D
#15  0x7FB500B940A0
#16  0x7FB5009BA047
#17  0x8A4EA3 in phiGemmInit
#18  0x76F55E in initcudaenv_
#19  0x66AE90 in __mp_MOD_mp_start at mp.f90:184
#20  0x66E192 in __mp_world_MOD_mp_world_start at mp_world.f90:58
#21  0x66DCC0 in __mp_global_MOD_mp_startup at mp_global.f90:65
#22  0x4082A0 in pwscf at pwscf.f90:23
#23  0x7FB4FF4AFEC4
Segmentation fault

Kindly help me out in solving this problem. My GPU details are:

+------------------------------------------------------+
| NVIDIA-SMI 346.46     Driver Version: 346.46         |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla C2050         Off  | 0000:02:00.0      On |                    0 |
| 30%   62C   P12    N/A /  N/A |     87MiB /  2687MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla K20c          Off  | 0000:83:00.0     Off |                    0 |
| 42%   55C    P0    46W / 225W |   4578MiB /  4799MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla K20c          Off  | 0000:84:00.0     Off |                    0 |
| 34%   46C    P8    17W / 225W |     14MiB /  4799MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    1     27680     C   ./pw-gpu.x                                   4563MiB |
+-----------------------------------------------------------------------------+


Re: [Pw_forum] Error in q2r.x during phonon calculation

2015-06-13 Thread Lorenzo Paulatto
I recommend you repeat the calculation of the 3rd q-point using the
start_q and last_q input variables (check the manual for more info).
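
Something along these lines, reusing your input below (an untested
sketch: start_q and last_q select which q-points of the grid are
computed, and recover=.false. avoids reusing the possibly corrupted
restart data for that point):

   Phonons of Fe3Ni
    &inputph
      tr2_ph = 1.0d-12
      prefix = 'Fe3Ni'
      ldisp = .true.
      nq1 = 2, nq2 = 2, nq3 = 2
      start_q = 3
      last_q = 3
      recover = .false.
      amass(1) = 55.845
      amass(2) = 58.6934
      outdir = '/tmp/'
      fildyn = 'Fe3Ni.dyn'
   /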

good work!

On 13/06/2015 06:07, nirav msc wrote:
> Dear QE Users and Developers,
>
> I am using Quantum ESPRESSO version 5.0.2. While performing phonon
> calculations, my run was interrupted, leaving the dyn3 calculation
> midway; I resumed the run to finish the dyn3 computation.
>
> While continuing the phonon calculation with q2r.x, I got the following error:
>
> "*At line 272 of file io_dyn_mat_old.f90 (unit = 1, file = 'Fe3Ni.dyn3')**
> *
> *  Fortran run time error: Bad real number in item 1 of list input*"
>  
>  My input ph.in file is as follows:
>
> Phonons of Fe3Ni
>  &inputph
>   tr2_ph=1.0d-12
>   prefix="Fe3Ni"
>   recover=.true.,
>   ldisp=.true.,
>   nq1=2, nq2=2, nq3=2
>   amass(1)=55.845,
>   amass(2)=58.6934,
>   outdir='/tmp/',
>   fildyn='Fe3Ni.dyn',
>
> At the end of my phonon calculation I got a very strange dyn3 file:
> when diagonalizing the dynamical matrix for q = (0.0 -1.0 0.0), it
> gives the following values:
>
> omega(1) = -2974966.704994 [THz]=  [cm-1]
>
> omega(2) = -2974966.704994 [THz]=  [cm-1]
>
> i.e. the code is not able to print the frequency in cm-1 because of
> the extremely large value.
>
> Kindly suggest where I am going wrong: is it due to the restart of the
> phonon calculation, a wrong input, or a structure-related problem?
>
> Thanks in advance
>
> Your help will be highly appreciated.
>
>
> Nirav Pandya
> Ph.D. student
> Gujarat University,
> India
>
>  
>
>

-- 
Dr. Lorenzo Paulatto
IdR @ IMPMC -- CNRS & Université Paris 6
+33 (0)1 44 275 084 / skype: paulatz
http://www.impmc.upmc.fr/~paulatto/
23-24/4e16 Boîte courrier 115, 4 place Jussieu 75252 Paris Cedex 05
