[Pw_forum] Fwd: Call for candidacy at PIIM Laboratory - CNRS position
> From: Yves Ferro
> Date: December 7, 2012 9:16:13 GMT+01:00
> Subject: Call for candidacy at PIIM Laboratory - CNRS position
>
> Dear QE users,
>
> The PIIM Laboratory, Marseille, France, seeks candidates for a permanent position in the area of atomic-scale modeling of condensed-matter systems. Applicants will take part in a national competition aimed at selecting CNRS (http://www.cnrs.fr) permanent researchers, based on an evaluation carried out by a national committee.
>
> The selection process takes place in two steps (January 2013 and May/June 2013). Selection is based on past accomplishments, the merit of the research project crafted with the Laboratory, and an oral presentation.
>
> Prior to the first step, candidates should contact us and send us a CV. They will then be invited to our Lab to craft with us a suitable project to be defended before the CNRS committee. The deadline for submitting the project/candidacy to the CNRS is January 13th.
>
> Our group works in the domain of gas-surface interactions, chemical reactivity, and spectroscopic modeling related to condensed matter. Our applications concern fusion science and the plasma-surface interactions that will take place in the future ITER tokamak. The materials of interest are tungsten, beryllium, and their oxides interacting with hydrogen. Our activities are strongly committed to European programs and are supported by the EFDA and other French funding agencies. Another part of our activities concerns chemical reactivity in the interstellar medium.
>
> These activities are carried out using DFT in its plane-wave formalism (mostly the Quantum ESPRESSO package).
> We are looking for a candidate with a good knowledge of DFT and with development skills, especially for implementing new functionals or new capabilities to better handle the phenomena we deal with: gas-surface interactions. We would also be interested in someone with a good knowledge of heavy-metal modeling, though no heavier than W. Of secondary interest would be skills in dynamics and excited-state computations.
>
> If interested, please contact:
>
> Yves Ferro
> yves.ferro at univ-amu.fr
> +33 4 91 28 27 09
>
> Dr Yves Ferro
> Aix-Marseille Université
> Centre de Saint Jérôme
> Laboratoire PIIM - UMR 7345 - service 252
> 13397 Marseille cedex 20

--- Paolo Giannozzi, Dept of Chemistry, Physics & Environment, Univ. Udine, via delle Scienze 208, 33100 Udine, Italy. Phone +39-0432-558216, fax +39-0432-558222
[Pw_forum] set_hubbard_l.f90
Thanks a lot, Burak.

With regards,
Prasenjit

On 7 December 2012 21:11, Burak Himmetoglu wrote:
> Dear Prasenjit,
>
> You can find set_hubbard_l.f90 in the folder flib.
>
> Best,
> Burak

--
PRASENJIT GHOSH, IISER Pune, First floor, Central Tower, Sai Trinity Building, Garware Circle, Sutarwadi, Pashan, Pune, Maharashtra 411021, India. Phone: +91 (20) 2590 8203, Fax: +91 (20) 2589 9790
[Pw_forum] set_hubbard_l.f90
Dear all,

In earlier versions of QE there used to be a file called set_hubbard_l.f90, where one could add elements for which DFT+U is not configured. However, in espresso-5.0.2 I do not find the routine in PW/src. Can anyone say where it is, or what has happened to it?

With regards,

Prasenjit

--
PRASENJIT GHOSH, IISER Pune, First floor, Central Tower, Sai Trinity Building, Garware Circle, Sutarwadi, Pashan, Pune, Maharashtra 411021, India. Phone: +91 (20) 2590 8203, Fax: +91 (20) 2589 9790
[Pw_forum] problems with examples for bands.x
Dear QE developers,

When I run the examples for PP in version 5.0.2, I get failures for the runs with bands.x. In example01, si.bands.out crashes with:

%% Error in routine bands (1): gamma_only case not implemented %%

The same happens in example04 in pt.bands.out and in example06 in Fe.bands.out. In none of these cases is the calculation gamma_only, so this looks like a bug in bands.x.

David Strubbe
MIT
[Pw_forum] Further reduce disk IO for ph.x
On Fri, 2012-12-07 at 17:25 +1100, Yao Yao wrote:
> For these disk IO, is it writing output files or swapping out data to save RAM?

The code writes to file in order to save both RAM and CPU time. In particular, at any given time it keeps in memory only the data (\psi(k), \psi(k+q), \delta V(q)\psi(k), \delta\psi(k)) for a single k-point. It also saves some intermediate results.

> Since the total output file size is small compared to the disk IO bandwidth, I guess the answer is the latter one, right? If yes, where is the data swap file located?

There are many files, not a single one. They are all written into a subdirectory of "outdir"/"prefix".save

> if it's possible to avoid swapping in the expense of more RAM consumption, that's also OK for me.

It is definitely possible, but it requires some programming work. The code "as is" uses a lot of disk space. Note however that modern operating systems tend to keep files in RAM as much as possible, so if you have enough RAM, it may not make such a big difference whether you keep things in RAM or write them to file.

P. -- Paolo Giannozzi, IOM-Democritos and University of Udine, Italy
[Pw_forum] Further reduce disk IO for ph.x
Dear All,

I have been running a lot of phonon calculations with ph.x. It works very well. However, ph.x does a lot of writing: on a 48-core compute node it takes ~35 MB/s of disk I/O bandwidth (estimated from the network traffic of that node, which has no local disk). I tried the "reduce_io" option, which reduced the rate to ~20 MB/s.

However, this is still not enough for me. Our cluster has ~40 such nodes. If I run more than ~7 jobs concurrently, my jobs deplete the disk I/O bandwidth, and then everything becomes very slow: a simple "ls" command takes ~10 seconds to return.

I'm seeking a way to further reduce the disk I/O of ph.x. I don't need recovery, even though I have a limited run duration per job, because I can continue with "start_q" and "last_q", which is very nice.

For this disk I/O, is the code writing output files or swapping out data to save RAM? Since the total output file size is small compared to the disk I/O bandwidth, I guess the answer is the latter, right? If yes, where is the data swap file located? Can I customize the path? I want to put it in /dev/shm, which is a 48 GB RAM disk (I have far more RAM than QE needs). On the other hand, if it's possible to avoid swapping at the expense of more RAM consumption, that's also OK for me.

I guess putting the whole folder in /dev/shm may work, but that's probably my last option because it requires copying back and forth.

Thank you very much.

Regards,
Yao

-- Yao Yao, PhD candidate, 3rd Generation Strand, ARC Photovoltaics Centre of Excellence, University of New South Wales, Sydney, NSW, Australia
[Pw_forum] Compute Debye temperature
Dear all,

When computing superconductivity, I am puzzled by the following questions:

1. How to compute the Debye temperature in PWSCF?
2. Can lambda.x and/or debye.x calculate the Debye temperature?
3. What are the differences between lambda.x and debye.x? What theory (or formula) do they depend on?

Any comments and suggestions are welcome. Thank you!

-- Yue-Wen FANG, PhD candidate, Key Laboratory of Polar Materials and Devices, Ministry of Education (http://clpm.ecnu.edu.cn/), East China Normal University (http://www.ecnu.edu.cn/english/), Phone: +86-18321726131. I will persist until I succeed!
[Pw_forum] old references for examples
On Wed, 2012-12-05 at 18:58 -0500, David Strubbe wrote:
> If you could provide reference results in the release that were from the current version, it would be much easier to decide whether there are any actual problems being detected or not.

I am currently updating the reference results for the phonon examples to the latest version. It takes several hours of work, since all files have to be verified one by one.

P. -- Paolo Giannozzi, IOM-Democritos and University of Udine, Italy
[Pw_forum] set_hubbard_l.f90
Dear Prasenjit,

You can find set_hubbard_l.f90 in the folder flib.

Best,
Burak

-- Burak Himmetoglu, Post-Doctoral Research Associate, University of Minnesota, MN, USA
[Pw_forum] polarizability calculation
Dear Users,

I'm interested in the calculation of static and dynamic electronic polarization properties. If I understood the User's Guide correctly, there are two ways: using the ph.x (fpol) and vdw.x codes. Could you suggest some papers with the theoretical background of these implementations? Which method is better for nanotubes, graphite, and graphene?

The other problem is connected with example34. No modifications of the standard input files were made. The scf calculation was OK, but the effective-potential calculation had stability problems:

Program VdW v.4.3.2 starts on 6Dec2012 at 17:43:23
[citation banner and G-vector sticks table omitted]
file H.vbc.UPF: wavefunction(s) 1S renormalized
Effective Potential Calculation
Charge difference due to FFT 0.01787941
Charge difference due to V_eff (initial) 0.01204687
iter # 1 charge diff. 0.82753376 thresh_veff 0.0096
iter # 2 charge diff. 0.13281985 thresh_veff 0.0096
iter # 3 charge diff. 0.84867147 thresh_veff 0.0096 **unstability happens**
iter # 4 charge diff. 0.21607526 thresh_veff 0.0096
iter # 5 charge diff. 0.87173963 thresh_veff 0.0096 **unstability happens**
WARNING: 1 eigenvalues not converged (repeated six times)
iter #81 charge diff. 1.99492123 thresh_veff 0.0096 **unstability happens**

Thanks in advance,

-- Olga Sedelnikova, Nikolaev Institute of Inorganic Chemistry of SB RAS
[Pw_forum] polarizability calculation
On Fri, 2012-12-07 at 11:45 +0700, Olga Sedelnikova wrote:
> The other problem is connected with example34.

vdw.x is unmaintained and no longer distributed.

P. -- Paolo Giannozzi, IOM-Democritos and University of Udine, Italy
[Pw_forum] Compute Debye temperature
Please do not mail the list again and again.

1. With the PWSCF package, find the utility QHA; read about it and install it. See this post: http://www.democritos.it/pipermail/pw_forum/2011-October/022205.html. I also suggest reading this good article: http://arxiv.org/abs/1112.4977v1. Further, read a solid-state physics text, such as "Solid State Physics" by Neil W. Ashcroft and N. David Mermin.

bests
sanjeev

On Fri, Dec 7, 2012 at 3:00 AM, Yue-Wen Fang wrote:
> When computing superconductivity, I am puzzled by the following questions:
> 1. How to compute the Debye temperature in PWSCF? [...]

-- Dept. of Physics, Michigan Technological University, 1400 Townsend Drive, Houghton MI 49931, USA
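[Editor's note] As a rough cross-check of whatever QHA produces, the Debye temperature can be estimated directly from the Debye model given an average sound velocity and atomic number density. A minimal sketch; the copper-like numbers below are purely illustrative and are not taken from any QE output:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
KB = 1.380649e-23       # Boltzmann constant, J/K

def debye_temperature(v_avg, n_atoms):
    """Debye model: theta_D = (hbar / kB) * v_avg * (6 * pi^2 * n)^(1/3).

    v_avg   -- average (Debye) sound velocity in m/s
    n_atoms -- atomic number density in atoms/m^3
    """
    k_debye = (6.0 * math.pi**2 * n_atoms) ** (1.0 / 3.0)  # Debye wavevector
    return HBAR * v_avg * k_debye / KB

# Illustrative, roughly copper-like values (assumptions, not computed data)
theta = debye_temperature(2500.0, 8.5e28)
print(f"Debye temperature ~ {theta:.0f} K")
```

In practice the sound velocity would itself come from the computed phonon dispersion (acoustic branch slopes near Gamma), which is where the PWSCF/QHA machinery comes in.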
[Pw_forum] binding energy for bundle nanotubes
Dear QE developers,

How can we calculate the binding energy of SiC nanotube bundles? Is it (the sum of the total energies of the isolated nanotubes in their unit cells) minus (the total energy of the nanotube bundle), or (the sum of the total energies of isolated Si and C atoms in a unit cell) minus (the total energy of the SiC nanotube bundle)?

with regards,
khodadad
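[Editor's note] The two definitions in the question differ only in the reference state (isolated tubes vs. isolated atoms), so the arithmetic is the same. A minimal sketch with hypothetical total energies (all numbers below are illustrative placeholders in Ry, not real calculations):

```python
def bundle_binding_energy(e_bundle, e_references):
    """E_b = sum(reference-state energies) - E(bundle); positive => bound."""
    return sum(e_references) - e_bundle

e_bundle = -450.0  # hypothetical total energy of the nanotube bundle

# Definition 1: reference = isolated nanotubes, each in its own cell.
# This measures the (weak) inter-tube cohesion of the bundle.
e_isolated_tubes = [-112.3, -112.3, -112.3, -112.3]
eb_tubes = bundle_binding_energy(e_bundle, e_isolated_tubes)

# Definition 2: reference = isolated Si and C atoms (here, 24 of each).
# This is the cohesive energy and includes the Si-C bonds themselves,
# so it is much larger than the inter-tube binding.
e_isolated_atoms = [-7.9] * 24 + [-10.8] * 24
eb_atoms = bundle_binding_energy(e_bundle, e_isolated_atoms)

print(eb_tubes, eb_atoms)
```

Which definition is appropriate depends on the question being asked: the first characterizes how strongly the tubes stick together in the bundle, the second the overall stability of the material with respect to free atoms.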
[Pw_forum] Further reduce disk IO for ph.x
On Fri, Dec 7, 2012 at 7:25 AM, Yao Yao wrote:
> I guess putting the whole folder in /dev/shm may work but that's probably my
> last option because it requires copy back and forth.

No. If you have sufficient space in /dev/shm, this should be your *first* option. It is by far the fastest and most elegant way to handle the disk-I/O issue on a diskless node. The stage-in and stage-out of data is a minor inconvenience that you'll have to deal with. However, please be sure to erase all your files in /dev/shm at the end of the job, so you won't make your sysadmin unhappy and prompt him to come up with ways to prohibit this.

As a side note, there are quite a few machines (particularly high-end supercomputers) where this kind of procedure is required and is part of batch processing, since the compute nodes have no access to your home directory at all (for reasons of efficiency). Thus you have to tell the batch system which files to stage in and stage out as part of the job.

axel.

-- Dr. Axel Kohlmeyer, akohlmey at gmail.com, http://goo.gl/1wk0, International Centre for Theoretical Physics, Trieste, Italy.
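[Editor's note] The stage-in / run / stage-out / clean-up cycle described above can be wrapped in a small job script. A minimal sketch in Python, under the assumption that /dev/shm is available on the node; the `run_staged` helper and the ph.x command line are hypothetical placeholders to be adapted to your scheduler:

```python
import shutil
import subprocess
import tempfile
from pathlib import Path

def run_staged(cmd, stage_in, stage_out_dir, scratch_root="/dev/shm"):
    """Run `cmd` inside a fresh scratch dir under `scratch_root`.

    Inputs listed in `stage_in` are copied into the scratch dir first;
    everything left in it afterwards is copied to `stage_out_dir`.
    The scratch dir is always removed, even if the job fails.
    """
    scratch = Path(tempfile.mkdtemp(dir=scratch_root))
    try:
        for src in stage_in:  # stage in: copy inputs into RAM disk
            dst = scratch / Path(src).name
            if Path(src).is_dir():
                shutil.copytree(src, dst)
            else:
                shutil.copy2(src, dst)
        subprocess.run(cmd, cwd=scratch, check=True)  # run the job there
        for item in scratch.iterdir():  # stage out all results
            dst = Path(stage_out_dir) / item.name
            if item.is_dir():
                shutil.copytree(item, dst, dirs_exist_ok=True)
            else:
                shutil.copy2(item, dst)
    finally:
        # always clean /dev/shm, per the advice above
        shutil.rmtree(scratch, ignore_errors=True)

# Hypothetical usage (command and file names are placeholders):
# run_staged(["mpirun", "ph.x", "-in", "ph.in"],
#            ["ph.in", "pseudo/"], "results/")
```

The `finally` block is the important part: it guarantees the RAM disk is emptied even when the calculation crashes or the job hits its wall-time handler.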